Impact of UC as a Service in the Business World

Unified Communications (UC) is the integration of real-time and non-real-time communication to provide instant and on-demand services to customers. During the last few years, UC has seen enormous growth due to its increasing use by businesses around the world. A report by Frost & Sullivan shows that UC is likely to grow at an annual rate of 32.7 percent through 2017 in North America alone. It is being used for many applications, such as hosted services and voice calls.

In light of this growth, some companies are evaluating whether they need UC for their business at all. The answer is a resounding yes, because UC brings many benefits to companies. Firstly, UC streamlines all communication channels within an organization. Without UC, businesses have to employ people around the clock to interact with customers over the phone or face to face. This is time-consuming and labor-intensive, and in many companies it is impractical to implement. Differences in time zones further complicate the communication process. This is why it is important to have a unified communication model in which all external communication with clients is streamlined.

Statistics by business2community.com show that 73 percent of business-related calls go to voice mail. If the mailbox is full, it is the business that suffers from the inefficiency, because customers will tend to move on to competitors. To avoid such shifts in customer loyalty and to strengthen the relationship with customers, it is important for businesses to have a sound UC model.

It is equally important to choose the right service from a provider who is willing to offer UC as a service. Today, many companies in the UC industry offer only the hardware or the infrastructure, not UC as a service. Going forward, every business is likely to use a combination of UC models because of the flexibility and convenience they bring. For this reason, businesses should enter into agreements with service providers who offer UC as a service.

Thanks to Unified Communications for the article.

Five Steps to Building Total Visibility and Control of your Cloud Infrastructure using StableNet®

Modern enterprise applications are engineered for agility and are heavily virtualized for frequent deployment over scalable IT infrastructure environments. The benefits of virtualized public/private cloud environments include flexibility, efficiency, and agile business enablement. Multiple factors, such as varying workload requirements or 'just-in-time' provisioning, require an accurate and scalable multi-functional management suite to mitigate the risk of compromising the environment. Infosim's StableNet® unified management system provides the total visibility and control required for your cloud hosting infrastructure environment.

Step 1 – WAN Access to your Cloud Environment

The Wide Area Network (WAN) is a key component in the delivery path of your cloud services. It is essential that the WAN routing devices that allow your public and private cloud-based traffic to flow in and out of your hosting environment have the appropriate array of management functionality in place to deliver the service visibility and application performance needed for measuring service availability and, ultimately, the customer experience. Optimization of the WAN is a necessity for application prioritization and accelerated delivery; therefore, maximum visibility and control of the WAN access points to and from your cloud environment are absolutely paramount for maintaining high service availability.

StableNet® is a unified management solution and is therefore ideally suited for cloud infrastructure management. For the WAN element of your cloud hosting environment, StableNet® provides an array of multi-functional capabilities that deliver the complete wrap required to monitor, police, and control your environment proactively. For the WAN infrastructure this includes the following (a short example of threshold-based performance polling appears after the list):

  • Inventory Management (e.g. Device types, cards, serial numbers etc.)
  • Topological Management (e.g. Network Topology Visualization)
  • Configuration Management (e.g. Config Backup\Restoration)
  • Policy Management (e.g. Configuration Policy & Governance Control)
  • Change Control (e.g. Configuration Lock-Down Change Control)
  • Performance Management (e.g. Bandwidth, QOS, System, Interface Performance with Threshold Management)
  • Fault Management with RCA (e.g. Fault Correlation Management with unique Root‐Cause‐Analysis)
  • Service Visualization (e.g. Visualize your entire cloud service environment)
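
To make the Performance Management capability above concrete, here is a minimal, illustrative sketch of threshold-based interface polling. It is not StableNet® code; it simply shells out to the standard net-snmp snmpget utility, and the device address, community string, interface index, and threshold are placeholder assumptions.

```python
import subprocess
import time

# Hypothetical device details; replace with values from your own environment.
DEVICE = "10.0.0.1"
COMMUNITY = "public"
IF_INDEX = 3                   # WAN interface to watch
UTILIZATION_THRESHOLD = 80.0   # percent

# Standard IF-MIB OIDs: 64-bit inbound octet counter and interface speed (Mbps).
OID_IN_OCTETS = f"1.3.6.1.2.1.31.1.1.1.6.{IF_INDEX}"   # ifHCInOctets
OID_IF_SPEED = f"1.3.6.1.2.1.31.1.1.1.15.{IF_INDEX}"   # ifHighSpeed

def snmp_get(oid: str) -> int:
    """Fetch a single numeric OID value using net-snmp's snmpget."""
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", DEVICE, oid], text=True
    )
    return int(out.strip())

def poll_utilization(interval_s: int = 60) -> float:
    """Sample the inbound counter twice and convert the delta to % utilization."""
    first = snmp_get(OID_IN_OCTETS)
    time.sleep(interval_s)
    second = snmp_get(OID_IN_OCTETS)
    speed_bps = snmp_get(OID_IF_SPEED) * 1_000_000
    bits = (second - first) * 8
    return 100.0 * bits / (speed_bps * interval_s)

if __name__ == "__main__":
    util = poll_utilization()
    state = "ALERT" if util > UTILIZATION_THRESHOLD else "OK"
    print(f"{state}: WAN interface {IF_INDEX} at {util:.1f}% utilization")
```

A production poller would also handle counter wrap, multiple interfaces, and alarm forwarding, which is exactly the kind of work a unified system takes off your hands.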

Having this level of management control is required for a truly complete proactive management service for your private and public cloud WAN access. With traditional or legacy element management systems, costs can escalate when you have to procure and support this many separate functional management systems, which ultimately increases CAPEX and OPEX.

However, StableNet® is a unified management system with a wealth of integrated functionality that therefore makes it extremely cost effective, driving down CAPEX and OPEX requirements while differentiating your management capabilities and keeping your operating costs to a minimum.

Step 2 – LAN Infrastructure Interconnectivity

The Local Area Network (LAN) is a critical element for network traffic delivery within the cloud hosting site or datacenter. Your entire WAN access and cloud server hosting platform environment will have been interconnected to your physical and virtual LAN switching network infrastructure. The LAN switching element is the backbone of your cloud hosting environment, so any problems within this area will seriously affect your customers' and partners' IT services. Implementation of a management system with a complete proactive monitoring solution will reduce the risk of potentially damaging service outages.

As stated in the previous section, StableNet® is a unified management solution and is therefore ideally suited for cloud infrastructure management. For the LAN element of your cloud hosting environment, StableNet® provides an array of multi-functional capabilities that deliver the complete wrap required to monitor, police, and control your LAN environment proactively, as well as to visualize the entire hosting environment from a single view detailing every LAN interconnection to both the WAN access points and the server hosting systems. For the LAN infrastructure this includes the following (a short example of automated configuration backup appears after the list):

  • Inventory Management (e.g. Device types, cards, serial numbers etc.)
  • Topological Management (e.g. Network Topology Visualization)
  • Configuration Management (e.g. Config Backup\Restoration)
  • Policy Management (e.g. Configuration Policy & Governance Control)
  • Change Control (e.g. Configuration Lock-Down Change Control)
  • Performance Management (e.g. Bandwidth, QOS, System, Interface Performance, with Threshold Management)
  • Fault Management with RCA (e.g. Fault Correlation Management with unique Root‐Cause‐Analysis)
  • Service Visualization (e.g. Visualize your entire cloud service environment)
  • 802.1q Trunk Monitoring (e.g. Monitoring of critical LAN trunk interconnects)
  • SPAN\RSPAN Monitoring (e.g. Monitoring of SPAN\RSPAN)
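
As a companion to the Configuration Management capability listed above, the sketch below shows one generic way to capture a switch's running configuration over SSH and store a timestamped backup. It is an illustration only, not how StableNet® performs configuration backup; it assumes the third-party Paramiko SSH library, and the hostname, credentials, and Cisco-style "show running-config" command are placeholders.

```python
import datetime
import pathlib

import paramiko  # third-party SSH library: pip install paramiko

# Hypothetical device and credentials.
HOST, USER, PASSWORD = "lan-core-sw1.example.net", "backup", "secret"

def backup_running_config(host: str, user: str, password: str,
                          out_dir: str = "config_backups") -> pathlib.Path:
    """SSH to a switch, capture its running configuration, and save it to disk."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password, look_for_keys=False)
    try:
        _, stdout, _ = client.exec_command("show running-config")
        config_text = stdout.read().decode()
    finally:
        client.close()

    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    path = pathlib.Path(out_dir)
    path.mkdir(exist_ok=True)
    backup_file = path / f"{host}-{stamp}.cfg"
    backup_file.write_text(config_text)
    return backup_file

if __name__ == "__main__":
    print("Saved", backup_running_config(HOST, USER, PASSWORD))
```

Note that many network operating systems require an interactive shell (or a purpose-built library such as Netmiko) rather than a single exec call; the sketch only illustrates the capture-and-timestamp pattern behind scheduled configuration backups.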

Step 3 – The Secure Infrastructure Environment

The security infrastructure surrounding the hosted WAN and LAN environment will typically consist of firewalls, intrusion detection and prevention systems (IDS/IPS), logging and analytical systems (SIEM/SEM, etc.), and other appliances. While many of these systems have specific vendor element managers for configuration, analytics, and reporting, they also support standard SNMP (Simple Network Management Protocol), system event and alarm logging, and other capabilities. StableNet® is vendor agnostic and supports many of these appliance types (F5, Cisco, Juniper, Checkpoint, etc.). It is therefore very important that these devices are managed in a way whereby the physical and virtual elements are visualized within the service topology, and the performance and change management controls are policed around threshold, event, and alarm management. Management and policing of specific configuration criteria, for example rule-set policies, are crucial for maintaining proactive service availability. StableNet® functionality also extends to extensive logging and analytical reporting capabilities, which can be deployed in high availability to maximize resilience and redundancy.
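
To illustrate the kind of event and alarm logging described above (independent of any vendor's collector, StableNet® included), the following sketch is a minimal UDP syslog listener that highlights messages that look like firewall rule-set or policy changes. The port, keywords, and pattern are assumptions chosen for illustration.

```python
import re
import socket

SYSLOG_PORT = 5514  # standard syslog is UDP/514, which requires elevated privileges
# For this illustration, treat messages mentioning rule/policy changes as notable.
RULESET_PATTERN = re.compile(r"(rule|policy|config).*(add|delete|change|modif)", re.I)

def listen() -> None:
    """Receive syslog datagrams and flag probable rule-set changes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", SYSLOG_PORT))
    print(f"Listening for syslog on UDP/{SYSLOG_PORT} ...")
    while True:
        data, (src, _) = sock.recvfrom(4096)
        message = data.decode(errors="replace").strip()
        tag = "[RULE-SET CHANGE?]" if RULESET_PATTERN.search(message) else "[event]"
        print(f"{tag} {src}: {message}")

if __name__ == "__main__":
    listen()
```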

Step 4 – The Physical & Virtual Hosting Platforms

The physical and virtual platforms are controlled and built via the automation and provisioning tools. Management of the hosting systems, both physical and virtual, their associated operating systems (e.g. Windows, Linux, Solaris, etc.), and the physical and virtual network interconnections into the LAN hosting infrastructure will also need to be configured for performance, threshold, event, and alarm management, with specific service KPIs and SLAs. Visualization of both the physical and virtual environments, together with their real-time performance, is critical for proactive management. Therefore, the management wrap around the virtualized platforms needs to be applied quickly, as part of the automated build process.

StableNet's® unified management system provides the complete array of functional capabilities required for monitoring and managing the hosting platforms within a cloud environment. More importantly, it can also be integrated with a plethora of provisioning and automation tools, whose enriched capabilities make it very attractive in this space. The StableNet® API provides seamless integration to and from other functional tools, so that as systems are built, the full proactive management capability is provisioned to your service assurance criteria.
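
The snippet below is a hypothetical sketch of that hand-off: at the end of an automated VM build, the orchestrator registers the new host with the management platform over a REST API so that monitoring begins as soon as the system exists. The endpoint URL, payload fields, and token are invented for illustration and are not the actual StableNet® API.

```python
import requests  # pip install requests

# Hypothetical management-system endpoint and credentials.
MGMT_API = "https://mgmt.example.net/api/v1/devices"
API_TOKEN = "REPLACE_ME"

def register_new_vm(hostname: str, ip: str, service: str) -> None:
    """Called by the build pipeline once a VM is up: enrol it for monitoring."""
    payload = {
        "hostname": hostname,
        "address": ip,
        "service": service,                          # ties the VM to a service view
        "monitoring_profile": "cloud-host-default",  # assumed KPI/SLA template name
    }
    resp = requests.post(
        MGMT_API,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    print(f"{hostname} registered for monitoring (HTTP {resp.status_code})")

if __name__ == "__main__":
    register_new_vm("web-042", "10.20.30.42", "customer-portal")
```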

Step 5 – Application Performance Monitoring

The performance of the applications your customers are using is crucial to the customer experience, as it is the experience of the service that is paramount to the success of your business. Being able to measure the performance of applications used by both public and private cloud customers, and to provide monitoring dashboards and reporting of the service performance, is a standard requirement for any cloud service provider.

Many APM products in the marketplace today have a wealth of functionality that brings additional complexity and ultimately results in a costly operating model. StableNet's® APM functionality is already integrated with its other capabilities, which makes troubleshooting and identification of root cause much easier from a single product.

Many applications today are browser- or Web-based. StableNet® can run pre-defined scripts that interact with a Web-based application and perform a suite of metric tests that are collected and analyzed for specific performance-related and problem-determination monitoring and reporting requirements.

  • Web-based Application Monitoring
    • URL\Web Page Availability and Response Monitoring
    • Historical Performance
    • Performance Trending
    • Performance Analysis

Your business customers rely on Web-based applications to generate sales, present market research, or provide services. The complexity of these applications means they are susceptible to performance issues and failures; therefore, it is imperative for the user experience that the application functionality be monitored in a way where the deployment of transactional, script-based monitoring (a minimal sketch follows the list below) enables:

  • Performance Monitoring of each Application Transaction Step
  • Identification of Application Performance Congestion and Failures
  • Optimization of the User Experience
  • Root-Cause-Analysis through Event & Alarm Notification Management
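
As a rough illustration of transactional, script-based monitoring (not StableNet®'s scripting engine), the sketch below times each step of a simple web transaction with the requests library and reports per-step latency and failures. The URLs, form fields, and credentials are placeholders.

```python
import time

import requests  # pip install requests

# Hypothetical application URLs and credentials for the scripted transaction.
STEPS = [
    ("load login page", "GET", "https://app.example.com/login", None),
    ("submit login", "POST", "https://app.example.com/login",
     {"user": "monitor", "password": "secret"}),
    ("open dashboard", "GET", "https://app.example.com/dashboard", None),
]

def run_transaction() -> None:
    """Execute each transaction step, measure its response time, flag failures."""
    session = requests.Session()
    for name, method, url, data in STEPS:
        start = time.perf_counter()
        try:
            resp = session.request(method, url, data=data, timeout=10)
            status = "OK" if resp.ok else f"HTTP {resp.status_code}"
        except requests.RequestException as exc:
            status = f"FAILED ({exc.__class__.__name__})"
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{name:<18} {elapsed_ms:7.1f} ms  {status}")

if __name__ == "__main__":
    run_transaction()
```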

Deployment of APM will drive down your MTTR (Mean-Time-to-Repair) and provide you the comfort of knowing your customers are experiencing great service application performance. It will also provide you with the confidence to offer customer application performance-based SLAs (Service Level Agreements) that will drive existing customer sales and increase your customer base.

Conclusion

There are many aspects to managing a cloud hosting environment. Stacking up individual element managers to perform specific functions for your OSS (Operational Support Systems) requirements is costly both in capital expenditure (platform, software, and annual software support for each element management system) and in operational expenditure (increased headcount and tool specialization for each system). The strategy for the 21st century is to deploy unified management solutions, whereby you maximize the functional capabilities of a single system, thus driving down both your capital and operational expense. More important, though, is the benefit this provides: a unified management platform is a suite of multi-functional capabilities in a single product, with seamless integration and cross-correlation built in, enabling what we class as True-Visualization of your entire End-to-End cloud hosting environment.

Thanks to Infosim for the article.

xBalancer Helps a Global Cloud Provider Maintain Uninterrupted Security and Streamline Traffic Monitoring While Scaling to 10G

Company relies on xBalancer to load balance and prevent overburdening of monitoring tools, while enabling tools to be added quickly and cost-efficiently in an intensive, high-demand environment

Global Cloud Provider's Upgrade Triggers the Urgent Need for Load Balancing: A Net Optics xBalancer Customer Testimonial

The momentum toward 10G network links brings major challenges; in particular, monitoring critical traffic thoroughly in a high-speed, high-volume environment and avoiding the risk of overburdening vital tools. Companies prevent this overburdening by distributing the traffic load to multiple tools.

The Security Group for this well-known Global Cloud Provider (“The Provider”) uses three different types of out-of-band security tools, such as Intrusion Detection Systems (IDSs), to monitor a one-Gigabit Internet link coming into the company. (The application is for the corporate network and not any customer-facing service.) The security team employs three tool types from three different vendors, because each tool contributes its own unique strengths and enables more thorough security coverage.

A Sharp Traffic Increase On the Link Brings Monitoring Concerns

The Company was understandably worried that its current tools wouldn't be able to handle the increased traffic caused by upgrading from 1G to 10G. Such intensive pressures threaten to overwhelm and limit the effectiveness of the very devices that carry out the monitoring. Unable to keep up with increasing loads, these overburdened tools can put service-level agreements at risk and expose the network to threats. To deal with this problem, a company may have to invest in costly new tools that are engineered for the 10G environment. The Company also faced another challenge: while upgrading the network to 10G and using 2 x 1G devices, each device was suffering from microbursts, so distributing the traffic with xBalancer helped address this problem as well.

Fortunately, during this initial phase of the upgrade, the Company's security team determined that two tools have enough bandwidth to handle the anticipated traffic. Since the link will carry less than 2 Gbps for a while, the company wishes to load balance traffic to two tools, and to do so three times—once for each of the three tool types.

The advantages of replicating 1G tools already at work in the IT environment are evident:

This ingenious, low-cost, efficient approach optimizes existing resources by allowing multiple 1G tools to share the load caused by processing high traffic volumes, while leveraging existing processes and operator training.

xBalancer Is the Ideal Solution for Distributing Traffic In the 10G Arena

The versatile Net Optics xBalancer™ is purpose-built to share security devices across multiple links, offering 24 SFP+ ports and integrated data rate conversion in a 1U form factor. It aggregates traffic from multiple 1G and 10G links and distributes it to 1G or 10G tools. xBalancer enables two or more appliances to be deployed in parallel with traffic balanced between them—from 10G links to multiple 1G tools—in either inline or out-of-band topologies.

With xBalancer, even the heaviest traffic loads sail through to IPSs and traffic recorders in the 10G data center. xBalancer’s innovative engineering enables it to distribute traffic to all manner of monitoring tools, including:

  • IPSs
  • Firewalls
  • Traffic recorders
  • Web accelerators
  • Application Performance Management devices
  • Intrusion Detection Systems
  • Protocol analyzers

xBalancer takes traffic from any network port or aggregated set of network ports and distributes it to two, three, four, or up to eight monitor ports for balancing according to IP address, port, protocol, VLAN, and MAC address, or other parameters.

By enabling already-integrated 1G tools to fill an expanded role, xBalancer helps this Global Cloud Provider handle its increasing traffic volumes without investing right away in new 10G capital equipment. Not only does this minimize CAPEX, it also eliminates the operational expense of implementing the new tools and training users. Best of all, xBalancer dramatically raises the efficiency, security and availability of the network itself by reducing or bypassing IPS failures.

xBalancer Works with Fiber Taps to Create A Flexible Solution

Yet another factor that makes xBalancer the best fit is that the Provider can now partition and configure it into multiple independent load balancers. The link can be tapped using a Net Optics Fiber Tap, which then sends a copy of the traffic to xBalancer. The xBalancer proceeds to make three copies of it and sends a copy to each of the three load balancers. Thus, if the three original tools were called A, B, and C, we now have a second "A" tool, a second "B" tool, and a second "C" tool. The first load balancer splits traffic between the two A tools, sending half of the traffic to each. The second load balancer splits a second copy of the same traffic to the two B tools, and the third load balancer splits another copy to the two C tools. Importantly, the traffic is distributed in a flow-coherent manner, meaning that packets traveling from one computer to another are always guaranteed to be sent to the exact same tool.
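
Flow-coherent distribution can be pictured with a small sketch: hash the fields that identify a conversation and use the result to pick a tool, so every packet of the same flow lands on the same tool. This illustrates the general technique only, not xBalancer's internal algorithm.

```python
import hashlib

TOOLS = ["A1", "A2"]  # the two "A" tools behind one of the load balancers

def pick_tool(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: str) -> str:
    """Map a 5-tuple to a tool; identical flows always hash to the same tool."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha1(key).digest()[:4], "big")
    return TOOLS[digest % len(TOOLS)]

# Every packet of this flow goes to the same tool, preserving session context.
print(pick_tool("192.0.2.10", "203.0.113.5", 51515, 443, "tcp"))
print(pick_tool("192.0.2.10", "203.0.113.5", 51515, 443, "tcp"))  # same result
```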

xBalancer Simplifies the Task of Adding Tools

In architecting a solution, the Global Cloud Provider must assume that traffic on the link is going to rise. xBalancer streamlines the addition of more tools as traffic on the link ascends. For example, if the pair of “A” tools can no longer handle all the traffic, a third “A” tool can be added and its load balancer configured to divide the traffic among all three tools.

In addition, the ability to upgrade the tool sets independently adds ultimate flexibility. Because each tool set—"As", "Bs", and "Cs"—can be upgraded independently, the "A" set can run three tools while the "B" and "C" sets still have only two each.

High Availability When Failure Is Not an Option

As a leader in its market, the Global Cloud Provider is under constant competitive pressure. New offerings are being launched by major companies to try to capture this Provider’s customers and ranking. This means that availability must be uncompromising if the Company is to thrive. xBalancer supports High Availability (HA) modes including N+M redundancy and link-state awareness—again, these can be applied independently for A, B, and C tools. For example, two more tools could be added to B and configured as three active tools and one standby; if any of the active “B” tools fails, the traffic going to it can be switched over to the standby tool. Meanwhile a third tool can be added to C and configured link-state aware, so that all three tools are active. If a tool fails, however, the traffic headed towards it can be reallocated across the two remaining tools.
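
Building on the previous sketch, the fragment below shows, in simplified form, the two recovery behaviors described above: promoting a standby tool under N+M redundancy, and rehashing flows across the survivors when a link-state-aware group loses a member. It is conceptual only, not the appliance's implementation.

```python
import hashlib

def pick_tool(flow_key: str, tools: list) -> str:
    """Hash a flow identifier onto whichever tools are currently active."""
    digest = int.from_bytes(hashlib.sha1(flow_key.encode()).digest()[:4], "big")
    return tools[digest % len(tools)]

# N+M redundancy: three active "B" tools plus one standby.
active_b, standby_b, failed = ["B1", "B2", "B3"], ["B4"], "B2"
active_b = [t for t in active_b if t != failed] + standby_b[:1]  # promote standby

# Link-state-aware "C" group: no standby, so flows are rehashed across survivors.
active_c = [t for t in ["C1", "C2", "C3"] if t != "C3"]          # C3 has failed

flow = "192.0.2.10->203.0.113.5:443/tcp"
print("B traffic now goes to:", pick_tool(flow, active_b))
print("C traffic now goes to:", pick_tool(flow, active_c))
```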

A Superior Solution Delivers Capacity, Competitiveness and Customers

Engineering the Provider’s solution using xBalancer delivers dramatic advantages, which become even more striking when a competing architecture is considered: Imagine a solution in which the outputs of a single load balancer are replicated to an A tool, a B tool, and a C tool. In such a situation, adding another A tool would make the load balancer a three-way design, so a user would have to add an additional B tool and C tool. This solution would only be proposed by vendors whose devices cannot be partitioned into separate independent load balancers.

xBalancer Now Offers Enhanced Efficiency and Management Features

xBalancer’s TapFlow™ filtering technology enables the Provider’s monitoring tools to handle even more traffic, more links, and more protocols. TapFlow filtering sends to each tool only the traffic that addresses its particular purpose—and filters traffic at full 10 Gbps line speeds.

Additionally, xBalancer provides advanced availability features such as link-state awareness and Heartbeat packet assessment to support mission-critical monitoring. Heartbeat packets allow the Provider's IT team to analyze attached appliances and reallocate traffic. Should one tool fail, traffic is automatically distributed to the remaining tools until the failed tool is repaired and back online. This minimizes loss of monitoring capability in most failure scenarios. Optimized debug logging, plus the CLI commands "capture" and "syslog", makes managing xBalancer easier and smoother for the team.

Thanks to Net Optics for the article.

5 Steps To Prepare Your Network For Cloud Computing

To the novice IT manager, a shift to cloud computing may appear to offer great relief. No longer will their team have to worry as much about large infrastructure deployments, complex server configurations, and troubleshooting delivery of internally hosted applications. But diving a little deeper reveals that cloud computing also delivers a host of new challenges.

Through cloud computing, organizations perform tasks or use applications that harness massive third-party computing and processing power via the Internet cloud. This allows them to quickly scale services and applications to meet changing user demand and avoid purchasing network assets for infrequent, intensive computing tasks.

While providing increased IT flexibility and potentially lowering costs, cloud computing shifts IT management priorities from the network core to the WAN/Internet connection. Cloud computing extends the organization's network via the Internet, tying into other networks to access services, applications, and data. Understanding this shift, IT teams must adequately prepare the network and adjust management styles to realize the promise of cloud computing.

Here are five key considerations for organizations planning, deploying, and managing cloud computing applications and services:

1. Conduct Pre-Deployment and Readiness Assessments

Determine existing bandwidth demands per user, per department, and for the organization as a whole. With the service provider’s help, calculate the average bandwidth demand per user for each new service you plan to deploy. This allows the IT staff to appropriately scale the Internet connection and prioritize and shape traffic to meet the bandwidth demands of cloud applications.
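
As a simple worked example of this sizing exercise (with invented per-user figures), the calculation below estimates the Internet capacity needed for a set of cloud services and adds headroom for growth.

```python
# Hypothetical per-user bandwidth figures (kbps) for each planned cloud service.
SERVICES = {
    "hosted email": 30,
    "CRM (SaaS)": 60,
    "voice (UC)": 100,
    "file sync/backup": 150,
}

USERS = 400          # total staff
CONCURRENCY = 0.6    # fraction of users active at the busy hour
HEADROOM = 1.3       # 30% margin for growth and bursts

per_user_kbps = sum(SERVICES.values())
required_mbps = USERS * CONCURRENCY * per_user_kbps * HEADROOM / 1000

print(f"Per-user demand: {per_user_kbps} kbps")
print(f"Recommended Internet capacity: {required_mbps:.0f} Mbps")
# With these assumptions: 400 * 0.6 * 340 kbps * 1.3 = roughly 106 Mbps
```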

2. Shift the Network Management Focus

Cloud computing’s advantage lies in placing the burden of applications and data storage and processing on another network. This shifts management priorities from internal data concerns to external ones. Currently, organizations have larger network pipes and infrastructure at the network core, where the computer processing power is located.

With cloud computing and Software as a Service (SaaS) applications, the importance of large bandwidth capacity shifts away from the core to the Internet connection. This shift in focus will significantly impact the decisions you make, from whether your monitoring tools adequately track WAN performance to the personnel and resources you devote to managing WAN-related issues.

3. Determine Priorities

With a massive pipeline to the Internet handling online applications and processing, data prioritization becomes critical. Having an individual IP consuming 30 percent of the organization’s bandwidth becomes unworkable. Prioritize cloud and SaaS applications and throttle traffic to make sure bandwidth is appropriately allocated.
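
To show how an over-consuming host might be spotted in practice, here is a minimal sketch that totals bytes per source IP from pre-collected flow records and flags any host exceeding a configurable share of the link. The flow data and threshold are illustrative.

```python
from collections import Counter

SHARE_THRESHOLD = 0.30  # flag any single IP above 30% of observed traffic

# Illustrative flow records: (source IP, bytes transferred).
flows = [
    ("10.1.4.23", 1_800_000_000),
    ("10.1.7.8", 240_000_000),
    ("10.1.2.99", 310_000_000),
    ("10.1.4.23", 1_200_000_000),
    ("10.1.9.41", 450_000_000),
]

per_ip = Counter()
for ip, nbytes in flows:
    per_ip[ip] += nbytes

total = sum(per_ip.values())
for ip, nbytes in per_ip.most_common():
    share = nbytes / total
    note = "  <-- candidate for throttling or reprioritization" if share > SHARE_THRESHOLD else ""
    print(f"{ip:<12} {share:6.1%}{note}")
```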

4. Consider ISP Redundancy

Thoroughly assess the reliability of your existing Internet Service Provider. When the Internet connection is down or degraded, business productivity will also be impacted. Consider having multiple providers in case one has a performance issue.

5. Hold Service Providers Accountable

Today, if a problem occurs within the network core, the engineer can monitor the entire path of network traffic from the client to the server in order to locate the problem source. With service providers controlling the majority of information in cloud computing, it becomes more difficult to monitor, optimize, and troubleshoot connections.

As a result, Service Level Agreements (SLAs) take on greater importance in ensuring expected network and Internet performance levels. SLAs should outline the expected Internet service levels and the performance obligations service providers must meet, and should define unacceptable levels of dropped frames and other performance metrics.

An SLA by itself is not enough to guarantee your organization receives the level of service promised. Since it is not in the provider's interest to inform a client when its quality of service fails, you must rely on an independent view of WAN link connections. Utilize a network analyzer with a WAN probe to verify quality of service and gauge whether the provider is meeting SLA obligations.
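
The sketch below illustrates that independent verification step: compare metrics gathered by your own analyzer or WAN probe against the SLA targets and report any violations. Both the measured values and the targets are invented for illustration.

```python
# SLA targets agreed with the provider (illustrative values).
SLA_TARGETS = {
    "latency_ms": {"max": 80.0},
    "packet_loss_pct": {"max": 0.5},
    "jitter_ms": {"max": 20.0},
    "availability_pct": {"min": 99.9},
}

# Metrics as measured by your own probe over the reporting period.
measured = {
    "latency_ms": 92.4,
    "packet_loss_pct": 0.2,
    "jitter_ms": 11.0,
    "availability_pct": 99.95,
}

violations = []
for metric, bounds in SLA_TARGETS.items():
    value = measured[metric]
    if "max" in bounds and value > bounds["max"]:
        violations.append(f"{metric}: {value} exceeds maximum {bounds['max']}")
    if "min" in bounds and value < bounds["min"]:
        violations.append(f"{metric}: {value} below minimum {bounds['min']}")

print("SLA violations:" if violations else "All SLA targets met.")
for v in violations:
    print(" -", v)
```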

Cloud computing is more than the latest IT buzzword; it’s a real way for companies to quickly obtain greater network flexibility, scalability, and computing power for less money. But like most technologies, these services are not without risk and require proper preparation and refocused management efforts to succeed.

Thanks to Network Instruments for the article.

The Net Optics Phantom Virtualization Tap™ captures data passing between virtual machines (VMs) and sends traffic of interest to virtual and physical monitoring tools of choice.

Net Optics and VMware

The Virtual Monitoring Challenge

Enterprises have been utilizing Tap solutions for network traffic access for many years. Traffic capture, analysis, replay, and logging are now part of every well-managed network environment. In recent years, the significant shift to virtualization—with penetration exceeding 50%—has yielded great benefits in efficiency. However, today's virtualization-based deployments create challenges for network security, compliance, and performance monitoring. Inter-VM traffic is optimized to speed up connections and minimize network utilization, which leaves it invisible to physical tools that cannot easily extend into the new environments. Costly new virtualization-specific tools, plus training, can erode the economic benefits and cost savings of virtualizing. In addition, many such tools suffer from limited throughput, hypervisor incompatibility, and excessive resource utilization.

Virtual infrastructures use hypervisor technology to deploy multiple computing environments on a single physical (hardware) server, or across a group of physical servers. Traditional Taps cannot see the traffic between the VMs that reside on the same hypervisor, nor can they “follow” specific VMs as automation moves them from one hypervisor to another to optimize efficiency and availability.

Visibility is further reduced by the complexity of blade servers, with each blade running multiple VMs on a hypervisor. Traffic between blades, which runs on a dedicated backplane, is also invisible to the physical network and its attached tools.

The Phantom Virtualization Tap Solution

The Phantom suite of software products provides 100% visibility of virtual network traffic, including the unseen inter-VM traffic on the ESXi stack. This milestone solution has now expanded to support the industry's leading hypervisor. The Phantom Monitor installs in the hypervisor kernel, above the virtual switch (the software implementation of a switching mechanism that manages communications between virtual network devices and works identically to a physical switch). The Phantom Monitor can replicate all traffic within the virtual switch, apply smart TapFlow™ filtering, and send traffic of interest to any monitoring tools of choice. It can even pass the replicated traffic to a physical port so physical tools can monitor the data. Virtual traffic is bridged to the physical world in an encapsulated tunnel that can be terminated by a Net Optics xFilter™, Phantom HD™, or any capable termination point of your choosing.
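
As a conceptual illustration of tapping traffic of interest and bridging it to a remote tool through an encapsulated tunnel (not the Phantom Tap's actual mechanism), the sketch below uses the Scapy library to sniff matching packets and re-send each one wrapped in GRE toward a termination point. The interface name, filter, and tunnel destination are assumptions, and it must run with root privileges.

```python
from scapy.all import GRE, IP, send, sniff  # pip install scapy; run as root

MONITOR_IFACE = "vnet0"               # virtual interface carrying inter-VM traffic
TRAFFIC_OF_INTEREST = "tcp port 80"   # BPF filter: only mirror web traffic
TUNNEL_ENDPOINT = "10.99.0.5"         # where the physical tool terminates the tunnel

def mirror(pkt) -> None:
    """Wrap a copy of the captured packet in IP/GRE and ship it to the tool."""
    if IP in pkt:
        tunneled = IP(dst=TUNNEL_ENDPOINT) / GRE(proto=0x0800) / pkt[IP]
        send(tunneled, verbose=False)

# Capture matching frames on the monitored interface and mirror each one.
sniff(iface=MONITOR_IFACE, filter=TRAFFIC_OF_INTEREST, prn=mirror, store=False)
```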

Solution Highlights

  • 100 percent visibility of traffic between Virtual Machines (VMs) and inter-blade visibility
  • Installs in hypervisor kernel for full traffic visibility
  • Enables visibility and control of network traffic in VMware vSphere ESX/ESXi Server 4.X/5.X
  • Generates Layer 2-4 statistics (packet count, utilization, etc.)
  • TapFlow™ multi-layer L2-4 filtering engine
  • Extends monitoring and access into the Inter-VM networking layer (East-West Traffic)
  • Applies existing physical monitoring tools, processes, and procedures to the virtual network
  • No interference with the data stream or VMs (no agents or services on VMs)
  • No modifications needed in VMs
  • Replicates Inter-VM traffic to virtual and physical monitoring tools of choice
  • Sends mirrored traffic out physical NICs in encapsulated tunnels

Net Optics Phantom Virtualization Tap

Net Optics and VMware Team Up to Deliver Full Visibility, Automation, Flexibility and Scalability for Comprehensive Monitoring for Virtual Environments.

Unique Capabilities

The Phantom Virtualization Tap provides these unique capabilities to your VMware virtual computing environment:

  • A solution that performs network monitoring at the hypervisor kernel level providing full view of the traffic flowing between VMs, regardless of their current physical locations
  • Implemented at the kernel, it delivers the ability to differentiate between specific VM instances in replicated environments and to keep monitoring and logging the VMs even as they are moved between hypervisors (different physical servers or locations)
  • The industry’s only integrated solution for converged (virtual and physical) environments. Fully hypervisor-agnostic and virtual switch-agnostic, the Phantom Virtualization Tap works seamlessly with Net Optics’ Director series of data monitoring switches
  • Net Optics Indigo Pro™—a unified network management tool—provides an easy-to-use, Web-based GUI interface

Thanks to Net Optics for the article.

Simply Explained: Software Defined Networks

While you may not be that familiar with Software Defined Networking (SDN), it may be the next “big thing.” As with other popular technologies like cloud and big data, there is no clear consensus on the exact definition of SDN. Also like these technologies, SDN initiatives are likely to be pushed by IT folks outside the network team, and can greatly affect network visibility.

While an SDN future may seem a few years away, given the clout of the vendors and users pushing these technologies, including Microsoft, Google, VMware and Verizon, it’s important to be aware of the concepts and terminology behind the technology.

What is SDN?

As cloud has abstracted storage and virtualization has separated applications from servers, SDN attempts to separate the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). So, rather than packet route decisions being determined by local infrastructure one hop at a time, routing decisions are made by a centralized controller server.
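
To make the control-plane/data-plane split concrete, here is a toy sketch of a centralized controller: it holds the whole topology, computes an end-to-end path, and pushes a per-switch forwarding rule for each hop, while the switches themselves would only match and forward. The topology and rule format are invented; real controllers speak southbound protocols such as OpenFlow.

```python
import networkx as nx  # pip install networkx

# Global topology known only to the controller (switch-to-switch links).
topology = nx.Graph()
topology.add_edges_from([
    ("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3"), ("s3", "s5"),
])

def compute_flow_rules(src_switch: str, dst_switch: str, flow_match: str) -> dict:
    """Centralized decision: pick one shortest path, emit a rule for every hop."""
    path = nx.shortest_path(topology, src_switch, dst_switch)
    rules = {}
    for here, nxt in zip(path, path[1:]):
        rules[here] = {"match": flow_match, "action": f"forward to {nxt}"}
    return rules

# The controller decides the whole path; each switch just receives its one rule.
for switch, rule in compute_flow_rules("s1", "s5", "dst_ip=10.0.5.20").items():
    print(f"push to {switch}: {rule}")
```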

Benefits

Operating from a holistic, network-wide approach, the benefits of SDN are threefold:

  • Performance and traffic flow become more efficient, as decisions are made at a network-wide level by the controller rather than at the device level, where routing decisions are based only upon the links between the forwarding device and adjacent devices.
  • Policy and configuration management can be done at a centralized level rather than device-by-device.
  • Network devices, such as switches, can be simplified and focused purely on packet forwarding, rather than carrying a heavy, complex, and expensive feature set.

Key Concepts

Controller: Centralized device that communicates with all the network devices in a domain, determines the topology, and programs network connectivity paths from a centralized view. The network is programmed and managed at the network level rather than through individual devices.

Switching: In SDN environments, hardware and software switches forward traffic as dictated by the controller. The importance and capabilities of software switches will increase within SDN. Hardware switches will likely be dedicated purely to forwarding large amounts of traffic in SDN environments.

Virtual Overlay Networks: Overlays are used to create virtual networks that are logically separated from each other while sharing the same underlying physical network. Each packet is encapsulated inside another packet and sent to a tunnel endpoint, where it is de-encapsulated. The original packet is then delivered to the destination.
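
A stripped-down sketch of that encapsulation idea (ignoring real header formats such as VXLAN) is shown below: the original packet becomes the payload of an outer packet addressed to the tunnel endpoint, which unwraps it and delivers the inner packet unchanged. All names and addresses are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: object  # application data, or another Packet when used as an overlay

def encapsulate(inner: Packet, tunnel_src: str, tunnel_dst: str, vni: int) -> Packet:
    """Wrap a tenant packet in an outer packet addressed to the tunnel endpoint."""
    return Packet(src=tunnel_src, dst=tunnel_dst,
                  payload={"vni": vni, "inner": inner})

def decapsulate(outer: Packet) -> Packet:
    """At the tunnel endpoint, strip the outer header and recover the original."""
    return outer.payload["inner"]

original = Packet("10.1.0.5", "10.1.0.9", "GET /index.html")
outer = encapsulate(original, "192.168.50.1", "192.168.50.2", vni=5001)
assert decapsulate(outer) == original  # delivered unchanged to the destination
print(decapsulate(outer))
```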

Net-Net for Network Teams

While we’re still several years away from full-fledged SDN, conversations will likely start in 2014, potentially outside of the network team’s view. Try to keep up with SDN plans in your organization so that you can encourage monitoring visibility, and be a part of the initial design conversation rather than inheriting issues post-implementation. Additionally, while networks may become more robust and automated through SDN, there will always be a need for in-depth packet capture and analysis. This will help you assess long-term performance and perform root cause analysis to pinpoint the source of any service interruptions.

SDN Resources

For more detail on SDN concepts, technology, and deployments, check out the following link:

Network World: In-Depth SDN Guide

Thanks to Networks Instruments for the article.

Allstream connects 3,000 buildings to IP fibre network

Canadian enterprise telecoms provider Allstream, part of MTS Allstream, has announced that it has connected more than 3,000 buildings to its 30,000km-plus nationwide fibre-optic IP network. The milestone represents a 50% increase in on-net IP fibre-fed buildings compared to three years ago. In 2010 Allstream initiated a targeted multi-year investment programme to increase its profitability by expanding the IP fibre network to nearby multi-tenant buildings that could be connected at a low cost. Allstream has now exceeded its plans, with fibre-fed buildings reaching over 3,000, up from 2,000 buildings connected in 2010. ‘In 2014, we will continue to make strategic investments to expand Allstream’s network reach while we sign up more customers in our existing buildings,’ a press release from the operator read. Allstream’s fibre-optic network offers IP connectivity, digital switching, Ethernet-based services and advanced internet security technologies and services, including right-of-way access in major Canadian city centres and nine cross-border connections to the United States.

Thanks to TeleGeography for the article.