Impact of UC as a Service in the Business World

Unified Communications (UC) is the integration of real-time and non-real-time communication to provide instant and on-demand services to customers. Over the last few years, UC has seen enormous growth as businesses around the world adopt it. A report by Frost & Sullivan (News – Alert) shows that UC is likely to grow at an annual rate of 32.7 percent through 2017 in North America alone. It is being used in many forms, such as hosted services and voice calling.

In light of this growth, some companies are evaluating whether they need UC for their business at all. The answer is a resounding yes, because UC brings many benefits. First, UC streamlines all communication channels within an organization. Without UC, businesses have to employ people around the clock to interact with customers over the phone or face to face, which is time-consuming, labor-intensive, and in most companies impossible to implement. Differences in time zones further complicate the communication process. This is why it is important to have a unified communication model in which all external communication with clients is streamlined.

Statistics show that 73 percent of business-related calls go to voice mail. If the mailbox is full, it is the business that suffers from the inefficiency, because customers will tend to move on to competitors. To avoid such shifts in customer loyalty and to strengthen customer relationships, it is important for businesses to have a sound UC model.

It is equally important to choose the right service from a provider willing to offer UC as a service. Today, many companies in the UC industry offer only the hardware or the infrastructure, not UC as a service. Going forward, every business is likely to use a combination of UC models because of the flexibility and convenience they bring. For this reason, businesses should enter into agreements with service providers who offer UC as a service.

Thanks to Unified Communications for the article.

Five Steps to Building Total Visibility and Control of your Cloud Infrastructure using StableNet®

Modern enterprise applications are engineered for agility and are heavily virtualized for frequent deployment over scalable IT infrastructure environments. The benefits of virtualized public/private cloud environments include flexibility, efficiency, and agile business enablement. Factors such as varying workload requirements and "just-in-time" provisioning require an accurate and scalable multi-functional management suite to mitigate the risk of compromising the environment. Infosim's StableNet® unified management system provides the total visibility and control required for your cloud hosting infrastructure environment.

Step 1 – WAN Access to your Cloud Environment

The Wide Area Network (WAN) is a key component in the delivery path of your cloud services. It is essential that the WAN routing devices that allow your public and private cloud-based traffic to flow in and out of your hosting environment have the appropriate management functionality in place to deliver the service visibility and application performance measurement needed for gauging service availability and, ultimately, the customer experience. WAN optimization is a necessity for application prioritization and accelerated delivery; therefore, maximum visibility and control of the WAN access points to and from your cloud environment are paramount for maintaining high service availability.

StableNet® is a unified management solution and is therefore ideally suited for cloud infrastructure management. For the WAN element of your cloud hosting environment StableNet® provides an array of multi‐functional capabilities that deliver the complete wrap required to monitor, police, and control your environment proactively. For the WAN infrastructure this includes:


  • Inventory Management (e.g. device types, cards, serial numbers)
  • Topological Management (e.g. network topology visualization)
  • Configuration Management (e.g. config backup/restoration)
  • Policy Management (e.g. configuration policy and governance control)
  • Change Control (e.g. configuration lock-down change control)
  • Performance Management (e.g. bandwidth, QoS, system, and interface performance with threshold management)
  • Fault Management with RCA (e.g. fault correlation with unique Root-Cause-Analysis)
  • Service Visualization (e.g. visualize your entire cloud service environment)

Having this level of management control is required for a truly proactive management service for your private and public cloud WAN access. With traditional or legacy element management systems, costs can escalate when you have to procure and support this many functional management systems, which ultimately increases both CAPEX and OPEX.

However, StableNet® is a unified management system with a wealth of integrated functionality, making it extremely cost effective: it drives down CAPEX and OPEX requirements while differentiating your management capabilities and keeping operating costs to a minimum.
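
The threshold-driven performance management listed above can be sketched conceptually. The function and threshold values below are illustrative assumptions, not the StableNet® API:

```python
# Minimal sketch of threshold-based performance alarming (illustrative only;
# names and thresholds are hypothetical, not the StableNet API).

def evaluate_thresholds(samples, warn=0.7, critical=0.9):
    """Classify utilization samples (0.0-1.0) against warn/critical thresholds."""
    alarms = []
    for device, utilization in samples.items():
        if utilization >= critical:
            alarms.append((device, "CRITICAL", utilization))
        elif utilization >= warn:
            alarms.append((device, "WARNING", utilization))
    return alarms

samples = {"wan-rtr-01": 0.95, "wan-rtr-02": 0.75, "wan-rtr-03": 0.40}
for device, severity, value in evaluate_thresholds(samples):
    print(f"{device}: {severity} (utilization {value:.0%})")
```

In a real deployment, the utilization samples would come from SNMP polling, and alarms would feed the fault-correlation engine rather than being printed.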

Step 2 – LAN Infrastructure Interconnectivity

The Local Area Network (LAN) is a critical element for network traffic delivery within the cloud hosting environment or datacenter. Your entire WAN access and cloud server hosting platform environment will have been interconnected to your physical and virtual LAN switching network infrastructure. The LAN switching element is the backbone of your cloud hosting environment, so any problems in this area will seriously affect your customers' and partners' IT services. Implementing a management system with a complete proactive monitoring solution will reduce the risk of potentially damaging service outages.

As stated in the previous section, StableNet® is a unified management solution and is therefore ideally suited for cloud infrastructure management. For the LAN element of your cloud hosting environment, StableNet® provides an array of multi-functional capabilities that deliver the complete wrap required to monitor, police, and control your LAN environment proactively, as well as to visualize the entire hosting environment from a single view detailing all LAN interconnections to both the WAN access points and the server hosting systems. For the LAN infrastructure this includes:


  • Inventory Management (e.g. device types, cards, serial numbers)
  • Topological Management (e.g. network topology visualization)
  • Configuration Management (e.g. config backup/restoration)
  • Policy Management (e.g. configuration policy and governance control)
  • Change Control (e.g. configuration lock-down change control)
  • Performance Management (e.g. bandwidth, QoS, system, and interface performance with threshold management)
  • Fault Management with RCA (e.g. fault correlation with unique Root-Cause-Analysis)
  • Service Visualization (e.g. visualize your entire cloud service environment)
  • 802.1q Trunk Monitoring (e.g. monitoring of critical LAN trunk interconnects)
  • SPAN/RSPAN Monitoring (e.g. monitoring of SPAN/RSPAN sessions)

Step 3 – The Secure Infrastructure Environment

The security infrastructure surrounding the hosted WAN and LAN environment will typically consist of firewalls, intrusion detection and prevention systems (IDS/IPS), logging and analytical systems (SIEM/SEM, etc.), and appliances. While many of these systems have vendor-specific element managers for configuration, analytics, and reporting, they also support standard SNMP (Simple Network Management Protocol), system event and alarm logging, and other capabilities. StableNet® is vendor agnostic and supports many of these appliance types (F5, Cisco, Juniper, Check Point, etc.). It is therefore very important that these devices are managed in a way whereby the physical and virtual elements are visualized within the service topology, and the performance and change management controls are policed around threshold, event, and alarm management. Management and policing of specific configuration criteria, for example rule-set policies, are crucial for maintaining proactive service availability. StableNet® functionality also extends to extensive logging and analytical reporting capabilities, which can be deployed in high-availability configurations to maximize resilience and redundancy.

Step 4 – The Physical & Virtual Hosting Platforms

The physical and virtual platforms are controlled and built via automation and provisioning tools. Management of the hosting systems, both physical and virtual, their associated operating systems (e.g. Windows, Linux, Solaris), and the physical and virtual network interconnections into the LAN hosting infrastructure will also need to be configured for performance, threshold, event, and alarm management, with specific service KPIs and SLAs. Visualizing both the physical and virtual environments together with real-time performance is critical for proactive management. Therefore, the management wrap around virtualized platforms needs to be onboarded quickly, as part of the automated build process.

StableNet's® unified management system provides the complete array of functional capabilities required for monitoring and managing the hosting platforms within a cloud environment. More importantly, it can also be integrated with a plethora of provisioning and automation tools, which makes it very attractive in this space. The StableNet® API provides seamless integration to and from other functional tools, so that as systems are built, the full proactive management capability is provisioned to your service assurance criteria.

Step 5 – Application Performance Monitoring

The performance of the applications your customers use is crucial to the customer experience, and it's the experience of the service that is paramount to the success of your business. Being able to measure the performance of applications used by both public and private cloud customers, and to provide monitoring dashboards and reporting on service performance, is a standard requirement for any cloud service provider.

Many APM products in the marketplace today have a wealth of functionality that brings additional complexity and ultimately results in a costly operating model. StableNet's® APM functionality is already integrated with other enriched capabilities that make troubleshooting and root-cause identification much easier from a single product.

Many applications today are browser- or Web-based. StableNet® can be configured with pre-defined scripts that interact with a Web-based application and perform a suite of metric tests, which are collected and analyzed for specific performance-related and problem-determination monitoring and reporting requirements.

  • Web-based Application Monitoring
    • URL\Web Page Availability and Response Monitoring
    • Historical Performance
    • Performance Trending
    • Performance Analysis

Your business customers rely on Web-based applications to generate sales, present market research, or provide services. The complexity of these applications means they are susceptible to performance issues and failures; therefore, it is imperative for the user experience that application functionality be monitored through transactional script-based monitoring that enables:

  • Performance monitoring of each application transaction step
  • Identification of application performance congestion and failures
  • Optimization of the user experience
  • Root-Cause-Analysis through event & alarm notification management
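
Transactional monitoring of this kind can be modeled as a sequence of timed steps. The step names, placeholder actions, and threshold below are invented for illustration and are not StableNet® script syntax:

```python
import time

# Sketch of transactional script-based monitoring: each step of a web
# transaction is timed, and steps exceeding a threshold raise an alert.
# The lambdas stand in for real HTTP requests (load page, submit form, ...).

def run_transaction(steps, threshold_s=2.0):
    """Run named transaction steps, timing each; return (timings, alerts)."""
    timings, alerts = {}, []
    for name, action in steps:
        start = time.perf_counter()
        action()                                  # e.g. fetch the login page
        elapsed = time.perf_counter() - start
        timings[name] = elapsed
        if elapsed > threshold_s:
            alerts.append(name)                   # step breached its SLA
    return timings, alerts

steps = [
    ("load_login_page", lambda: time.sleep(0.01)),
    ("submit_credentials", lambda: time.sleep(0.01)),
    ("load_dashboard", lambda: time.sleep(0.01)),
]
timings, alerts = run_transaction(steps)
print(sorted(timings), alerts)
```

Per-step timings feed historical trending and performance analysis, while the alert list drives event and alarm notification.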

Deployment of APM will drive down your MTTR (Mean Time to Repair) and provide the comfort of knowing your customers are experiencing great application performance. It will also give you the confidence to offer customers application performance-based SLAs (Service Level Agreements), which will drive existing customer sales and grow your customer base.

Conclusion

There are many aspects to managing a cloud hosting environment. Stacking up individual element managers to perform specific functions for your OSS (Operational Support Systems) requirements is costly both in capital expenditure (platform, software, and annual software support for each element management system) and in operational expenditure (increased headcount and tool specialization for each system). The strategy for the 21st century is to deploy unified management solutions, maximizing the functional capabilities of a single system and thus driving down both capital and operational expense. More important, though, is the benefit it provides: a unified management platform is a suite of multi-functional capabilities in a single product, with seamless integration and cross-correlation built in, enabling what we class as True-Visualization of your entire end-to-end cloud hosting environment.

Thanks to Infosim for the article.

xBalancer Helps a Global Cloud Provider Maintain Uninterrupted Security and Streamline Traffic Monitoring While Scaling to 10G

Company relies on xBalancer to load balance and prevent overburdening of monitoring tools, while enabling tools to be added quickly and cost-efficiently in an intensive, high-demand environment

Global Cloud Provider's Upgrade Triggers the Urgent Need for Load Balancing

The momentum toward 10G network links brings major challenges; in particular, monitoring critical traffic thoroughly in a high-speed, high-volume environment and avoiding the risk of overburdening vital tools. Companies prevent this overburdening by distributing the traffic load to multiple tools.

The Security Group for this well-known Global Cloud Provider (“The Provider”) uses three different types of out-of-band security tools, such as Intrusion Detection Systems (IDSs), to monitor a one-Gigabit Internet link coming into the company. (The application is for the corporate network and not any customer-facing service.) The security team employs three tool types from three different vendors, because each tool contributes its own unique strengths and enables more thorough security coverage.

A Sharp Traffic Increase On the Link Brings Monitoring Concerns

The Company was understandably worried that its current tools wouldn't be able to handle the increased traffic caused by upgrading from 1G to 10G. Such intensive pressures threaten to overwhelm and limit the effectiveness of the very devices that carry out the monitoring. Unable to keep up with increasing loads, these overburdened tools can put service-level agreements at risk and expose the network to threats. To deal with this problem, a company may have to invest in costly new tools engineered for the 10G environment. The Company also faced another challenge: while upgrading the network to 10G using 2 x 1G devices, each device suffered from microbursts. Distributing the traffic with xBalancer helped address this problem as well.

Fortunately, during this initial phase of the upgrade, the Company's security team determined that two tools had enough bandwidth to handle anticipated traffic. Since the link will carry less than 2 Gbps for a while, the company wished to load balance traffic to two tools, and to do so three times, once for each of the different tool types.

The advantages of replicating 1G tools already at work in the IT environment are evident: this ingenious, low-cost, efficient approach optimizes existing resources by allowing multiple 1G tools to share the load of processing high traffic volumes, while leveraging existing processes and operator training.

xBalancer Is the Ideal Solution for Distributing Traffic In the 10G Arena

The versatile Net Optics xBalancer™ is purpose-built to share security devices across multiple links, offering 24 SFP+ ports and integrated data rate conversion in a 1U form factor. It aggregates traffic from multiple 1G and 10G links and distributes it to 1G or 10G tools. xBalancer enables two or more appliances to be deployed in parallel with traffic balanced between them, from 10G links to multiple 1G tools, in either inline or out-of-band topologies.

With xBalancer, even the heaviest traffic loads sail through to IPSs and traffic recorders in the 10G data center. xBalancer’s innovative engineering enables it to distribute traffic to all manner of monitoring tools, including:

  • IPSs
  • Firewalls
  • Traffic recorders
  • Web accelerators
  • Application Performance Management devices
  • Intrusion Detection Systems
  • Protocol analyzers

xBalancer takes traffic from any network port or aggregated set of network ports and distributes it to two, three, four, or up to eight monitor ports, balancing according to IP address, port, protocol, VLAN, MAC address, or other parameters.
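
Balancing on fields like these while keeping a conversation on one tool is typically done by hashing the flow's identifying fields. The sketch below is conceptual, not the xBalancer implementation:

```python
import hashlib

# Sketch of flow-coherent load balancing: hash the flow 5-tuple so that all
# packets of a conversation are steered to the same monitoring tool.

def pick_tool(src_ip, dst_ip, src_port, dst_port, proto, n_tools):
    """Deterministically map a flow to one of n_tools tool indices."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_tools

# Every packet of the same flow maps to the same tool index:
a = pick_tool("10.0.0.1", "10.0.0.2", 5555, 80, "tcp", 4)
b = pick_tool("10.0.0.1", "10.0.0.2", 5555, 80, "tcp", 4)
print(a == b)  # True
```

Because the mapping depends only on the flow's fields, adding a tool (changing `n_tools`) rebalances flows without any per-flow state.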

By enabling already-integrated 1G tools to fill an expanded role, xBalancer helps this Global Cloud Provider handle its increasing traffic volumes without investing right away in new 10G capital equipment. Not only does this minimize CAPEX, it also eliminates the operational expense of implementing the new tools and training users. Best of all, xBalancer dramatically raises the efficiency, security and availability of the network itself by reducing or bypassing IPS failures.


xBalancer Works with Fiber Taps to Create a Flexible Solution

Yet another factor that makes xBalancer the best fit is that the Provider can now partition and configure it into multiple independent load balancers. The link can be tapped using a Net Optics Fiber Tap, which sends a copy of the traffic to xBalancer. The xBalancer then makes three copies and sends one to each of the three load balancers. Thus, if the three original tools were called A, B, and C, we now have a second "A" tool, a second "B" tool, and a second "C" tool. The first load balancer splits traffic between the two A tools, sending half of the traffic to each. The second load balancer splits a second copy of the same traffic between the two B tools, and the third load balancer splits another copy between the two C tools. Importantly, the traffic is distributed in a flow-coherent manner, meaning that packets traveling from one computer to another are always guaranteed to be sent to the exact same tool.

xBalancer Simplifies the Task of Adding Tools

In architecting a solution, the Global Cloud Provider must assume that traffic on the link is going to rise. xBalancer streamlines the addition of more tools as traffic on the link ascends. For example, if the pair of “A” tools can no longer handle all the traffic, a third “A” tool can be added and its load balancer configured to divide the traffic among all three tools.

In addition, the ability to upgrade the tool sets independently adds ultimate flexibility. Because each tool set ("A"s, "B"s, and "C"s) can be upgraded independently, the "A" set can grow to three tools while B and C still have only two each.

High Availability When Failure Is Not an Option

As a leader in its market, the Global Cloud Provider is under constant competitive pressure. New offerings are being launched by major companies to try to capture this Provider’s customers and ranking. This means that availability must be uncompromising if the Company is to thrive. xBalancer supports High Availability (HA) modes including N+M redundancy and link-state awareness—again, these can be applied independently for A, B, and C tools. For example, two more tools could be added to B and configured as three active tools and one standby; if any of the active “B” tools fails, the traffic going to it can be switched over to the standby tool. Meanwhile a third tool can be added to C and configured link-state aware, so that all three tools are active. If a tool fails, however, the traffic headed towards it can be reallocated across the two remaining tools.
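
The link-state-aware reallocation described above can be modeled simply: when a tool fails, its share of flows is redistributed across the remaining healthy tools. This is a hypothetical model, not xBalancer configuration syntax:

```python
# Sketch of link-state-aware failover: flows assigned to a failed tool are
# reallocated across the tools that remain active.

def active_assignment(flows, tools, failed=frozenset()):
    """Map each integer flow id to a healthy tool, skipping failed ones."""
    healthy = [t for t in tools if t not in failed]
    if not healthy:
        raise RuntimeError("no healthy tools remain")
    return {flow: healthy[flow % len(healthy)] for flow in flows}

tools = ["C1", "C2", "C3"]
flows = range(6)
print(active_assignment(flows, tools))                 # spread over all three
print(active_assignment(flows, tools, failed={"C2"}))  # spread over C1 and C3
```

An N+M standby scheme differs only in that the redistribution target is a dedicated spare rather than the surviving active tools.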

A Superior Solution Delivers Capacity, Competitiveness and Customers

Engineering the Provider’s solution using xBalancer delivers dramatic advantages, which become even more striking when a competing architecture is considered: Imagine a solution in which the outputs of a single load balancer are replicated to an A tool, a B tool, and a C tool. In such a situation, adding another A tool would make the load balancer a three-way design, so a user would have to add an additional B tool and C tool. This solution would only be proposed by vendors whose devices cannot be partitioned into separate independent load balancers.

xBalancer Now Offers Enhanced Efficiency and Management Features

xBalancer’s TapFlow™ filtering technology enables the Provider’s monitoring tools to handle even more traffic, more links, and more protocols. TapFlow filtering sends to each tool only the traffic that addresses its particular purpose—and filters traffic at full 10 Gbps line speeds.

Additionally, xBalancer provides advanced availability features such as link-state awareness and Heartbeat packet assessment to support mission-critical monitoring. Heartbeat packets allow the Provider's IT team to analyze attached appliances and reallocate traffic. Should one tool fail, traffic is automatically distributed to the remaining tools until the failed tool is repaired and back online. This minimizes loss of monitoring capability in most failure scenarios. Optimized debug logging, plus the CLI commands "capture" and "syslog", make managing xBalancer easier and smoother for the team.

Thanks to Net Optics for the article.

5 Steps To Prepare Your Network For Cloud Computing

To the novice IT manager, a shift to cloud computing may appear to offer great relief. No longer will their team have to worry as much about large infrastructure deployments, complex server configurations, and troubleshooting delivery of internally hosted applications. But diving a little deeper reveals that cloud computing also delivers a host of new challenges.

Through cloud computing, organizations perform tasks or use applications that harness massive third-party computing and processing power via the Internet cloud. This allows them to quickly scale services and applications to meet changing user demand and avoid purchasing network assets for infrequent, intensive computing tasks.

While providing increased IT flexibility and potentially lowering costs, cloud computing shifts IT management priorities from the network core to the WAN/Internet connection. Cloud computing extends the organization's network via the Internet, tying into other networks to access services, applications, and data. Understanding this shift, IT teams must adequately prepare the network and adjust management styles to realize the promise of cloud computing.


Here are five key considerations for organizations planning, deploying, and managing cloud computing applications and services:

1. Conduct Pre-Deployment and Readiness Assessments

Determine existing bandwidth demands per user, per department, and for the organization as a whole. With the service provider’s help, calculate the average bandwidth demand per user for each new service you plan to deploy. This allows the IT staff to appropriately scale the Internet connection and prioritize and shape traffic to meet the bandwidth demands of cloud applications.
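
As a rough illustration of the sizing arithmetic, the per-user figures and headroom factor below are invented assumptions, not vendor guidance:

```python
# Sketch of pre-deployment bandwidth sizing: sum per-user demand for each
# planned cloud service, scale by concurrent users, and add headroom.

def required_bandwidth_mbps(services, concurrent_users, headroom=1.3):
    """services: {name: per-user Mbps}. Return total link size with headroom."""
    per_user = sum(services.values())
    return per_user * concurrent_users * headroom

# Hypothetical per-user demands for three planned services:
services = {"voip": 0.1, "saas_crm": 0.25, "video_conf": 1.5}
print(round(required_bandwidth_mbps(services, concurrent_users=200), 1))  # 481.0
```

The same calculation, run per department, shows where traffic shaping and prioritization will be needed most.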

2. Shift the Network Management Focus

Cloud computing's advantage lies in placing the burden of applications, data storage, and processing on another network. This shifts management priorities from internal data concerns to external ones. Currently, organizations have larger network pipes and infrastructure at the network core, where the computing power is located.

With cloud computing and Software as a Service (SaaS) applications, the importance of large bandwidth capacity shifts away from the core to the Internet connection. This shift in focus will significantly impact the decisions you make, from whether your monitoring tools adequately track WAN performance to the personnel and resources you devote to managing WAN-related issues.

3. Determine Priorities

With a massive pipeline to the Internet handling online applications and processing, data prioritization becomes critical. Having an individual IP consuming 30 percent of the organization’s bandwidth becomes unworkable. Prioritize cloud and SaaS applications and throttle traffic to make sure bandwidth is appropriately allocated.

4. Consider ISP Redundancy

Thoroughly assess the reliability of your existing Internet Service Provider. When the Internet connection is down or degraded, business productivity will suffer. Consider having multiple providers in case one has a performance issue.

5. Hold Service Providers Accountable

Today, if a problem occurs within the network core, the engineer can monitor the entire path of network traffic from the client to the server in order to locate the problem source. With service providers controlling the majority of information in cloud computing, it becomes more difficult to monitor, optimize, and troubleshoot connections.

As a result, Service Level Agreements (SLAs) take on greater importance in ensuring expected network and Internet performance levels. SLAs should outline the delivery of expected Internet service levels, specify the performance obligations service providers must meet, and define unacceptable levels of dropped frames and other performance metrics.

An SLA by itself is not enough to guarantee your organization receives the level of service promised. Since it is not in the provider's interest to inform a client when its quality of service fails, you must rely on an independent view of WAN link connections. Utilize a network analyzer with a WAN probe to verify quality of service and gauge whether the provider is meeting SLA obligations.
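
Independent SLA verification amounts to comparing your own measurements against the contractual thresholds. The thresholds and sample values below are example figures, not taken from any real SLA:

```python
# Sketch of independent SLA verification: check measured WAN samples
# (loss %, latency ms) against example contractual thresholds.

def check_sla(samples, max_loss_pct=0.5, max_latency_ms=80.0):
    """samples: list of (loss_pct, latency_ms). Return violation notes."""
    violations = []
    for i, (loss, latency) in enumerate(samples):
        if loss > max_loss_pct:
            violations.append(f"sample {i}: loss {loss}% exceeds {max_loss_pct}%")
        if latency > max_latency_ms:
            violations.append(f"sample {i}: latency {latency}ms exceeds {max_latency_ms}ms")
    return violations

samples = [(0.1, 35.0), (0.9, 42.0), (0.2, 120.0)]
for note in check_sla(samples):
    print(note)
```

In practice the samples would come from the WAN probe, and the violation log becomes the evidence you bring to the provider.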

Cloud computing is more than the latest IT buzzword; it’s a real way for companies to quickly obtain greater network flexibility, scalability, and computing power for less money. But like most technologies, these services are not without risk and require proper preparation and refocused management efforts to succeed.

Thanks to Network Instruments for the article.

The Net Optics Phantom Virtualization Tap™ captures data passing between virtual machines (VMs) and sends traffic of interest to virtual and physical monitoring tools of choice.

Net Optics and VMware

The Virtual Monitoring Challenge

Enterprises have been utilizing Tap solutions for network traffic access for many years. Traffic capture, analysis, replay, and logging are now part of every well-managed network environment. In recent years, the significant shift to virtualization, with penetration exceeding 50%, is yielding great benefits in efficiency. However, today's virtualization-based deployments create challenges for network security, compliance, and performance monitoring, because inter-VM traffic is optimized to speed up connections and minimize network utilization. This renders it invisible to physical tools, which cannot easily extend into the new environments. Costly new virtualization-specific tools, plus training, can erode the economic benefits and cost savings of virtualizing. Currently, many tools suffer from limited throughput, hypervisor incompatibility, and excessive resource utilization.

Virtual infrastructures use hypervisor technology to deploy multiple computing environments on a single physical (hardware) server, or across a group of physical servers. Traditional Taps cannot see the traffic between the VMs that reside on the same hypervisor, nor can they “follow” specific VMs as automation moves them from one hypervisor to another to optimize efficiency and availability.

Visibility is further reduced by the complexity of blade servers, with each blade running multiple VMs on a hypervisor. Traffic between the blades, running on a dedicated backplane, is also invisible to the physical network and its attached tools.

The Phantom Virtualization Tap Solution

The Phantom suite of software products provides 100% visibility of virtual network traffic, including the unseen inter-VM traffic on the ESXi stack. This milestone solution has now expanded to support the industry's leading hypervisor. The Phantom Monitor installs in the hypervisor kernel above the virtual switch, the software switching mechanism that manages communications between virtual network devices and works identically to a physical switch. The Phantom Monitor can replicate all traffic within the virtual switch, apply smart TapFlow™ filtering, and send traffic of interest to any monitoring tools of choice. It can even pass the replicated traffic to a physical port so physical tools can monitor the data. Virtual traffic is bridged to the physical world in an encapsulated tunnel that can be terminated by a Net Optics xFilter™, Phantom HD™, or any capable termination point of your choosing.

Solution Highlights

  • 100 percent visibility of traffic between Virtual Machines (VMs) and inter-blade visibility
  • Installs in hypervisor kernel for full traffic visibility
  • Enables visibility and control of network traffic in VMware vSphere ESX/ESXi Server 4.X/5.X
  • Generates Layer 2-4 statistics (packet count, utilization, etc.)
  • TapFlow™ multi-layer L2-4 filtering engine
  • Extends monitoring and access into the Inter-VM networking layer (East-West Traffic)
  • Applies existing physical monitoring tools, processes, and procedures to the virtual network
  • No interference with the data stream or VMs (no agents or services on VMs)
  • No modifications needed in VMs
  • Replicates Inter-VM traffic to virtual and physical monitoring tools of choice
  • Sends mirrored traffic out physical NICs in encapsulated tunnels

Net Optics Phantom Virtualization Tap

Net Optics and VMware Team Up to Deliver Full Visibility, Automation, Flexibility and Scalability for Comprehensive Monitoring for Virtual Environments.

Unique Capabilities

The Phantom Virtualization Tap provides these unique capabilities to your VMware virtual computing environment:

  • A solution that performs network monitoring at the hypervisor kernel level providing full view of the traffic flowing between VMs, regardless of their current physical locations
  • Implemented at the kernel, it can differentiate between specific VM instances in replicated environments and keep monitoring and logging VMs even as they are moved between hypervisors (different physical servers or locations)
  • The industry’s only integrated solution for converged (virtual and physical) environments. Fully hypervisor-agnostic and virtual switch-agnostic, the Phantom Virtualization Tap works seamlessly with Net Optics’ Director series of data monitoring switches
  • Net Optics Indigo Pro™, a unified network management tool, provides an easy-to-use, Web-based GUI

Thanks to Net Optics for the article.

Simply Explained: Software Defined Networks

While you may not be that familiar with Software Defined Networking (SDN), it may be the next “big thing.” As with other popular technologies like cloud and big data, there is no clear consensus on the exact definition of SDN. Also like these technologies, SDN initiatives are likely to be pushed by IT folks outside the network team, and can greatly affect network visibility.

While an SDN future may seem years away, given the clout of vendors and users pushing these technologies, including Microsoft, Google, VMware and Verizon, it’s important to be aware of the concepts and terminology behind the technology.

What is SDN?

As cloud has abstracted storage and virtualization separates applications from servers, SDN attempts to separate the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). So, rather than packet route decisions being determined by local infrastructure one hop at a time, routing decisions are made by a centralized controller server.


Operating from a holistic, network-wide approach, the benefits of SDN are threefold:

  • Performance and traffic flow become more efficient, as decisions are made at a network-wide level by the controller rather than at the device level, where routing decisions are based only on the links between the forwarding device and adjacent devices
  • Policy and configuration management can be done centrally rather than device by device
  • Network devices, such as switches, can be simplified and focused purely on packet forwarding, rather than carrying a heavy, complex, and expensive feature set

Key Concepts

Controller: Centralized device that communicates with all the network devices in a domain, determines the topology, and programs network connectivity paths from a centralized view. The network is programmed and managed at the network level rather than through individual devices.

Switching: In SDN environments, hardware and software switches forward traffic as dictated by the controller. The importance and capabilities of software switches will increase within SDN. Hardware switches will likely be dedicated purely to forwarding large amounts of traffic in SDN environments.
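The match-action model described above can be sketched in a few lines. All subnets, port names, and rule values here are illustrative, not taken from any real controller or switch:

```python
# Sketch of the match-action model an SDN controller programs into a
# switch: the switch only looks up packet header fields in a flow table
# and applies the stored action. All values are illustrative.

def build_flow_table():
    """Flow rules as a controller might install them: match -> action."""
    return {
        # (src_subnet, dst_subnet) -> output port
        ("10.0.1.0/24", "10.0.2.0/24"): "port-2",
        ("10.0.2.0/24", "10.0.1.0/24"): "port-1",
    }

def subnet_of(ip):
    """Crude /24 classifier, sufficient for this sketch."""
    return ".".join(ip.split(".")[:3]) + ".0/24"

def forward(flow_table, src_ip, dst_ip):
    """The switch's entire job under SDN: look up and forward."""
    key = (subnet_of(src_ip), subnet_of(dst_ip))
    # Unknown flows would be punted to the controller for a decision.
    return flow_table.get(key, "send-to-controller")

table = build_flow_table()
print(forward(table, "10.0.1.5", "10.0.2.9"))   # port-2
print(forward(table, "10.0.3.1", "10.0.2.9"))   # send-to-controller
```

The point of the sketch is the division of labor: the table is populated centrally, and the switch's per-packet work reduces to a lookup.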

Virtual Overlay Networks: Overlays are used to create virtual networks that are logically separated from each other while sharing the same underlying physical network. Packets are encapsulated inside another and sent to a tunnel endpoint where they are de-capsulated. The original packets are then delivered to the destination.
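The encapsulation round trip described above can be illustrated as follows; the field names are loosely modeled on VXLAN but are purely an assumption for the sketch:

```python
# Sketch of overlay encapsulation: the original packet rides inside an
# outer packet addressed to the tunnel endpoint, which strips the outer
# header and delivers the inner packet. Field names are illustrative.

def encapsulate(inner_packet, vni, tunnel_src, tunnel_dst):
    """Wrap a packet for transit across the shared physical network."""
    return {
        "outer_src": tunnel_src,   # physical address of this endpoint
        "outer_dst": tunnel_dst,   # physical address of the far endpoint
        "vni": vni,                # virtual network identifier
        "payload": inner_packet,   # the untouched original packet
    }

def decapsulate(outer_packet):
    """Tunnel endpoint: strip the outer header, recover the original."""
    return outer_packet["vni"], outer_packet["payload"]

original = {"src": "vm-a", "dst": "vm-b", "data": "hello"}
tunneled = encapsulate(original, vni=5001,
                       tunnel_src="192.0.2.10", tunnel_dst="192.0.2.20")
vni, recovered = decapsulate(tunneled)
assert recovered == original and vni == 5001
```

The physical network only ever sees the outer addresses, which is how many logically separate virtual networks can share one underlay.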

Net-Net for Network Teams

While we’re still several years away from full-fledged SDN, conversations will likely start in 2014 and potentially outside of the network team’s view. Try to keep up with SDN plans in your organization, so that you can encourage monitoring visibility. Be a part of the initial design conversation rather than inherit any issues post-implementation. Additionally, while networks may become more robust and automated through SDN, there will always be a need for in-depth packet capture and analysis. This will help to assess long-term performance and perform root cause analysis to pinpoint the source of any service interruptions.

SDN Resources

For more detail on SDN concepts, technology, and deployments, check out the following link:

Network World – In-Depth SDN Guide

Thanks to Network Instruments for the article.

Allstream connects 3,000 buildings to IP fibre network

Canadian enterprise telecoms provider Allstream, part of MTS Allstream, has announced that it has connected more than 3,000 buildings to its 30,000km-plus nationwide fibre-optic IP network. The milestone represents a 50% increase in on-net IP fibre-fed buildings compared to three years ago. In 2010 Allstream initiated a targeted multi-year investment programme to increase its profitability by expanding the IP fibre network to nearby multi-tenant buildings that could be connected at a low cost. Allstream has now exceeded its plans, with fibre-fed buildings reaching over 3,000, up from 2,000 buildings connected in 2010. ‘In 2014, we will continue to make strategic investments to expand Allstream’s network reach while we sign up more customers in our existing buildings,’ a press release from the operator read. Allstream’s fibre-optic network offers IP connectivity, digital switching, Ethernet-based services and advanced internet security technologies and services, including right-of-way access in major Canadian city centres and nine cross-border connections to the United States.

Thanks to TeleGeography for the article.

Implementing Unified Communications

While UC enhances business processes, it can overwhelm networks when application performance isn’t closely monitored. Any performance delay will be immediately noticeable to the user.


Organizations are always looking for ways to cut expenses. This is particularly true in the current economic climate. Unified communications (UC) represent an appealing alternative to traditional communication processes due to the potential savings once they are implemented. As more companies turn to these alternatives, they also have the potential to struggle with the implementation process and cause headaches for both users and IT staffs. As a result, it is important to develop a basic knowledge of the technology and understand what needs to occur before deployment and what to do once it has been implemented.


UC applications often combine network-driven communication tools including Voice over IP (VoIP), Web conferencing, online collaboration, instant messaging (IM), e-mail, voicemail, and videoconferencing.

While UC enhances business processes, it can overwhelm networks when application performance isn’t closely monitored. Because the success or failure of any UC initiative hangs on the user experience, here are best practices for avoiding performance problems.



Before embarking on any UC initiative, it’s important to understand how these applications differ from traditional applications. First, bandwidth consumption varies greatly across these kinds of applications. For example, if a user checks their e-mail, only a small amount of data is sent over the network and the connection remains idle while the e-mail is read.

However, most UC tools, like web conferencing, are the opposite, requiring high and consistent network bandwidth to maintain performance. As a result, the network engineer must be sure that unexpected spikes from other applications don’t impact the user experience. Any performance delay will be immediately noticeable to the user, whereas e-mail delays are less noticeable.


When implementing a new communications technology such as VoIP, many network managers take one of two approaches. They either install new technologies and address performance problems as they arise, or upgrade bandwidth capacity in anticipation of increased need.

Because of the challenging nature of UC, site surveys and an understanding of your basic network environment are vital to ensuring a successful rollout. Conducting a site survey before installing UC services can identify and eliminate many performance problems before deployment. Proper predeployment testing also allows IT staff to understand overall bandwidth demand and application performance and establish benchmarks for acceptable network performance. This knowledge is critical for determining how the network will handle new UC traffic and identifying any changes needed to effectively support communications.

An analysis tool that tracks, stores, and analyzes long-term activity will define what is considered normal for a particular environment. The insight into network and application performance gained from the initial site survey and continual monitoring of the added UC traffic also helps in intelligently configuring alarms to alert staff when application performance deviates from the norm.
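The baseline-and-alarm idea can be sketched as follows; the sample data and the three-sigma threshold are illustrative assumptions, not taken from any particular monitoring product:

```python
# Sketch of baselining: derive "normal" from long-term samples, then
# alarm when a new reading deviates beyond a few standard deviations.
import statistics

def baseline(samples):
    """Mean and standard deviation of historical utilization (percent)."""
    return statistics.mean(samples), statistics.stdev(samples)

def deviates(reading, mean, stdev, n_sigmas=3):
    """True when a reading falls outside the normal band."""
    return abs(reading - mean) > n_sigmas * stdev

# Weeks of hourly link-utilization samples would feed this in practice.
history = [22, 25, 21, 24, 23, 26, 22, 24, 25, 23]
mean, stdev = baseline(history)
print(deviates(24, mean, stdev))   # False: within the norm
print(deviates(80, mean, stdev))   # True: alarm-worthy spike
```

Real tools track many metrics per application and time-of-day, but the principle is the same: the alarm threshold comes from observed history, not a guess.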


UC can be particularly sensitive to network performance, especially to delay and jitter. When implementing any communications application, ensuring bandwidth availability through Quality of Service (QoS) is imperative.
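One common way VoIP monitors quantify jitter is the interarrival jitter estimator from RFC 3550 (the RTP specification): a running smoothed average of how much packet spacing varies. A minimal sketch, with illustrative transit times:

```python
# Interarrival jitter estimator per RFC 3550: smooth the absolute
# difference in per-packet transit time with a 1/16 gain factor.

def interarrival_jitter(transit_times):
    """transit_times: per-packet (arrival - send) deltas in milliseconds."""
    jitter = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16.0   # 1/16 smoothing per RFC 3550
    return jitter

steady = [50, 50, 50, 50, 50]       # constant transit: no jitter
bursty = [50, 70, 40, 90, 30, 80]   # varying transit: high jitter
print(interarrival_jitter(steady))  # 0.0
print(round(interarrival_jitter(bursty), 2))
```

A fixed 50 ms delay produces zero jitter; it is the variation in delay, not the delay itself, that degrades call quality.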

Failing to implement QoS opens up the possibility of interference from other applications on the network; this is known as contention. Contention leads to common performance problems including jitter and packet loss. What IT teams and engineers need to realize is that throwing additional bandwidth at the problem does not solve the issue. Even a network with large bandwidth capacity can have poor call quality due to network contention. With QoS monitoring tools, IT engineers are able to set limits that give individual applications like VoIP the highest precedence available, allowing the application to function smoothly by providing the appropriate amount of bandwidth.
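On the application side, requesting QoS precedence typically means marking packets with a DSCP value. A minimal sketch using the standard `IP_TOS` socket option; DSCP 46 (Expedited Forwarding) is the class conventionally used for voice, and whether the marking is honored depends entirely on the QoS policy configured on each network hop:

```python
# Mark a socket's outgoing packets with DSCP EF (Expedited Forwarding),
# the per-hop behavior usually reserved for voice traffic.
import socket

EF_DSCP = 46               # Expedited Forwarding class
TOS_VALUE = EF_DSCP << 2   # DSCP occupies the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Routers with a matching QoS policy will now queue this socket's
# packets ahead of best-effort traffic.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))   # 184 (46 << 2)
sock.close()
```

The marking costs nothing by itself; the actual precedence comes from the queuing policy the network team configures to match it.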

When monitoring QoS issues, it is also essential for IT engineers to consider the technology as a whole, rather than concentrate on one particular aspect of it. For instance, in the case of VoIP, they will need to set precedence appropriately for all connections composing the call, including set-up and tear-down. A common mistake is setting the highest precedence for the conversation while neglecting the other components, leading to poor call quality.


While UC applications can boost company performance and lower costs, unprepared IT staffs may not consider how these applications increase the opportunities for sharing information. These applications open up holes through which users can accidentally or maliciously share private data and expose sensitive information. These new holes can also be points of attack if not properly secured.

For example, hackers or users can use tools to capture and play back VoIP conversations. Higher-end VoIP systems may offer ways to encrypt data, but lower-end products often do not. Managers will want to consider this before purchasing a solution. In addition, VoIP traffic is usually most vulnerable on the LAN, since Internet WAN traffic is typically routed through VPNs.

The point is to make sure every communication channel is properly secured and monitored. Acceptable Use Policies should also forbid the sharing of sensitive information, or provide protocols for sharing it properly.


UC can be a tremendous benefit to companies if implemented and configured properly. Companies can receive these advantages without causing undue problems if they take a moment to ensure they are laying the proper groundwork and are aware of what needs to occur once implementation is complete. By following these steps, organizations can avoid many of the pitfalls that can occur.

Thanks to Network Instruments for the article.

How Well Are You Managing Your Network?

Regardless of the industry, companies rely on continued performance of their network with near 100 percent uptime to ensure continuity of services and revenue protection. Whether the organization is operating in retail, manufacturing, financial services or healthcare, operational analytics is essential in reducing downtime and meeting IT performance goals.

This was the emerging consensus in the results of the IT Operations Analytics Benchmark published by Continuity Software. In the realm of network management, IT managers need complete visibility into the entire operation and the right tools to analyze processes, traffic flows and application use to ensure that downtime doesn’t interfere with performance.

The findings from this benchmark survey show that large organizations are the most common users of analytics tools for monitoring and measuring IT performance goals. In fact, 57 percent of the large organizations surveyed rely on these tools, compared with just 29 percent of small companies. Meanwhile, as an indication of operational excellence, 89 percent of organizations measure uptime across most or all IT domains, with just 66 percent measuring performance and 51 percent measuring the number of open issues.

Even with challenges constantly threatening the performance of the network and putting consistent strain on optimal network management, organizations still have to meet their business goals. According to this survey, the best way to do so is to frequently track configuration consistency, as 53 percent have found success with this method. Portions of the infrastructure are checked by between 31 and 33 percent of organizations seeking to optimize network management.

One thing made clear in this benchmark study is that better measurement and analysis tools are required if IT operations hope to achieve operational excellence, as reported by 40 percent of organizations surveyed. As for overall monitoring, storage and network performance tend to rank the highest, as 71 percent of organizations report that they monitor key performance indicators (KPIs) in these areas, while 69 percent monitor and measure applications, 66 percent monitor databases and 49 percent monitor clusters.

A key area of concern, according to this study, is the cloud. Only 14 percent of organizations are measuring cloud KPIs and 43 percent never analyze configuration consistency in their cloud environment. It’s thus quite possible that the performance advantages companies are hoping to achieve with this deployment model could quickly be negated by incorrect configurations and poor oversight.

For the enterprise, this survey demonstrates that there is still work to be done to get the most out of network performance. It’s not enough to watch and wait – proactive strategies need to be put in place and visibility through network management solutions is essential. Anything less will put the network and all supported operations at risk.

Thanks to Network Management Report for the article.

Infosim StableNet® Network Change and Configuration Management

Network infrastructure is evolving at an unprecedented rate and, with mixed-vendor environments the norm rather than the exception, management of those systems has become a labour-intensive exercise. Unlike fault and performance management, Network Change and Configuration Management (NCCM) has no common harmonised management method or protocols; even first-level engineering teams have to be proficient in numerous different configuration languages and interfaces for the simplest of tasks.

Infosim’s StableNet® Network Change and Configuration Management Solution (NCCM) has been designed to be a common management platform enabling network infrastructure to be managed in a vendor neutral environment. It is a critical part of any organization’s management infrastructure and a logical next step in the Infosim StableNet® Service Assurance and Fulfilment Solution.

StableNet Network Management Solutions

Infosim StableNet® NCCM delivers key functionality in multi-vendor environments, enabling common management techniques to be used:

  • Real-time configuration backup and restoration, ensuring a complete audit trail of changes, service continuity and fault/performance analysis
  • Process-oriented change management, enabling common tasks to be packaged into repeatable processes and more complex changes to be structured
  • Software and firmware upgrade management
  • Configuration policy management to enforce corporate standards and regulatory requirements
  • Vulnerability and end-of-life/end-of-sale tracking for compliance, asset and financial planning
  • A fully flexible reporting engine that delivers the information engineers and management need

Real-time Configuration Backup, Change Correlation and Configuration Restoration

Automatic detection of changes to device configurations as they happen means Infosim StableNet® NCCM always has the latest configuration files historically versioned in its database, regardless of how the change was made (e.g. console cable, telnet, ssh, http).

With every change recorded as it happens, the time taken to identify, analyse and rectify infrastructure configuration faults is greatly reduced as the operator is immediately given the answers to the key questions: who changed what, how and when?

Configuration files can be compared with historical versions to see what changes have occurred to a device over time, highlighting configuration items that were added, removed or altered.
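This kind of version-to-version comparison can be sketched with the standard library's `difflib`; the configuration content below is illustrative, not from any real device:

```python
# Sketch of the comparison a configuration manager performs: a unified
# diff between two stored versions of a device configuration, showing
# lines added, removed, or altered.
import difflib

old_config = """hostname core-sw1
interface Gi0/1
 description uplink
 speed 1000
""".splitlines()

new_config = """hostname core-sw1
interface Gi0/1
 description uplink
 speed 100
 shutdown
""".splitlines()

diff = list(difflib.unified_diff(old_config, new_config,
                                 fromfile="backup 2014-01-01",
                                 tofile="backup 2014-01-02",
                                 lineterm=""))
print("\n".join(diff))
```

The `-`/`+` lines answer the "what changed" half of the question; the audit log supplies who, how, and when.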

Each configuration backup also stores the hardware information for the device, such as chassis serial numbers and daughter card part numbers as well as the operating system information, image and software modules in use.

With extensible device interaction scripting, StableNet® NCCM has been designed to be a truly vendor-agnostic solution allowing support for new devices quickly and easily.

It has been reported by analysts such as Gartner that around 60% of all network outages impacting mission-critical services will be caused by change and configuration issues—the majority of which are small changes that are implemented in the environment all the time, regardless of corporate change policies. These incidents have been shown to have a mean-time-to-repair (MTTR) of approximately 200 minutes at an average cost of approximately $42,000 per hour per incident. According to Dun & Bradstreet, 59% of USA Fortune 500 companies experience downtime of at least 1.6 hours per week.

By storing the latest configurations of all devices, Infosim StableNet® NCCM is able to assist in performance troubleshooting, identify the changes that occurred, and then roll back to a previous known-good configuration with a simple two-click process.

This simple yet controlled analysis and restoration removes much of the speculation around outages and can immediately identify when the incident is directly correlated to a configuration change or is the responsibility of an external third-party supplier. The days of finger-pointing, buck-passing and head-scratching are over!

Structured Change Management

Engineers will argue that ad-hoc changes will always be a necessary evil of network management and in some respects they are correct. However, ad-hoc changes should usually only be in response to an extraordinary event and not common practice.

Infosim StableNet® NCCM is designed to move engineering teams towards a structured change process. Small changes can be simply executed with all the security controls within the product. Larger or more complex changes can be packaged into simple repeatable actions guaranteeing carbon-copy execution. This packaging allows complex changes to be “written once” by high-level engineers and “run many” by less specialised staff.

Every action taken by the change management engine is logged for auditing, showing each device interaction, the commands executed and the response from the device.

Configuration Policy Management

Many organisations have internal configuration and security policies as well as external regulations and directives to adhere to. Using manual processes it can take months to evaluate every device and to rectify any configuration issues. Of course, as soon as this manual process ends it has to restart!

Infosim StableNet® NCCM has a flexible configuration policy engine allowing device configurations to be compared against a set of policies to identify devices that are in violation. As soon as configuration changes are detected, they are immediately analysed for violations.

Configuration policies bring together a set of devices and apply a set of rules. These rules can be based on simple text strings to find items present or missing in configuration files; powerful configuration snippets with ‘section’ matching and regular-expression searching; or advanced scripting languages.
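A minimal sketch of such a rule engine, assuming simple regular-expression rules; the rules and configuration text are illustrative, not StableNet's actual syntax:

```python
# Sketch of a configuration policy engine: each rule is a regex that
# must (or must not) appear in a device's configuration; devices that
# break any rule are reported as violations.
import re

RULES = [
    # (description, pattern, must_be_present)
    ("SSH enabled",       r"transport input ssh", True),
    ("Telnet disabled",   r"transport input telnet", False),
    ("Banner configured", r"banner motd", True),
]

def violations(config_text):
    """Return the description of every rule this configuration breaks."""
    broken = []
    for description, pattern, must_be_present in RULES:
        present = re.search(pattern, config_text) is not None
        if present != must_be_present:
            broken.append(description)
    return broken

config = "line vty 0 4\n transport input telnet\nbanner motd ^Authorized only^"
print(violations(config))   # ['SSH enabled', 'Telnet disabled']
```

Running the check automatically on every detected configuration change is what turns a months-long manual audit into a continuous one.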

Uniquely, the same rule can be created for different vendor hardware using the same identifier, meaning an organisation can create a single corporate policy within Infosim StableNet® NCCM covering all hardware vendors’ equipment, simplifying reports into a single view.

Vulnerability Management

Not a day passes without another set of vulnerability announcements being issued by the hardware manufacturers. It is impractical for organisations to check their estate for vulnerabilities using a manual process, as it would be a never-ending task! Infosim StableNet® NCCM has a comprehensive vulnerability scanning engine that permits a user to enter the details from the announcements, run a complete check against all the devices in the estate and report on the current status.

To enable focused identification, Infosim StableNet® NCCM can use extra snapshot device interaction commands to collect real-time information regarding device configuration. This enables identification of devices that are only vulnerable if certain configurations are in use, thus reducing the number of false positives.
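A simplified sketch of this kind of focused matching, with an entirely illustrative advisory and inventory (a real scanner would parse configurations properly rather than use substring checks):

```python
# Sketch of matching a vulnerability announcement against a device
# inventory: an advisory names affected platforms and OS versions, and
# only devices that also run the risky feature are flagged.

ADVISORY = {
    "id": "ADV-2014-001",
    "platform": "ExampleOS",
    "affected_versions": {"12.2", "12.3", "12.4"},
    # Only vulnerable when this feature is actually configured:
    "required_config": "ip http server",
}

def is_vulnerable(device):
    """Platform, version, and configuration must all match the advisory."""
    return (device["platform"] == ADVISORY["platform"]
            and device["version"] in ADVISORY["affected_versions"]
            and ADVISORY["required_config"] in device["config"])

inventory = [
    {"name": "edge-1", "platform": "ExampleOS", "version": "12.3",
     "config": "hostname edge-1\nip http server"},
    {"name": "edge-2", "platform": "ExampleOS", "version": "12.3",
     "config": "hostname edge-2\nip ssh server"},
]
print([d["name"] for d in inventory if is_vulnerable(d)])   # ['edge-1']
```

The configuration check is the false-positive filter: edge-2 runs an affected version but never enabled the vulnerable feature, so it is not flagged.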

End-of-life and End-of-service Management

Similar to vulnerabilities, End-of-Life and End-of-Service announcements are issued on a nearly daily basis, and these announcements cover not only hardware platforms but also sub-components such as modules and software operating systems.

Infosim StableNet® NCCM has the ability to check against all of these components using the rich device information collected, and can report accordingly, allowing for financial and hardware-refresh planning and risk-assessment analysis.

Subscription Services from Infosim

Infosim also offers subscription-based update services for vulnerability and end-of-life/end-of-service announcements. These updates are supplied electronically, ready to install through a simple import function, and include any extra components, such as snapshot device interaction commands, as required.

Thanks to Infosim for the article.