Solving Videoconferencing’s Top 3 Challenges


We asked 170 network professionals in our most recent State of the Network survey about the unified communications applications their organizations had deployed, and 62 percent indicated videoconferencing. It’s safe to say video is now mainstream, and with widespread adoption, the real-time nature of video brings real-time consequences to users and network teams when performance degrades. Ensuring a positive user experience requires considering the types of video being deployed, the top 3 real-time challenges of managing video, and their solutions.

Digging into Video Deployments


Top 3 Video Challenges


Deployments are clearly most pervasive in desktop rollouts and standard videoconferencing. More elaborate, experiential forms of video remained steady, with about one in five organizations having deployed telepresence solutions. These numbers are driven by organizations’ need to rein in travel budgets and foster greater and faster collaboration between employees.

In last year’s State of the Network, the lack of user knowledge and training was cited as the primary challenge in managing videoconferencing. However, as the technology has matured, problems more commonly associated with the network management aspects of video rose to the top in 2013. What distinguishes these network-based challenges with video from those of other applications is the real-time nature of this UC service. Even minimal quality issues can be incredibly disruptive. Essentially, ensuring successful videoconferencing means your network team needs to bring its A-Game.

1) Implementing and Measuring QoS

A significant difference between VoIP and videoconferencing is the amount of traffic generated. This means that network Quality of Service (QoS) class definitions and bandwidth allocations must be reevaluated before deploying videoconferencing.

Organizations often find that setting aside 10 percent of bandwidth is sufficient for VoIP, but accommodating even moderate rates of concurrent videoconferencing sessions requires 30 percent or more. The potential negative implications extend well beyond bandwidth consumption: giving latency-sensitive video traffic increased precedence raises the likelihood of contention among other applications for the remaining network resources, so it can affect not only video quality itself but other applications as well.
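As a rough back-of-the-envelope illustration of the sizing guideline above (the per-call bitrate and link size are assumptions for the example, not figures from the article), the bandwidth share video will claim can be estimated from the expected number of concurrent sessions:

```python
def video_bandwidth_share(concurrent_calls, per_call_mbps, link_mbps):
    """Estimate the fraction of a link consumed by concurrent video calls."""
    required_mbps = concurrent_calls * per_call_mbps
    return required_mbps / link_mbps

# Assumed figures: 20 concurrent 1.5 Mbps HD calls on a 100 Mbps WAN link.
share = video_bandwidth_share(20, 1.5, 100)
print(f"{share:.0%} of the link")  # 30% — in line with the 30-percent-plus guideline
```

Running the numbers this way before deployment makes it obvious whether existing QoS class definitions leave enough headroom for everything else on the link.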

2) Allocating and Monitoring Bandwidth

Successfully monitoring and allocating enough bandwidth for videoconferencing has both an immediate and long-term component. In terms of ensuring short-term success, verify that your monitoring solutions provide support for UC vendor platforms on the network. Evaluate bandwidth being consumed for videoconferencing for the entire organization and per user. In addition, because of the real-time nature of video, assess whether sudden spikes from other applications might be impacting video quality.

In the long-term, utilizing behavior analysis features like baselining to identify any changes in video application behavior is critical for spotting performance degradation trends before they impact the user. Additionally, understanding the amount of video bandwidth consumed per user and how this is changing allows you to anticipate future bandwidth needs.
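The baselining idea above can be sketched in a few lines. This is a simplified illustration, not any vendor's implementation; the window size, warm-up length, and deviation threshold are assumptions you would tune to your environment:

```python
from collections import deque

class Baseline:
    """Rolling-mean baseline that flags samples straying far from normal.

    Window size and deviation threshold are illustrative assumptions.
    """
    def __init__(self, window=288, threshold=1.5):  # e.g. one day of 5-minute samples
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, mbps):
        """Record a bandwidth sample; return True if it deviates from baseline."""
        if len(self.samples) >= 10:  # require some history before judging
            mean = sum(self.samples) / len(self.samples)
            anomalous = mbps > mean * self.threshold
        else:
            anomalous = False
        self.samples.append(mbps)
        return anomalous

b = Baseline()
for sample in [10, 11, 9, 10, 12, 10, 11, 9, 10, 11]:
    b.observe(sample)
print(b.observe(25))  # a sudden spike well above the learned baseline
```

The same structure, fed per-user consumption figures over weeks rather than minutes, supports the long-term trend analysis the paragraph describes.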

3) Lack of Monitoring Tools and Metrics

There are a mix of both old and new metrics that can be utilized to assess video delivery and quality. IT teams typically rely on latency, packet loss, and jitter as indicators of the network’s ability to support quality video. Specific to videoconferencing are metrics designed to reflect aggregate audio/video experiential quality, such as Video MOS (V-MOS). While not based on an industry standard (as is MOS, used with VoIP monitoring), it can be of great value if applied consistently to video traffic.
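For example, interarrival jitter, one of the standard indicators mentioned above, is commonly estimated with the running filter defined in RFC 3550 (the transit-time values below are illustrative):

```python
def update_jitter(jitter, transit_prev, transit_now):
    """RFC 3550 interarrival jitter estimator: J += (|D| - J) / 16,
    where D is the difference between consecutive packet transit times."""
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0

# Transit times (ms) for consecutive packets of a video stream.
transits = [40.0, 42.0, 41.0, 45.0, 40.0]
jitter = 0.0
for prev, now in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, now)
print(f"estimated jitter: {jitter:.2f} ms")
```

The 1/16 smoothing factor keeps the estimate stable against single outliers while still tracking a genuine degradation trend.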

The second question raised by this challenge is, “can I use my existing tools to monitor videoconferencing, or do I need to purchase a new solution?” Observer users are in luck, as the platform provides extensive multi-vendor videoconferencing support and analysis. If you use other monitoring solutions, validate their ability to provide in-depth videoconferencing delivery and quality metrics, expert analytics for UC applications, multi-vendor support, and the ability to view videoconferencing traffic alongside other applications.

By implementing correct QoS policies, assessing your monitoring solution’s support of videoconferencing applications, utilizing video quality metrics, and actively tracking bandwidth use, you can feel confident in your ability to actively meet user expectations with smooth video calls. For additional details on how to ensure quality video on your network, check out the following resources:

Thanks to Network Instruments for the article.

Turn IT Chaos into Order

The three key areas in managing an enterprise network today are Performance Management, Fault Management, and Configuration Management. To meet these requirements, many organizations use three different tools, creating a silo effect: each department relies on its own tool, with no integration between them. With tool consolidation becoming a key requirement, look for a solution with an integrated architecture and a single user interface that gives you visibility into all three of these areas.

Network Discovery


When starting to automate your network monitoring, the first key step is to understand what is in the network and how it is being used. A discovery needs to do more than list the devices you have; it must also show how those devices interact with each other.

Performance Management

Once you have a picture of what is in the network, you can start to understand how it is working. Performance management is no longer only about how packets are moved around or link utilization; it also covers servers, applications, and business processes.

To see how the systems and services are performing, we need the most accurate information from the environment. Collect from multiple protocols to ensure the widest possible source of information for each metric. Supplement this with active testing so you can understand the performance between devices; integrating this with NetFlow information allows you to predict when certain thresholds will be reached.

Many enterprise network teams still regard servers, applications, and networks as separate domains needing separate management tools. Because these systems are managed separately, when issues arise we spend more time defending our turf than finding the solution.

Server Monitoring

To stop this, you need to capture detailed information from the servers so that you can correlate values between the network and server components. Monitoring the application layer within the server allows you to analyse layers 1 through 7, and the application processes themselves, for real performance monitoring.

Fault Management

Now that we have all the information about the devices and how they interact under normal conditions, we can start to look at error events, discover the root cause, and develop an impact analysis by mapping servers, applications, and their current state to a service overview.



All this information is useless unless you can communicate it to others, so flexible reports and dashboards that everyone can use are a must.

Network Configuration and Change Management

We have all the information about our devices and how they are deployed, so the final piece is how they are configured, and how that affects operations.

Network Configuration Management

The first step in configuration management is getting a complete backup of every device and storing that information in the database. You can then view any configuration file differences in a side-by-side comparison and restore any configuration should you notice a change.
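The comparison step is conceptually simple: diff the stored backup against the running configuration. A minimal sketch using Python's standard difflib (the device names and config snippets are made up for illustration):

```python
import difflib

def config_diff(saved, current, name="device"):
    """Return a unified diff between a stored backup and the live config."""
    return "".join(difflib.unified_diff(
        saved.splitlines(keepends=True),
        current.splitlines(keepends=True),
        fromfile=f"{name} (backup)", tofile=f"{name} (running)"))

saved = "hostname core-sw1\nsnmp-server community public\n"
current = "hostname core-sw1\nsnmp-server community s3cret\n"
print(config_diff(saved, current, "core-sw1"))
```

An empty diff means the device still matches its backup; any output is a change worth auditing or rolling back.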

Configuration Change Management and Auditing

You then need to control and audit changes on the devices: detect configuration changes in real time and ensure you always hold the latest configuration.

Configuration Policy Management

Once we have the current configuration of the devices we can then start to build policies.

NCCM – Vulnerability Management

Check the configuration against announcements from manufacturers, covering every hardware and software component, to highlight vulnerabilities.

NCCM – End of Life or Service Management

Devices past end of life can expose networks to risk, and knowing when devices will reach end of life is critical to asset planning for the future.

So, look for a single unified management system that lets you manage network performance, detect faults, and ensure all devices are configured properly. Ask about your options for moving to a next-generation performance management system.

Telus expands into Far North

Canadian full-service telecoms group Telus has announced the expansion of its services into Canada’s Far North regions for the first time. Customers in the Northwest Territories and Yukon will be able to sign up for Telus services starting 6 September, while further expansion will be announced in due course. Petron Communications, a long-time Telus authorized dealer serving rural communities in northern British Columbia, will open stores in Yellowknife and Whitehorse later in 2013. Telus’s CEO Darren Entwistle said: ‘Telus is tremendously proud to offer its full suite of wireless products and services to residents of the Northwest Territories and Yukon. For the very first time, starting 6 September, you will be able to purchase a smartphone or other device from Telus and activate a local Yellowknife or Whitehorse phone number.’

Thanks to TeleGeography for the article.

Net Optics Launches Updated VMware-Compatible Solutions that Optimize Monitoring and Security in Virtual and SDN Environments


The Phantom Virtualization Tap 3.0 supports ESXi 5.x at the kernel level and is fully integrated with vCenter 5.x. Net Optics is also launching Phantom HD 3.0, which promotes peak efficiency of monitoring tools.

Net Optics, Inc., the leading provider of Total Application and Network Visibility solutions, announces the general availability of two significant advances. Net Optics will be demonstrating the solutions at VMworld booth # 523.

Phantom Virtualization Tap™ Version 3.0 for ESXi 5.x

  • Extends kernel level monitoring to ESXi 5.0 and 5.1
  • Enables intelligent, continuous monitoring through deep VMware vCenter integration
  • Offers intuitive, click-through deployment and administration with a newly designed GUI and management console

Phantom HD™ Version 3.0

  • Enables high-density, high-throughput network monitoring in environments using advanced tunneling protocols
  • Performs traffic deduplication to ensure that tools inspect only a single copy of each relevant session
  • Decapsulates and strips protocols to allow optimal tool utilization—both physical and virtual

Phantom Virtualization Tap Version 3.0 reinforces Net Optics’ market leadership by enriching its breakthrough solution with kernel-level monitoring for vSphere 5.x. This highly anticipated capability offers highly efficient, non-disruptive, VMware-certified software for monitoring traffic in virtual environments. This solution is a major resource for customers running VMware ESXi 5.x who require visibility into East-West traffic between virtual servers. Phantom 3.0 addresses the growing “black hole,” which keeps inter-VM and cross-blade traffic invisible to network monitoring tools, leaving the network vulnerable to threats, non-compliance, loss of availability and impaired performance.

The Phantom Virtualization Tap aggregates traffic from multiple VMs and delivers raw network data to monitoring tools. “Our latest version strengthens control of the virtual environment substantially, helping customers stay secure and audit reliably for compliance,” says Ran Nahmias, Net Optics Senior Director, Virtualization & Cloud Solutions. “This open community Tap is v-switch and tool agnostic. It reinforces and extends the value of our customers’ physical tool investments while requiring no changes and creating no single point of failure.”

Phantom HD 3.0 incorporates advanced new functionality, performing packet management, tunnel decapsulation and network traffic management, all on a single device that addresses not only virtualized/converged environments but physical environments as well.

Phantom HD 3.0 aggregates inter-VM traffic that has been tunneled out of ESX hosts encapsulated in sophisticated new protocols. The drawback to those protocols is that they often make traffic invisible to monitoring tools—laying the network open to threats and intrusion. The purpose-built Phantom HD appliance swiftly decapsulates that traffic and sends it on in raw form to the tools, which can now perform their vital security functions unimpeded.

Phantom HD 3.0 also resolves a persistent concern of customers whose networks are virtualizing or whose architecture employs complex tunneling technologies—namely removal of duplicate traffic captured in various areas of the network. Phantom HD 3.0 deduplicates and reduces the costly “packet payload overhead” placed on these tools, optimizing their performance and value.
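The article does not describe how Phantom HD implements deduplication internally, but the general technique of hash-based packet dedup can be sketched as follows; the digest choice, window sizing, and eviction policy are all assumptions for illustration:

```python
import hashlib

class Deduplicator:
    """Suppress packets already seen, keyed by a digest of the packet bytes.

    A real appliance would age entries out by time; a bounded set with crude
    eviction stands in for that here.
    """
    def __init__(self, max_entries=100_000):
        self.seen = set()
        self.max_entries = max_entries

    def accept(self, packet: bytes) -> bool:
        """Return True if this is the first copy and should reach the tools."""
        digest = hashlib.sha1(packet).digest()
        if digest in self.seen:
            return False  # duplicate capture from another tap point — drop it
        if len(self.seen) >= self.max_entries:
            self.seen.clear()  # crude eviction; real devices expire entries
        self.seen.add(digest)
        return True

dedup = Deduplicator()
print(dedup.accept(b"pkt-1"))  # first copy, forwarded
print(dedup.accept(b"pkt-1"))  # duplicate, suppressed
```

Filtering this way before traffic reaches the analysis tools is what removes the "packet payload overhead" the paragraph describes.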

“This solution eliminates feeding customer tools duplicate captures by “de-duping” the traffic,” says Nahmias. “Now the tools are able to process only a single copy of that traffic of interest—optimizing and preserving a customer’s network and tooling resources.”

Both Phantom Virtualization Tap 3.0 and Phantom HD 3.0 are now available and shipping. Phantom HD ships as a physical or virtual appliance, deployable in all areas of the hybrid data center.

Thanks to Net Optics for the article.

The New Reality of the Application Landscape

Capacity analysis for your multi-tiered applications

Now, more than ever, IT shops need to deliver on the promise of business innovation and an exceptional end-user experience. The race to deliver this experience to customers is a never-ending marathon of who can be the first out of the gate while meeting the ever-demanding expectations of today’s “I want it now” user.

And not making it any easier is the fact that today’s business services have evolved into very complex environments. In the old world, applications ran within an enterprise, even within the same rack of servers. Today, the places where a single business service executes can span hardware platforms, hypervisors and geographical locations – leaving IT shops to wonder, “what went where, when, and what happened?”

What is needed is one end-to-end capacity planning solution that understands the scalability of your supportive infrastructure – from the mainframe to distributed and even into the cloud. You can’t be expected to deliver on service levels for an application that spans all these platforms without first understanding their current utilization levels as well as capacity limitations. This understanding provides you the insight needed to predict future behavior of that application when change occurs like, demand increases, hardware upgrades or new application rollouts.

You have to be able to do the following to support successful business initiatives:

  • Prevent performance problems caused by increased demand for IT services
  • Communicate to the business how business growth will impact infrastructure utilization
  • Determine infrastructure capacity required for business and service growth scenarios

However, doing capacity planning on a complex business service that spans the back end mainframe as well as the distributed environment can be a challenge. They are completely different animals and as such need to be understood and approached differently.

One capacity management challenge that many IT shops struggle with today is “When we see growth on our distributed tiers of our application environments – does it equate to the same growth on the back end mainframe?”

Well, find out the answer to that exact question in this video:


Thanks to Service Assurance Daily for the article.

Bell’s TV everywhere app expands mobile TV offering

Bell Canada has launched a Bell TV mobile application allowing customers to watch channels which they subscribe to in their Bell home programming package on tablets or smartphones, at no extra charge, reports Rapid TV News. The ‘TV everywhere’ app expands the channel line-up of the existing Bell Mobile TV service, enabling Bell Mobility customers to watch more than 100 live and on-demand channels over the operator’s mobile broadband network or Wi-Fi, with current Bell Mobile TV subscribers being automatically upgraded to the Bell TV app (with access to the full range of channels requiring subscription). The new app also offers a programming guide and search interface, while fibre-based Bell IPTV subscribers can stop and resume on-demand programming between their TV at home and a mobile device. The Bell TV app is available for Android, BlackBerry and iOS (Apple) devices.

Thanks to TeleGeography for the article.

Is Your Network Security Built on a House of Cards?


At this year’s RSA Conference in San Francisco, Net Optics conducted a survey of attendees to identify what is the state of today’s network security. The 497 respondents included a wide range of job responsibilities from Chief Security Officers to network and system administrators. Participants were asked about their security posture, the tools they use on the network, and the threats their networks face. We’ve identified a few key points from the collected data in the infographic below.

The Need for the Network Visibility Layer is Clear

One clear point that comes across is the need for greater visibility of the network in order to provide greater security. The common theme among respondents is that the lack of visibility is impairing their ability to deal with today’s security threats.

Security threats are on the rise, and awareness is growing.

Fast-Evolving Threats

  • 68% of respondents see increase in security threats.

Point of Failure

  • 67% see current security tools as a Point of Failure.

Lack of Visibility

  • 63% worry that limited visibility threatens their security

Do you have a solid SECURITY STRATEGY?

Get a winning hand with total visibility, end-to-end monitoring and swift problem detection. Defend your critical web resources and safeguard sensitive data with powerful Network Visibility Solutions.


  • A Network Packet Broker improves the security of your network by increasing your control over the data and packets that your security tools see. This control improves the efficiency of your security monitoring devices, ensuring their effectiveness and availability in dealing with today’s evolving threat landscape.


  • A Bypass Switch adds fail-open/fail-close connectivity to your inline network monitoring tools (e.g., an Intrusion Prevention System). Eliminating your security tool as a possible point of failure means less network downtime due to oversubscribed or incorrectly configured devices.


  • A Network Tap gives you the ability to connect your security monitoring tools (IDS/DLP) exactly where you need them to gain the visibility your tools require. 100% visibility of your data is critical to ensuring 100% security of your network.

Thanks to Net Optics for the article.

Verizon may not move for Canadian cellcos; Wind might bid for Mobilicity

Canadian newspaper The Globe & Mail reported yesterday that US giant Verizon was putting off its previously publicised potential acquisitions of second-tier Canadian cellcos Wind Mobile and Mobilicity while contemplating taking part in Ottawa’s January 2014 700MHz 4G wireless spectrum licence auction. The deadline for initial bids is 17 September 2013, and if Verizon does decide to participate, rules prevent it from continuing negotiations on acquisition deals until after the auction has finished. In response to the news, Wind Mobile’s CEO and chairman Anthony Lacavera reiterated a previously stated possibility that his company could make a takeover bid for Mobilicity – if it does not have to bid against Verizon. Lacavera said that the latest report of Verizon’s intentions means that Wind would consider buying Mobilicity to narrow down the competition in the 4G auction while also creating a stronger competitor. ‘It puts me in a position to move on Mobilicity before the auction,’ Lacavera stated, while adding: ‘I can’t outbid Verizon. So if they really want to buy it before the auction I’m not going to be successful. It’s just a mathematical reality.’

Thanks to TeleGeography for the article.

SDN security strategies for network attack prevention

Software-defined networking (SDN) technology pulls a network’s control plane into a dedicated SDN controller, which manages and mediates all functions and services on virtual and physical networks. Because of this separation and control, SDN security strategies offer a deeper level of granularity to packet analysis, network monitoring and traffic control that will go a long way in preventing network attacks.

The rise of software-defined monitoring

Microsoft recently revealed that it is internally using a homegrown OpenFlow-based network tap aggregation platform (dubbed Distributed Ethernet Monitoring, or DEMON). The tool is aimed at tackling the huge volume of traffic within Microsoft’s cloud network. Previously, the thousands of individual connections and flows were entirely too much for traditional taps and capture mechanisms like SPAN or mirror ports to handle.

By programming flexible switches and other network devices to act as packet interception and redirection platforms, security teams can potentially detect and mitigate a variety of attacks that are commonly seen today. Many industry sources are referring to SDN-driven security analysis as software-defined monitoring (SDM). In SDM, SDN switches can act as packet brokers and controllers can aid in monitoring and analysis.

Using SDN for security monitoring and packet analysis

To start with, relatively inexpensive commodity SDN-programmable switches from vendors like IBM, Juniper, HP and Arista Networks can be used to take the place of more expensive packet brokers. Similar to the Microsoft use case, large numbers of individual connections and flows can be aggregated and collectively sent to multiple security packet capture and analysis platforms. A first layer of switches could be used for capture and packet routing, while a second (and potentially third) layer would be used for terminating monitoring ports from the first layer. These switches could also potentially aggregate traffic and send flow and statistical data to other monitoring devices and platforms.

An OpenFlow-compatible (preferably sFlow-compatible as well) SDN controller, such as the Big Switch Controller, can be used to program and manage multiple SDN-compatible switches. Meanwhile, security monitoring overlay software products like Big Switch’s Big Tap, enable engineers to program more granular filtering and port assignment capabilities to emulate traditional tap functionality in the SDN switches.

Within this context, multiple layers of packet analysis tools can receive traffic from the SDM ports. The SDM ports can serve hardware tools, such as packet brokers and network forensics devices, or software-based protocol analyzers, such as Wireshark.

How SDN security strategies tackle network attack prevention

SDN offers a new level of network visibility, even in the most complex environments. As a result, controllers and switches are able to identify various packet attributes, allowing automated blocking or offloading of traffic in Denial of Service (DoS) attacks, for example. Indeed, SDN can counter a number of attacks, including:

  1. Volumetric attacks, such as SYN floods: These attacks consist of huge quantities of TCP packets with only the SYN flags set. This can clog bandwidth, and also fill up connection queues on particular systems that may be targeted. SDN-programmed switches may be able to act as a first line of defense in identification of particular patterns and thresholds of packet volume from a single source or multiple sources within a particular timeframe. These switches can then drop the traffic or redirect it using other techniques and protocols. Most routers and other network platforms lack this level of granular control.
  2. Application and service-specific attacks: These attacks target Web services with very particular series of HTTP requests (using specific user-agent strings with specific cookie variables and the like). SDN devices can identify, log and discard these requests.
  3. DDoS attacks targeting protocol behavior: These attacks fill network device state tables, but SDN devices can identify this behavior based on flow timing and connection limits.
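The first-line volumetric check described in item 1 could be sketched as follows. The flow-record format and the threshold are assumptions for illustration; in practice these counters would come from the SDN controller's flow statistics, and a flagged source would be translated into a drop or redirect rule pushed to the switches:

```python
from collections import Counter

def syn_flood_sources(flow_records, threshold=100):
    """Flag source IPs sending an abnormal number of SYN-only packets
    within one polling interval. Each record is (src_ip, tcp_flags)."""
    syn_counts = Counter(
        src for src, flags in flow_records
        if flags == "SYN"  # SYN set with no ACK: a half-open connection attempt
    )
    return {src for src, n in syn_counts.items() if n > threshold}

# One polling interval: one host blasting SYNs, another behaving normally.
records = [("10.0.0.5", "SYN")] * 500 + [("10.0.0.7", "SYN-ACK")] * 50
print(syn_flood_sources(records))  # {'10.0.0.5'} — candidate for a drop rule
```

The per-interval, per-source granularity is exactly what most routers lack and what makes SDN-programmed switches attractive as a first line of defense.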

Additionally, SDN can emulate many basic firewall functions. Controllers can execute scripts and commands that can quickly update MAC and IP address and port filtering, allowing for rapid response and updates to traffic policies and rules. This also frees up other network devices from handling large quantities of traffic.

This just begins to scratch the surface of SDN security capabilities. With the ability to handle much larger quantities of traffic and home in on specific packet attributes, network security analysts can do much more than basic packet filtering and DDoS detection. More advanced intrusion detection and incident response use cases are not only possible, but also likely.

Thanks to SearchSDN for the article.

Verizon delays acquisitions of Canadian wireless firms: report

Verizon Communications Inc has decided to put off the acquisition of two small Canadian wireless companies until after a government auction of wireless licenses in January, a Canadian newspaper said, citing people familiar with the matter.

The U.S. telecommunications firm is instead focusing on whether to take part in a key Canadian spectrum auction, which has a September 17 deadline for applications, the Globe and Mail said. Potential bidders in the auction are barred from negotiating any deals with other bidders until next year.

The newspaper said it was not clear what prompted the change in strategy, and whether this signaled Verizon was less enamored of entering the Canadian market or whether it wanted to drive down the prices for the firms Wind Mobile and Mobilicity.

Verizon had tabled a $700 million preliminary offer for Wind Mobile and signed a non-disclosure agreement with Mobilicity in recent months, it said.

This month Rogers Communications Inc , Canada’s largest wireless company, attempted to thwart Verizon’s entry into the country by backing a private equity bid for the two carriers.

In July, Reuters reported that Mobilicity, legally known as Data & Audio-Visual Enterprises Holdings Inc, was in talks with Verizon, among others, in connection with a potential acquisition.

Representatives for Verizon, Wind Mobile and Mobilicity could not be reached for comment outside normal business hours.

Thanks to Yahoo for the article.