Solving Videoconferencing’s Top 3 Challenges


We asked 170 network professionals in our most recent State of the Network study about the unified communications applications their organizations had deployed, and 62 percent indicated videoconferencing. It’s safe to say video is now mainstream, and with widespread adoption, the real-time nature of video brings real-time consequences for users and network teams when performance degrades. Ensuring a positive user experience means understanding the types of video being deployed, the top 3 real-time challenges of managing video, and the available solutions.

Digging into Video Deployments


Top 3 Video Challenges


Deployments are clearly most pervasive in desktop rollouts and standard videoconferencing. More elaborate, experiential forms of video held steady, with about one in five organizations having deployed telepresence solutions. These numbers are driven by organizations’ need to rein in travel budgets and to foster greater and faster collaboration between employees.

In last year’s State of the Network, the lack of user knowledge and training was cited as the primary challenge in managing videoconferencing. However, as the technology has matured, problems more commonly associated with the network management side of video have risen to the top in 2013. What sets these network-based challenges apart from those of other applications is the real-time nature of this UC service: even minimal quality issues can be incredibly disruptive. Essentially, ensuring successful videoconferencing means your network team needs to bring its A-game.

1) Implementing and Measuring QoS

A significant difference between VoIP and videoconferencing is the amount of traffic generated. This means that network Quality of Service (QoS) class definitions and bandwidth allocations must be reevaluated before deploying videoconferencing.

Organizations often find that setting aside 10 percent of bandwidth for VoIP is sufficient, but accommodating even moderate rates of concurrent videoconferencing sessions can require 30 percent or more. The potential implications extend well beyond bandwidth consumption: giving latency-sensitive video traffic higher precedence increases contention among other applications for the remaining network resources, so poor planning can degrade not only video quality but other applications as well.
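
As a rough, back-of-the-envelope illustration of that math, the sketch below estimates the share of a link consumed by concurrent video sessions; the per-stream bitrate and overhead figures are assumptions, not vendor numbers.

```python
# Rough estimate of the WAN share needed for concurrent video sessions.
# The per-stream bitrate and overhead multiplier are illustrative
# assumptions -- substitute your codec's and network's actual figures.

def video_share(link_mbps, concurrent_sessions, per_stream_mbps=1.2, overhead=1.2):
    """Return the fraction of the link consumed by video.

    per_stream_mbps: assumed audio+video bitrate per session
    overhead: headroom multiplier for RTP/IP overhead and bursts
    """
    required = concurrent_sessions * per_stream_mbps * overhead
    return required / link_mbps

# Example: 20 concurrent calls on a 100 Mbps link
share = video_share(link_mbps=100, concurrent_sessions=20)
print(f"Video would need about {share:.0%} of the link")  # ~29%
```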

2) Allocating and Monitoring Bandwidth

Successfully monitoring and allocating enough bandwidth for videoconferencing has both an immediate and a long-term component. To ensure short-term success, verify that your monitoring solutions support the UC vendor platforms on the network. Evaluate the bandwidth consumed by videoconferencing, both for the entire organization and per user. In addition, because of the real-time nature of video, assess whether sudden spikes from other applications might be impacting video quality.

In the long term, utilizing behavior-analysis features such as baselining to identify changes in video application behavior is critical for spotting performance-degradation trends before they impact users. Additionally, understanding how much video bandwidth is consumed per user, and how that figure is changing, allows you to anticipate future bandwidth needs.
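
As a minimal sketch of the baselining idea (assuming you already export per-interval bandwidth samples from your monitoring platform), the snippet below compares each new sample to a rolling baseline and flags large deviations; the window size and deviation threshold are arbitrary choices.

```python
from statistics import mean, stdev

def flag_deviations(samples, window=24, sigmas=3.0):
    """Compare each new sample to a rolling baseline of the previous
    `window` samples and yield (index, value) when it deviates by more
    than `sigmas` standard deviations."""
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        baseline, spread = mean(history), stdev(history)
        if spread and abs(samples[i] - baseline) > sigmas * spread:
            yield i, samples[i]

# Hourly video Mbps samples exported from your monitoring platform (illustrative).
hourly_mbps = [22, 24, 23, 25, 24, 23, 22, 24, 25, 23, 24, 22,
               23, 25, 24, 23, 22, 24, 23, 25, 24, 23, 22, 24, 41]
for idx, value in flag_deviations(hourly_mbps):
    print(f"sample {idx}: {value} Mbps deviates from baseline")
```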

3) Lack of Monitoring Tools and Metrics

A mix of old and new metrics can be used to assess video delivery and quality. IT teams typically rely on latency, packet loss, and jitter as indicators of the network’s ability to support quality video. Specific to videoconferencing are metrics designed to reflect aggregate audio/video experiential quality, such as Video MOS (V-MOS). While not based on an industry standard (as is the MOS score used in VoIP monitoring), it can be of great value if applied consistently to video traffic.
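
For the network-level metrics, jitter is typically estimated from packet timing. The sketch below shows the RFC 3550 interarrival jitter estimator commonly used for RTP media; the timestamps are hypothetical and assumed to be in the same units.

```python
def rfc3550_jitter(send_times, recv_times):
    """Running interarrival jitter estimate per RFC 3550 (section 6.4.1).

    send_times / recv_times: per-packet timestamps in the same units
    (e.g., milliseconds). Returns the final smoothed jitter value.
    """
    jitter = 0.0
    for i in range(1, len(send_times)):
        # Transit-time difference between consecutive packets.
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        jitter += (abs(d) - jitter) / 16.0
    return jitter

# Hypothetical timestamps (ms) pulled from a capture.
send = [0, 20, 40, 60, 80]
recv = [50, 71, 93, 112, 131]
print(f"jitter estimate: {rfc3550_jitter(send, recv):.2f} ms")
```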

The second question raised by this challenge is, “can I use my existing tools to monitor videoconferencing, or do I need to purchase a new solution?” Observer users are in luck, as the platform provides extensive multi-vendor videoconferencing support and analysis. If you use other monitoring solutions, validate their ability to provide in-depth videoconferencing delivery and quality metrics, expert analytics for UC applications, multi-vendor support, and the ability to view videoconferencing traffic alongside other applications.

By implementing correct QoS policies, assessing your monitoring solution’s support of videoconferencing applications, utilizing video quality metrics, and actively tracking bandwidth use, you can feel confident in your ability to actively meet user expectations with smooth video calls. For additional details on how to ensure quality video on your network, check out the following resources:

Thanks to Network Instruments for the article.


Turn IT Chaos into Order

The three key areas in managing an enterprise network today are Performance Management, Fault Management and Configuration Management. To meet these requirements, many departments use three different tools, which creates a silo effect: each department uses its own tool, and there is no integration between them. With tool consolidation becoming a key requirement, look for a solution with an integrated architecture and a single user interface that gives you visibility into all three of these areas.

Network Discovery


When you start to automate your network monitoring, the first key step is to understand what is in the network and how it is being used. A discovery needs to do more than list the devices you have; it also needs to show how those devices interact with each other.
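
As a very small illustration of that first step, the sketch below finds which addresses on a subnet answer a ping, using only the Python standard library (Linux-style ping flags, placeholder subnet); a real discovery would follow up with SNMP, CDP/LLDP and routing-table queries to learn how the responding devices interconnect.

```python
import ipaddress
import subprocess

def ping_sweep(cidr):
    """Return the hosts in `cidr` that answer a single ping.

    Uses Linux-style ping flags (-c count, -W timeout in seconds).
    A real discovery would follow up with SNMP/LLDP queries to learn
    how the responding devices are connected to each other."""
    alive = []
    for host in ipaddress.ip_network(cidr).hosts():
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", str(host)],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if result.returncode == 0:
            alive.append(str(host))
    return alive

print(ping_sweep("192.0.2.0/28"))  # placeholder test subnet
```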

Performance Management

Once you have a picture of what is in the network, you can start to understand how it is working. Performance management is no longer only about how packets are moved around or about utilization; it also includes servers, applications and business processes.

To see how the systems and services are performing, you need the most accurate information from the environment. Collect data over several protocols to ensure the widest possible source of information for each metric. You can supplement this with active testing to understand the performance between devices, and integrating this with NetFlow information will allow you to predict when certain thresholds will be reached.
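
As a minimal sketch of that threshold prediction, the snippet below fits a simple linear trend to evenly spaced utilization samples and estimates how many periods remain before a limit is crossed; real products enrich this with NetFlow detail and seasonality, and the figures here are illustrative.

```python
def periods_until_threshold(samples, threshold):
    """Fit a least-squares line to evenly spaced utilization samples and
    return how many more periods until the trend crosses `threshold`
    (None if the trend is flat or falling)."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope_num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
    slope_den = sum((x - x_mean) ** 2 for x in xs)
    slope = slope_num / slope_den
    if slope <= 0:
        return None
    intercept = y_mean - slope * x_mean
    crossing = (threshold - intercept) / slope
    return max(0.0, crossing - (n - 1))

# Weekly link utilization (%) -- illustrative numbers.
weekly_util = [41, 43, 44, 47, 49, 52, 54]
print(periods_until_threshold(weekly_util, threshold=80))  # approx. weeks of headroom
```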

Many enterprise IT organizations still treat servers, applications and the network as separate domains, managed by separate teams with separate management tools. Because these systems are kept apart, when issues arise we spend more time defending our turf than finding the solution.

Server Monitoring

To stop this, you need to capture detailed information from the servers so that you can correlate values between the network and server components. Monitoring the application layer within the server allows you to analyze layers 1 through 7 and the application processes themselves for real performance monitoring.
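
As one way to start, the sketch below collects a few server-side counters alongside a timestamp so they can be lined up against network measurements taken over the same window; it assumes the third-party psutil package is installed.

```python
import time
import psutil  # third-party; assumed available (pip install psutil)

def sample_server(interval=5):
    """Collect a few server-side counters with a timestamp so they can be
    correlated with network measurements taken over the same window."""
    net_before = psutil.net_io_counters()
    cpu_pct = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
    net_after = psutil.net_io_counters()
    return {
        "timestamp": time.time(),
        "cpu_percent": cpu_pct,
        "mem_percent": psutil.virtual_memory().percent,
        "tx_bytes": net_after.bytes_sent - net_before.bytes_sent,
        "rx_bytes": net_after.bytes_recv - net_before.bytes_recv,
    }

print(sample_server())
```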

Fault management

Now that we have all the information about the devices and how they interact under normal conditions, we can start to look at error events, discover the root cause, and develop an impact analysis by mapping servers, applications and their current state to a service overview.
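
A simple way to picture the impact-analysis step is a dependency map from services to the devices they rely on; when a device raises a fault, walking the map lists the affected services. The service map in the sketch below is entirely hypothetical.

```python
# Hypothetical service-to-infrastructure dependency map.
SERVICE_MAP = {
    "web-portal": {"sw-core-1", "fw-edge-1", "srv-web-01", "srv-db-01"},
    "email":      {"sw-core-1", "srv-mail-01"},
    "payroll":    {"sw-core-2", "srv-erp-01", "srv-db-01"},
}

def impacted_services(failed_devices):
    """Return service -> set of failed dependencies, as a starting point
    for impact analysis and root-cause isolation."""
    failed = set(failed_devices)
    return {
        service: deps & failed
        for service, deps in SERVICE_MAP.items()
        if deps & failed
    }

# Example: a core switch and a database server raise fault events.
print(impacted_services(["sw-core-1", "srv-db-01"]))
# -> web-portal, email and payroll are all affected to some degree
```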


Reporting

All this information is useless unless you can communicate it to others, so flexible reports and dashboards that everyone can use are a must.

Network Configuration and Change Management

We have all the information about our devices and how they are deployed, so the final piece is how they are configured, and how that affects operations.

Network Configuration Management

The first step in configuration management is getting a complete backup of every device and storing that information in a database. You can then view configuration differences in a side-by-side comparison and restore any configuration should you notice an unwanted change.
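
As a minimal sketch of the comparison step, the snippet below uses the standard library’s difflib to diff a stored backup against a freshly retrieved configuration; the file names are placeholders for exports from your backup store.

```python
import difflib

def config_diff(stored_path, current_path):
    """Return a unified diff between the stored backup and the current
    running configuration (an empty string means no change)."""
    with open(stored_path) as f:
        stored = f.readlines()
    with open(current_path) as f:
        current = f.readlines()
    return "".join(difflib.unified_diff(
        stored, current, fromfile="backup", tofile="running-config"))

# Hypothetical file names -- substitute paths from your backup store.
delta = config_diff("router1_backup.cfg", "router1_running.cfg")
if delta:
    print("Configuration drift detected:\n" + delta)
```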

Configuration Change Management and Auditing

You then need to control and audit changes on your devices: you must be able to detect configuration changes in real time and ensure that the latest configuration is always maintained.

Configuration Policy Management

Once we have the current configuration of each device, we can start to build policies against it.
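
One simple way to express such policies is as patterns that must (or must not) appear in a device configuration, as in the illustrative sketch below; the rules and file name are examples only.

```python
import re

# Illustrative policy rules: (description, pattern, must_be_present).
POLICIES = [
    ("SNMP community 'public' forbidden", r"snmp-server community public", False),
    ("SSH must be enabled for remote access", r"transport input ssh", True),
    ("Console sessions must time out", r"exec-timeout \d+", True),
]

def check_policies(config_text):
    """Return a list of (description, passed) for each policy rule."""
    results = []
    for description, pattern, must_be_present in POLICIES:
        found = re.search(pattern, config_text) is not None
        results.append((description, found == must_be_present))
    return results

with open("router1_running.cfg") as f:  # hypothetical config export
    for rule, passed in check_policies(f.read()):
        print(("PASS" if passed else "FAIL"), rule)
```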

NCCM – Vulnerability Management

You need to be able to check against announcements from manufacturers, and to check every component of the configuration, hardware and software, to highlight vulnerabilities.

NCCM – End of Life or Service Management

Devices that have reached end of life or end of service can expose networks to risk, and knowing when each device will reach end of life is critical to asset planning for the future.
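
As a tiny illustration, the sketch below flags devices approaching end of life, assuming the vendor-announced dates are kept in your inventory; the devices and dates shown are made up.

```python
from datetime import date

# Hypothetical inventory with vendor-announced end-of-life dates.
INVENTORY = {
    "sw-access-14": date(2014, 3, 31),
    "rtr-branch-02": date(2016, 9, 30),
}

def eol_report(today=None, warn_days=365):
    """Print devices whose end-of-life date falls within `warn_days`."""
    today = today or date.today()
    for device, eol in sorted(INVENTORY.items(), key=lambda kv: kv[1]):
        remaining = (eol - today).days
        if remaining <= warn_days:
            print(f"{device}: {remaining} days until end of life ({eol})")

eol_report(today=date(2013, 8, 1))
```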

So, look for a single unified management system that lets you manage the performance of the network, detect faults, and ensure all devices are configured properly. Ask about your options for moving to a next-generation performance management system.

Telus expands into Far North

Canadian full-service telecoms group Telus has announced the expansion of its services into Canada’s Far North regions for the first time. Customers in the Northwest Territories and Yukon will be able to sign up for Telus services starting 6 September, while further expansion will be announced in due course. Petron Communications, a long-time Telus authorized dealer serving rural communities in northern British Columbia, will open stores in Yellowknife and Whitehorse later in 2013. Telus’s CEO Darren Entwistle said: ‘Telus is tremendously proud to offer its full suite of wireless products and services to residents of the Northwest Territories and Yukon. For the very first time, starting 6 September, you will be able to purchase a smartphone or other device from Telus and activate a local Yellowknife or Whitehorse phone number.’

Thanks to TeleGeography for the article.

Net Optics Launches Updated VMware-Compatible Solutions that Optimize Monitoring and Security in Virtual and SDN Environments


The Phantom Virtualization Tap 3.0 supports ESXi 5.x at the kernel level and is fully integrated with vCenter 5.x. Net Optics is also launching Phantom HD 3.0, which promotes peak efficiency of monitoring tools.

Net Optics, Inc., the leading provider of Total Application and Network Visibility solutions, announces the general availability of two significant product advances. Net Optics will be demonstrating the solutions at VMworld booth #523.

Phantom Virtualization Tap™ Version 3.0 for ESXi 5.x

  • Extends kernel level monitoring to ESXi 5.0 and 5.1
  • Enables intelligent, continuous monitoring through deep VMware vCenter integration
  • Offers intuitive, click-through deployment and administration with a newly designed GUI and management console

Phantom HD™ Version 3.0

  • Enables high-density, high-throughput network monitoring in environments using advanced tunneling protocols
  • Performs traffic deduplication to ensure that tools inspect only a single copy of each relevant session
  • Decapsulates and strips protocols to allow optimal tool utilization—both physical and virtual

Phantom Virtualization Tap Version 3.0 reinforces Net Optics’ market leadership by enriching its breakthrough solution with kernel-level monitoring for vSphere 5.x. This highly anticipated capability offers highly efficient, non-disruptive, VMware-certified software for monitoring traffic in virtual environments. This solution is a major resource for customers running VMware ESXi 5.x who require visibility into East-West traffic between virtual servers. Phantom 3.0 addresses the growing “black hole,” which keeps inter-VM and cross-blade traffic invisible to network monitoring tools, leaving the network vulnerable to threats, non-compliance, loss of availability and impaired performance.

The Phantom Virtualization Tap aggregates traffic from multiple VMs and delivers raw network data to monitoring tools. “Our latest version strengthens control of the virtual environment substantially, helping customers stay secure and audit reliably for compliance,” says Ran Nahmias, Net Optics Senior Director, Virtualization & Cloud Solutions. “This open community Tap is v-switch and tool agnostic. It reinforces and extends the value of our customers’ physical tool investments while requiring no changes and creating no single point of failure.”

Phantom HD 3.0 incorporates advanced new functionality, performing packet management, tunnel decapsulation and network traffic management, all on a single device that addresses not only virtualized/converged environments but physical environments as well.

Phantom HD 3.0 aggregates inter-VM traffic that has been tunneled out of ESX hosts encapsulated in sophisticated new protocols. The drawback to those protocols is that they often make traffic invisible to monitoring tools—laying the network open to threats and intrusion. The purpose-built Phantom HD appliance swiftly decapsulates that traffic and sends it on in raw form to the tools, which can now perform their vital security functions unimpeded.

Phantom HD 3.0 also resolves a persistent concern of customers whose networks are virtualizing or whose architecture employs complex tunneling technologies—namely removal of duplicate traffic captured in various areas of the network. Phantom HD 3.0 deduplicates and reduces the costly “packet payload overhead” placed on these tools, optimizing their performance and value.

“This solution eliminates feeding customer tools duplicate captures by ‘de-duping’ the traffic,” says Nahmias. “Now the tools are able to process only a single copy of that traffic of interest—optimizing and preserving a customer’s network and tooling resources.”
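
For readers unfamiliar with the technique, the sketch below illustrates the general idea of packet deduplication: hash each packet and suppress copies seen within a short window. It is a conceptual illustration only, not Net Optics’ implementation, which runs at line rate and typically ignores volatile header fields.

```python
import hashlib
import time

class Deduplicator:
    """Suppress duplicate packets seen within a short time window.
    Conceptual illustration only -- a real packet broker performs this at
    line rate and usually excludes volatile fields (TTL, checksums)."""

    def __init__(self, window_seconds=0.05):
        self.window = window_seconds
        self.seen = {}  # packet digest -> last-seen timestamp

    def is_duplicate(self, packet_bytes, now=None):
        now = time.monotonic() if now is None else now
        digest = hashlib.sha1(packet_bytes).digest()
        last = self.seen.get(digest)
        self.seen[digest] = now
        return last is not None and (now - last) <= self.window

dedup = Deduplicator()
print(dedup.is_duplicate(b"same packet", now=0.00))  # False -- first copy
print(dedup.is_duplicate(b"same packet", now=0.01))  # True  -- duplicate capture
```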

Both Phantom Virtualization Tap 3.0 and Phantom HD 3.0 are now available and shipping. Phantom HD ships as a physical or virtual appliance, deployable in all areas of the hybrid data center.

Thanks to Net Optics for the article.

The New Reality of the Application Landscape

Capacity analysis for your multi-tiered applications

Now, more than ever, IT shops need to deliver on the promise of business innovation and an exceptional end-user experience. The race to deliver this experience to customers is a never-ending marathon of who can be the first out of the gate while meeting the ever-demanding expectations of today’s “I want it now” user.

And not making it any easier is the fact that today’s business services have evolved into very complex environments. In the old world, applications ran within the enterprise, often within the same rack of servers. Today, a single business service can span hardware platforms, hypervisors and geographic locations, leaving IT shops to wonder, “What went where, when, and what happened?”

What is needed is one end-to-end capacity planning solution that understands the scalability of your supporting infrastructure, from the mainframe to distributed systems and even into the cloud. You can’t be expected to deliver on service levels for an application that spans all these platforms without first understanding their current utilization levels as well as their capacity limitations. This understanding provides the insight needed to predict the future behavior of that application when changes occur, such as demand increases, hardware upgrades or new application rollouts.
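
As a simple illustration of a growth scenario, the sketch below projects utilization forward under an assumed compound growth rate and reports how many quarters of headroom each tier has; the tiers, utilization levels and growth rate are illustrative assumptions, not measured data.

```python
def quarters_until_exhausted(current_util_pct, quarterly_growth_pct, ceiling_pct=80):
    """Return how many quarters until utilization crosses `ceiling_pct`
    under compound growth (None if already over or growth is not positive)."""
    if current_util_pct >= ceiling_pct or quarterly_growth_pct <= 0:
        return None
    quarters, util = 0, current_util_pct
    while util < ceiling_pct:
        util *= 1 + quarterly_growth_pct / 100
        quarters += 1
    return quarters

# Illustrative multi-tier view of one business service (utilization in %).
tiers = {"web tier": 35.0, "app tier": 52.0, "database": 61.0, "mainframe LPAR": 68.0}
for tier, util in tiers.items():
    q = quarters_until_exhausted(util, quarterly_growth_pct=8)
    print(f"{tier}: about {q} quarters of headroom at 8% growth per quarter")
```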

You have to be able to do the following to support successful business initiatives:

  • Prevent performance problems caused by increased demand for IT services
  • Communicate to the business how business growth will impact infrastructure utilization
  • Determine infrastructure capacity required for business and service growth scenarios

However, doing capacity planning on a complex business service that spans the back end mainframe as well as the distributed environment can be a challenge. They are completely different animals and as such need to be understood and approached differently.

One capacity management challenge that many IT shops struggle with today is, “When we see growth on the distributed tiers of our application environments, does it equate to the same growth on the back-end mainframe?”

Well, find out the answer to that exact question in this video:

capacity-planning-video

Thanks to Service Assurance Daily for the article.

Bell’s TV everywhere app expands mobile TV offering

Bell Canada has launched a Bell TV mobile application allowing customers to watch the channels they subscribe to in their Bell home programming package on tablets or smartphones, at no extra charge, reports Rapid TV News. The ‘TV everywhere’ app expands the channel line-up of the existing Bell Mobile TV service, enabling Bell Mobility customers to watch more than 100 live and on-demand channels over the operator’s mobile broadband network or Wi-Fi. Current Bell Mobile TV subscribers are being automatically upgraded to the Bell TV app (with access to the full range of channels requiring a subscription). The new app also offers a programming guide and search interface, while fibre-based Bell IPTV subscribers can stop and resume on-demand programming between their TV at home and a mobile device. The Bell TV app is available for Android, BlackBerry and iOS (Apple) devices.

Thanks to TeleGeography for the article.

Is Your Network Security Built on a House of Cards?


At this year’s RSA Conference in San Francisco, Net Optics conducted a survey of attendees to gauge the state of today’s network security. The 497 respondents held a wide range of job responsibilities, from Chief Security Officers to network and system administrators. Participants were asked about their security posture, the tools they use on the network, and the threats their networks face. We’ve identified a few key points from the collected data in the infographic below.

The Need for the Network Visibility Layer is Clear

One clear point that comes across is the need for greater visibility of the network in order to provide greater security. The common theme among respondents is that the lack of visibility is impairing their ability to deal with today’s security threats.

Security threats are on the rise, and awareness is growing.

Fast-Evolving Threats

  • 68% of respondents see increase in security threats.

Point of Failure

  • 67% see current security tools as a Point of Failure.

Lack of Visibility

  • 63% worry that limited visibility threatens their security.

Do you have a solid SECURITY STRATEGY?

Get a winning hand with total visibility, end-to-end monitoring and swift problem detection. Defend your critical web resources and safeguard sensitive data with powerful Network Visibility Solutions.

      1. DEFENDING AGAINST FAST-EVOLVING THREATS

  • A Network Packet Broker improves the security of your network by increasing your control over the data and packets that your security tools see. This control improves the efficiency of your security monitoring devices, ensuring their effectiveness and availability in dealing with today’s evolving threat landscape.

      2.  ELIMINATE POINTS OF FAILURE CREATED BY YOUR INLINE NETWORK SECURITY TOOLS

  • A Bypass Switch adds fail-open/fail-close connectivity to your inline network security tools (e.g., an Intrusion Prevention System). Eliminating your security tool as a possible point of failure means less network downtime due to oversubscribed or incorrectly configured devices.

      3.  INCREASING NETWORK VISIBILITY FOR NETWORK SECURITY MONITORING

  • A Network Tap gives you the ability to connect your security monitoring tools (IDS/DLP) exactly where they are needed to gain the visibility those tools require. 100% visibility into your data is critical to ensuring 100% security of your network.

Thanks to Net Optics for the article.