How To Ensure That Your VoIP Setup Delivers ROI


The principles of managing VoIP performance

The use of VoIP in the corporate world is growing rapidly, driven by a combination of increasingly mature technology and a desire to reduce costs.

A single network infrastructure should enable organizations to reduce capital expenditure and create a more homogeneous environment that is easier to maintain, monitor and manage.

However, using the network to transport voice as well as data naturally reduces the amount of traffic it can support.

Moving to VoIP gives excellent results when executed properly, but requires careful planning and the right tools to avoid poor performance and reduced efficiency. If call quality is poor, users simply won’t use it.

Planning a VoIP Implementation

There are two key points to consider when planning a VoIP implementation: the increasing capacity demands, and the nature of packetized voice traffic, which affects both voice quality and bandwidth use.

All packets are subject to latency, jitter and loss as they traverse the network. Data applications typically use TCP, which is connection-oriented. If a packet is delayed, or its receipt is not acknowledged, the protocol times out and resends the data, so these events go unnoticed or have minimal impact.

In contrast, VoIP utilizes UDP, which is inherently connectionless. If a packet is lost, or delivery is taking too long, the sender has no mechanism to resend or adjust the rate at which data is sent.

Packet loss of more than 5% will start to affect voice quality. Taken together, latency, jitter and packet loss can have a devastating effect on call quality, rendering conversations unintelligible.
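
Jitter, in particular, is usually reported not as raw delay variation but as a smoothed estimate. As an illustration, here is a minimal sketch of the interarrival jitter estimator defined in RFC 3550 (the RTP specification); the packet timings are hypothetical:

```python
# Interarrival jitter per RFC 3550: a smoothed estimate of delay variation,
# updated for each packet as J = J + (|D| - J) / 16, where D is the change
# in transit time between consecutive packets.

def rtp_jitter(send_times, recv_times):
    """Return the final jitter estimate (same time units as the inputs)."""
    jitter = 0.0
    prev_transit = None
    for sent, received in zip(send_times, recv_times):
        transit = received - sent
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

# A perfectly steady stream (constant transit time) shows zero jitter:
send = [i * 20.0 for i in range(10)]   # packets sent every 20 ms
recv = [t + 5.0 for t in send]         # constant 5 ms transit
print(rtp_jitter(send, recv))          # 0.0
```

The 1/16 gain means a single delayed packet nudges the estimate rather than dominating it, which is why monitoring tools can report a stable jitter figure for a call.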

VoIP’s inability to adjust for network conditions also means that it uses whatever bandwidth is available. TCP can and will adjust, so if VoIP is using a large proportion of the bandwidth, TCP traffic will see low availability and applications will slow down.

Adding a significant number of VoIP users can impact utilization of network segments, reducing both voice call quality and the speed of standard TCP applications.
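
The bandwidth arithmetic behind that impact is straightforward. As a back-of-the-envelope sketch for G.711 with 20 ms packetization, each direction of a call carries 50 packets per second of 160 payload bytes plus roughly 40 bytes of IP/UDP/RTP headers (the 25% reservation for data traffic below is an illustrative assumption, and Layer 2 overhead is ignored):

```python
# Per-call VoIP bandwidth and a rough capacity estimate for a link.

def voip_call_bandwidth_kbps(payload_bytes=160, header_bytes=40, pps=50):
    """Bandwidth per call, per direction, in kbps (G.711 defaults)."""
    return (payload_bytes + header_bytes) * 8 * pps / 1000.0

def max_calls(link_mbps, reserved_fraction=0.25):
    """Calls that fit if a fraction of the link is held back for data."""
    usable_kbps = link_mbps * 1000 * (1 - reserved_fraction)
    return int(usable_kbps // voip_call_bandwidth_kbps())

print(voip_call_bandwidth_kbps())  # 80.0 kbps per direction
print(max_calls(10))               # 93 calls on a 10 Mb link
```

Even this crude model makes the planning point: a few dozen simultaneous calls can consume a meaningful share of a branch-office link.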

Quality of Service (QoS)

Solving this and preserving the integrity of voice calls requires two different classes of service. Most networks use QoS technologies to protect and prioritize VoIP traffic by tagging it at the device level with a queue marker (a Differentiated Services Code Point, or DSCP) and setting parameters for how devices in the network treat it.

VoIP traffic is usually given top priority in being forwarded through each device, along with some type of rate limit to ensure data applications continue to perform at the levels users expect.
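
On the endpoint side, the DSCP marking itself is just a socket option. A minimal sketch, assuming a Linux host; the devices along the path must still be configured to honor the marking, and some access switches overwrite it:

```python
import socket

# DSCP EF (Expedited Forwarding, value 46) is the class conventionally used
# for voice. It occupies the upper six bits of the IP TOS byte, so the byte
# written to the socket is 46 << 2 = 0xB8.
DSCP_EF = 46
tos = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Datagrams sent from this socket now carry the EF marking, which QoS-aware
# devices can queue ahead of best-effort TCP traffic.
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))  # 0xb8
sock.close()
```

This is the same mark a hardphone or softphone applies; the QoS policy in the switches and routers decides what the mark actually buys.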

Engineers can also analyze management information to help them adjust the interaction of VoIP with the infrastructure.

They can identify issues such as a lack of bandwidth in specific sectors, determine whether applications such as file sharing or streaming media are impeding VoIP performance, and decide how traffic should be shaped to prioritize the most business-critical applications.

Thanks to TechRadar for the article.

Ixia Brings Security, Availability, and Performance to the InteropNet

Ixia’s Net Optics solutions are helping to power the world’s largest private cloud for the 16th year running

Spyke, Flex Taps, and xStream 10 support Interop’s network monitoring with total visibility

The InteropNet is a temporary cloud that is also the world’s largest and most sophisticated private, converged network. Interop attendees rely on the InteropNet to help incubate partnerships, seed new resources, and attract venture capital. Since exhibitors are debuting their most promising advances here, the security and availability of the InteropNet must be consistently flawless.

The InteropNet, which deploys only during the show itself, uses the latest offerings of the world’s most innovative vendors to showcase new capabilities in design, deployment, and multi-vendor management.

The NOC is the foundation and nerve center of the InteropNet. Ixia has participated in the InteropNet NOC from the company’s earliest years. This year, Ixia contributed three major solutions to meet the InteropNet’s most demanding challenges yet.

“Spyke delivers visibility as part of the architecture. We don’t merely look at traffic to copy it, but to actually analyze how that data is being used,” said an Interop network engineer.

Spyke enables superior troubleshooting, analysis, and issue resolution

Spyke™ helps InteropNet provide a seamless user experience by enabling security and availability. Total monitoring visibility into all InteropNet traffic allows the NOC team to monitor critical data and protect network health as well as troubleshoot.

Spyke keeps any network issues from “cascading” by isolating concerns instantly, fully supporting Interop’s goal of encouraging the free exchange of ideas. Spyke is ideal for delivering visibility under pressure to find, analyze, and resolve issues. It presents metadata for true application intelligence, monitoring applications and networks ranging from small branch sites to 10GE enterprise-wide systems. Plug-and-play simplicity supports cost-effectiveness as well.

With visibility ranging from dashboard down to the packet level, Spyke enables problem detection and isolation of trouble in any application. Spyke reveals both high-level and granular data: if response time is slow, users can drill down from high-level KPIs to diagnose what is running on the network.

Beyond issuing alerts, Spyke reveals root causes and real-time information. It dives deep into packet payloads to identify an application and extract relevant performance indicators. Spyke’s DPI engine adds coverage for hundreds of applications, so that customers can actually examine and extract meta-data for proactive troubleshooting and problem resolution. A company knows exactly which protocols and services are running in the networks, and Spyke’s intuitive dashboard displays this information in an easily understandable format.

“We chose Spyke to provide the visibility and insight we need,” said an InteropNet engineer. “Spyke gives us granular information about how the network and its individual components are working.”

At a major industry event like Interop, no latency is permitted. With some performance management solutions, the team has to wait as long as five minutes for information—an eternity! Spyke’s ability to support gigabit speeds lets it deliver information in 15 seconds or less, making it ideal for the InteropNet.

Single mode and multimode Flex Taps support all split ratios and network interfaces at full line rate for 1/10/40/100GE

The scalable, versatile Flex Tap™, another component of the InteropNet, gives any connecting device total visibility of full duplex traffic as if it were in-line, instantly identifying Layer 1 and Layer 2 errors.

Another advantage of the Flex Tap for the InteropNet is that the tap deploys easily and occupies an extremely compact footprint, fitting up to 24 taps in a single 1U panel and supporting 1/10/40/100GE speeds—a leap forward in value and cost savings. This breakthrough density delivers top value for the “real estate” it occupies and elevates productivity.

Flex Tap lives up to its name, offering many design options

The Flex Tap gives InteropNet designers a broad spectrum of design options, including a “mix and match” combination of speeds and connectors within a single deployment. The Flex Tap accommodates all fiber variations, whether multimode or single mode, as well as offering a simple upgrade path.

Breakthrough technology allows the Flex Tap to resolve system-wide signal loss issues that arise as network volumes increase and optical networks migrate to 10, 40, and 100GE. At high speeds, even the smallest imperfections in optical couplers, cables, or access taps can introduce enough modal distortion to quickly make bit error rates unacceptable. However, the Flex Tap’s patented design eliminates the risk of signal loss. The taps “sit on” the Internet feed, and send tapped traffic to xStream 10, which then maps traffic to Spyke and to other analysis tools. Spyke makes all traffic types, amounts, and destinations visible.

xStream 10 aggregates and regenerates traffic before switching it into a single stream

In combination with xStream 10, the entire Spyke solution is extremely easy to install and makes examining all traffic very easy.

Ixia’s Net Optics solution delivers tapping, aggregation, and monitoring from a single company. Nobody else can offer this comprehensive, inclusive solution.

The NOC team sees all traffic coming in from the show attendees and booths—and everywhere the traffic goes as well, whether to websites, email, and so forth. They also see instantly how much traffic the apps use.

Ixia products help provision the world’s largest private network


Flex Taps receive traffic from the Internet; traffic maps through xStream 10, and the IT team then sends it to the Spyke solution in one or two feeds. Now they are able to see all traffic coming from the show itself, including that from attendees and booths. Additional aggregation and monitoring are performed by the products of other companies participating in the InteropNet.

Thanks to Ixia for the article.

IQ Services IVR Testing Helps Ensure Successful Contact Center Upgrade for Financing Company

A premium insurance financing company based in the United States decided to upgrade their interactive voice response (IVR) platforms in call centers located in Philadelphia, PA, and Irvine, CA. Their project team leader was given responsibility for ensuring all new and existing contact center components worked together at go-live. The team leader knew the only way to accomplish this was to test each system’s ability to handle real telephone calls that “behaved” just like real callers.

The company’s project leader contacted IQ Services located in Minneapolis, MN to discuss their project objectives and to learn more about how IQ Services helps businesses ensure integrated technologies work before letting real customers use new, upgraded or repaired contact center and communications solutions.

IQ Services worked with the team leader to set up IVR testing — including IVR performance and IVR load testing methods — to send real, automated telephone calls into the contact centers to exercise the capabilities of the IVR systems under load. It was necessary for the systems – with applications written by a premier systems integrator based in Boulder, CO – to integrate successfully with Avaya switches and an IBM backend host. The systems had to provide customers with convenient access to their account information on the very first day of production. While some test calls exercised the integrated switch, IVR and backend systems, other test calls were configured to transfer to customer service representatives (CSRs), where information collected by the IVR systems was whispered to the CSR. The IVR load testing services verified the accuracy of this exchange in addition to the performance of the rest of the integration.

IQ Services conducted the IVR testing and provided the project team leader with detailed reports to highlight potential problem areas. Recordings of calls that experienced unexpected results were a critical element of the reports. Armed with this information, the project team was able to make necessary changes and conduct a second round of IVR testing to verify the problems identified in the first round were subsequently fixed. The majority of issues identified through IVR testing involved components other than the IVR systems and applications. The information and insight provided by IQ Services helped pinpoint the source of problems. The team leader was able to supply her management team with proof that the systems were ready to start handling real customer calls. When the systems were put into production, the entire team enjoyed the peace of mind that comes from knowing everything is going to work as expected.

Ultimately, this company was able to avoid many of the common pitfalls associated with complex contact center implementations by planning ahead and implementing StressTest™ services — in particular IVR testing — from IQ Services. The vendors involved in the project enjoyed the benefit of not being caught up in finger pointing because the project team was able to isolate problems with the detailed information provided by IQ Services. This empowered the vendors to see how their pieces of the application would work before the system went live, allowing them to be proactive in fixing any potential problems. The collaborative and cooperative environment associated with StressTest™ performance and load testing made it easy for everyone to focus on making the systems work and delivering the best possible customer experience with the new technologies.

Thanks to IQ Services for the article.

Amazingly Scalable Network Visibility with the NTO 7300

Over the past year I have had the pleasure of speaking to dozens of large enterprise and service provider customers who have been experiencing rapid increases in bandwidth, application complexity, overall scale, and the management challenges that come with them. While each network carries its own unique characteristics and challenges, through those conversations I learned that they are no longer just trying to get the “plumbing” to work, but are trying to ensure application viability, customer experience, and a competitive edge to their businesses in the rapidly changing landscape of networking. These challenges, in short, are much more complex than those faced in the early days of network plumbing.

To help customers manage these network challenges and deliver value to the business, Ixia is releasing the NTO 7300, a key component of the Visibility Architecture. The NTO 7300 chassis provides industry-leading density, scalability, and unified visibility that gives IT the ability to leapfrog their competition and manage rapid change. No matter what tools you are using to ensure network performance and security, the NTO 7300 helps you maximize those investments.

Now, at this point you may be looking at the above photo and thinking “Wow, that’s a beautiful piece of hardware,” – and you would be right. But you may also be thinking “Great, does the world need another big chassis switch?” Ah, I’m glad you asked! The NTO 7300 is a different kind of monitoring switch (aka, a Network Packet Broker) and here are a few ways that Ixia worked to deliver the best tool in the industry for network visibility:

  • Fantastic density. The 7300 provides over 3.8Tb of backplane capacity in just 7U, 3x the density available before today.
  • Seamless scalability. With up to 384 ports of 10GE or 96 ports of 40GE at full wire speed, no matter the configuration, the 7300 solves scalability challenges in even the largest networks.
  • Amazing reliability. The 7300 offers redundancy in all key components and is built around Ixia’s field-proven Network Visibility Operating System (NVOS).

To learn more, please visit the NTO 7300 product page or contact us to see a demo!

Additional Resources:

NTO 7300

Visibility Architecture

Thanks to Ixia for the article.

Comparing the use of Taps and Span Ports

What is a Tap?

Test Access Ports, or Taps, are primarily used to optimize IT’s ability to easily and passively monitor a network link. They are normally placed between any two network devices, including switches, routers, and firewalls, to provide network and security personnel a connection for monitoring devices. Protocol analyzers, RMON probes and intrusion detection and prevention systems can now be easily connected to and removed from the network when needed. By using a Tap, you also eliminate the need to schedule downtime to run cabling directly to the monitoring device from network devices, thus saving time and eliminating possible cabling issues.

Any monitoring device connected to a Tap receives the same traffic as if it were in-line, including all errors. This is achieved as the Tap duplicates all traffic on the link and forwards it to the monitoring port(s). Taps do not introduce delay, or alter the content or structure of the data. They also fail open so that traffic continues to flow between network devices in the event a monitoring device is removed or power to the device is lost.

Taps vs. Span Ports

In contrast, the use of Span ports to monitor the network requires an engineer to configure the switch or switches. Switches also introduce mechanisms on ingress ports to eliminate corrupt packets or packets that are below a minimum size. The problem is that these packets are discarded before they can be copied, so a monitoring device, which normally captures data from the egress side, never sees them.

In addition, switches may drop layer 1 and select layer 2 errors depending on what has been deemed as high priority. On the other hand, a Tap passes all data on a link, capturing everything needed to properly troubleshoot common physical layer problems, including bad frames that can be caused by a faulty NIC.

Real-time Accessibility

Taps are designed to pass through full duplex traffic at line rate, without blocking. In contrast, the software architecture of low-end switches may introduce delay while packets are copied to the Span ports. Data aggregated from 10/100 Mb ports to a gigabit port may also introduce delay.

Furthermore, accessing full-duplex traffic may also be constrained by using a Span port. For example, to capture the traffic from a 100 Mb link, a Span port would need 200 Mb of capacity. This simple oversight can cause problems, so a gigabit link is often required as a dedicated Span port.
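
That capacity check is easy to sketch; the utilization figures below are illustrative:

```python
# A Span port copies both directions of a full-duplex link into a single
# egress stream, so the capacity required is the sum of both directions.

def span_capacity_needed_mbps(link_mbps, utilization_each_way=1.0):
    return 2 * link_mbps * utilization_each_way

def is_oversubscribed(span_port_mbps, link_mbps, utilization_each_way=1.0):
    return span_capacity_needed_mbps(link_mbps, utilization_each_way) > span_port_mbps

# A fully loaded 100 Mb link needs 200 Mb of Span capacity:
print(span_capacity_needed_mbps(100))   # 200.0
print(is_oversubscribed(100, 100))      # True  -> packets will be dropped
print(is_oversubscribed(1000, 100))     # False -> a gigabit Span port copes
```

A Tap sidesteps the problem entirely because each direction of the link is delivered on its own monitoring path.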

It is also a common practice for network engineers to span VLANs across gigabit ports. Beyond requiring additional ports that may not be available in one switch, it is often difficult to “combine” or match packets to a particular originating link. So while spanning a VLAN can be a great way to get an overall feel for network issues, pinpointing the source of actual problems may be difficult.

Some switches may have a problem processing normal network traffic depending on loads. Add the fact that the switch will also need to make decisions on what traffic to copy to a Span port and you may introduce performance issues for all traffic. Taps provide permanent and passive, zero delay alternatives.

Advantage Taps

Lastly, the use of Taps optimizes both network and personnel resources. Monitoring devices can be easily deployed when and where needed, and engineers do not need to re-cable a network link to monitor traffic or re-configure switches. The example in figure 1 illustrates a typical Tap deployment for one monitoring device. In contrast, a Tap that includes two monitoring ports eliminates the need for both the network and security teams to share the one Span port that may have been configured to capture traffic for monitoring devices. A regeneration Tap can simultaneously capture data from one link for four monitoring devices and aggregation Taps can simultaneously capture from multiple links to one monitoring device.

Thanks to Net Optics for the article.

Telus proposes Mobilicity takeover for third time

Financially stricken Canadian cellco Mobilicity has announced that larger national rival Telus has agreed terms to acquire the company for CAD350 million (USD317.56 million) in a transaction to be implemented under the Companies’ Creditors Arrangement Act (CCAA). Mobilicity has been operating under CCAA since September 2013, with the completion of a takeover transaction as its key initiative. The sales process has been supervised by court-appointed monitor Ernst & Young, and the press release said the takeover ‘provides for a complete continuation of Mobilicity’s business for the benefit of its stakeholders’, while ‘the vast majority of Mobilicity’s 165,000 active subscribers will be able to seamlessly migrate onto TELUS’ advanced HSPA network after the transition,’ and there are ‘no foreseen changes to employee staffing levels as a result of the proposed transaction.’ The release added that ‘approximately 95% of the holders of Mobilicity’s 15% senior unsecured debentures due 2018 support the transaction’, which remains subject to approval by the Ontario Superior Court of Justice, the Competition Bureau and Industry Canada as well as Mobilicity’s debtholders. The green-light is by no means guaranteed, as Industry Canada has twice blocked the deal under its spectrum transfer policy aiming to restrict acquisition of new entrants’ 3G/4G frequencies by national incumbents.

On 23 April 2014 Mobilicity will ask the court for an extension of the current Stay of Proceedings from 30 April until 30 June, while on 30 April its creditors will meet to vote on the latest proposed deal with Telus.

Thanks to TeleGeography for the article.

Enhancing Virtual and Cloud Visibility with Phantom vTap

Ixia’s Technical Marketing Engineer David Pham discussed how Ixia’s new Phantom vTap helps cloud and virtual deployments maintain high visibility.

Security and performance monitoring tools in the market today are not capable of providing a comprehensive, raw view of traffic traversing virtual switches, because they cannot monitor the internal networking layer within the hypervisor. Phantom vTap deploys a module that resides in the hypervisor kernel, passively monitoring all inter-VM traffic and capturing only traffic of interest – enabling the customer to forward the packets to any end-point tool of choice, physical or virtual.

You need to see about 80% of the traffic on your network to be sure you know what is happening, where, and why. With the growing amount of east-west traffic, only about 20% of network traffic is truly being monitored. This is unacceptable if your goal is network visibility and security. Ixia’s Phantom vTaps help rebalance your visibility percentages.

You can find out more about Phantom vTaps here.

Thanks to Ixia for the article.

Monitoring Strategies for a 10 Gb World

According to IDC, nearly 5 million 10 Gb ports were shipped in the third quarter of 2013, surpassing the number of Gigabit ports for the first time since the technology was introduced in 2001. That’s a lot of upgrades, but with Big Data analysis, VoIP, video, and other bandwidth-intensive applications gobbling up more and more network resources, it’s no surprise.

The real challenge for many network teams today is managing performance monitoring expectations in the new 10 Gb environment, as some of the old standbys will have to be put out to pasture, or significantly modified to keep up with the higher network speeds.

“Wireshark, while you can’t argue its price point, its inability to scale and perform in 10 Gb environments becomes apparent when packet captures get too sizeable,” says Sam Wang, Regional Sales Director and Packet Analyst Extraordinaire. “Whenever we’re talking over a gigabyte in terms of the capture size, Wireshark is either very slow or it crashes completely.”

But the network protocol analyzer still has its uses, as Wang is also quick to point out. “Wireshark is very usable in the Gigabit world, but when you go to 10 Gb, the utilization, throughput, and the amount of raw data is ten times faster. It becomes an issue for a software-only product like Wireshark to keep up. You have to have some sort of built-in 10 Gb performance product [like Network Instruments GigaStor] to capture the information. You can still export the data to Wireshark, which is a common practice. But you can’t just have Wireshark on a laptop anymore, or on a desktop or on a server trying to monitor 10 Gb.”

Another concern for network administrators is what to do with all the old Gigabit tools. The fastest and easiest, but by no means cheapest method is to recycle them and purchase new 10 Gb tools. But is it necessary?

“The other thing that is being used a lot when going from Gigabit to 10 Gb, especially if they don’t have the budget to immediately go to 10 Gb tools, is they can put in a product like the Matrix network monitoring switch,” Wang says. “Assuming your 10 Gb throughput is low and you do the math to avoid oversubscribing the one Gigabit tool, you can use Matrix as a middleman between your 10 Gb network and your legacy Gigabit performance devices.”
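
That “do the math” step can be sketched as a quick feasibility check. The capacity, headroom, and filtering figures below are illustrative assumptions, not properties of the Matrix switch itself:

```python
# Before feeding a 10 Gb link to a legacy 1 Gb tool via a monitoring switch,
# compare the link's peak throughput (both directions, after any filtering
# the switch applies) against the tool's capacity with some headroom.

def fits_gigabit_tool(peak_gbps_in, peak_gbps_out, filter_fraction=1.0,
                      tool_capacity_gbps=1.0, headroom=0.8):
    """filter_fraction is the share of traffic forwarded after filtering."""
    forwarded = (peak_gbps_in + peak_gbps_out) * filter_fraction
    return forwarded <= tool_capacity_gbps * headroom

print(fits_gigabit_tool(0.4, 0.3))                       # True: 0.7 Gb fits
print(fits_gigabit_tool(3.0, 2.0, filter_fraction=0.1))  # True after filtering
print(fits_gigabit_tool(3.0, 2.0))                       # False: 5 Gb oversubscribes
```

The check only holds at the *peak* rate: average utilization that looks safe can still burst past the tool and silently drop packets.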

Common scenarios see 10 Gb links running around 5 Gb per second, which is still arguably 2-4 times the speed of Gigabit throughput. But compare that level of activity to the fact that a fully saturated, full-duplex 10 Gb link can run at 18 or 19 Gb. That’s hundreds of millions of packets per second.

How do you monitor hundreds of millions of packets per second?

“That’s very high utilization,” says Wang. “Most enterprises don’t have that sort of throughput. Most have around 5 Gigabits per second in their 10 Gb links, which is still hundreds of millions of packets per second traversing the link we’re trying to monitor. It’s important to understand the amount of raw data that is being passed and the ability to capture that raw data without dropping any packets as well as maintaining adequate storage for the amount of time you want to be collecting this data. It’s much more of a challenge than Gigabit and the need to have a performance-oriented appliance front-ending the 10 Gb network becomes paramount to getting the necessary information.”

Another concern is multiple data centers. Wang says, “People will want to consider the implementation of multiple data centers for things like redundancy, load sharing, and load balancing. You’ll want to be cognizant about where the data can travel in today’s enterprise environment, and consider a unified performance platform that has a good workflow. If the data were to traverse from one network data center to another there needs to be an effective way to get to the information based on different physical locations.”

“The criteria for evaluating 10 Gb network performance tools is the ability for the solution to be scalable and also be able to keep up with the current and future throughput of 10 Gb without having issues like dropping packets, or having slow performance, or not having enough physical storage to go back as far as people want to go,” says Wang.

Thanks to Network Instruments for the article.

7 Steps to Multi-Tiered App Success

Network Instruments

Multi-tiered applications are no longer the exception but the rule. In the past, assessing application performance meant monitoring response time and health on a single server hosting one application. Now, with the applications increasingly becoming virtualized, utilizing multiple protocols, and operating over multiple servers, the approach to tracking overall application performance needs a reboot.

So how do you effectively track the health and conversations involved in a service composed of multiple frontend web servers interfacing with middleware servers and backend database systems? Here are 7 steps for a monitoring strategy that ensures visibility and analysis into tiered applications.

1. Map out how data flows between the different application tiers.

One of the key things is being able to identify what conversations occur between different tiers within the application to fulfill a user’s request. For example, when a user is signing up for an online service and presses the submit button on a web form, what happens behind the scenes? Likely an HTTP web request has been issued from a client to a web server. The web server then sends that request to the middle-tier server, which translates it into SQL so that the database server is able to interpret what the request is. When the request is processed successfully, the result is communicated back through the same set of components, and a confirmation message appears on the client’s screen.

2. Identify devices involved in sending and receiving client/server requests and responses.

For components involved in the delivery of the service, track the conversations between the devices including the ports used for communications. With large or legacy systems, manually mapping these relationships can be very time-consuming. Monitoring solutions like Observer Reporting Server streamline this process through application dependency mapping which automatically discovers and diagrams devices involved in multi-tiered applications based on how they communicate with each other.

In addition to the routers and servers, it’s critical to identify other components in the communications path, such as firewalls, load balancers, or proxy servers that can impact application performance. With this map, you can then locate the points of visibility that will allow you to best assess application performance. For example, to assess the potential impact of a firewall on an application, capture and correlate traffic on both sides of the device.
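
The core of dependency mapping is just grouping observed conversations by source. A minimal sketch over hypothetical flow records (a product like Observer Reporting Server automates the discovery itself):

```python
from collections import defaultdict

# Hypothetical flow records observed on the wire:
# (client_ip, server_ip, server_port)
flows = [
    ("10.0.1.5",  "10.0.2.10", 80),    # client     -> web server
    ("10.0.2.10", "10.0.3.20", 8080),  # web        -> middleware
    ("10.0.3.20", "10.0.4.30", 1433),  # middleware -> database
    ("10.0.1.6",  "10.0.2.10", 80),    # second client, same web tier
]

# Group conversations by source to reveal the tier-to-tier structure.
dependencies = defaultdict(set)
for src, dst, port in flows:
    dependencies[src].add((dst, port))

for src in sorted(dependencies):
    for dst, port in sorted(dependencies[src]):
        print(f"{src} -> {dst}:{port}")
```

Even this toy version surfaces the chain client → web → middleware → database, which is exactly the map the later steps build on.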

3. Understand application-specific metrics.

Tracking performance across a multi-tiered application involves more than monitoring response times. An application can respond quickly, but be returning error codes. For example, with your web servers, are they returning 200 OK messages or 500 Internal Server Errors? Tracking and understanding specific errors will allow you to find points of application failure quicker.
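
Tallying status codes alongside response times is simple to sketch; the captured statuses below are hypothetical:

```python
from collections import Counter

# Status codes observed in captured web-tier responses. A fast 500 still
# indicates a failing tier even though response time looks healthy.
observed_statuses = [200, 200, 500, 200, 404, 200, 500, 200]

counts = Counter(observed_statuses)
server_errors = sum(n for code, n in counts.items() if code >= 500)
error_rate = server_errors / len(observed_statuses)

print(counts[200])           # 5 successful responses
print(round(error_rate, 2))  # 0.25 -> one in four requests hit a server error
```

Watching this rate per tier pinpoints *which* component is failing, not just that the end-to-end service is slow.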

4. Baseline to determine normal application performance.

Specific components and metrics to track in a multi-tiered application include:

  • Application performance and response times
  • Network delay
  • Conversations between tiers (examples: track response times and network delay from client to web servers, web tier to middleware tier, and middleware tier to database tier)
  • Traffic and usage patterns (understand how demand changes based on time of day, week, and month)

As a rule of thumb, if users are content with current performance, these metrics can serve as benchmarks for application health.
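
A baseline can be as simple as summary statistics over a window of samples gathered while users were content with performance; the response times below are hypothetical:

```python
import statistics

# Response-time samples (ms) collected during a period of normal operation.
samples = [110, 95, 120, 105, 98, 130, 102, 115, 99, 108]

baseline = {
    "mean_ms": statistics.mean(samples),
    # Crude 95th percentile: the value below which 95% of samples fall.
    "p95_ms": sorted(samples)[int(0.95 * len(samples)) - 1],
    "stdev_ms": statistics.pstdev(samples),
}
print(baseline["mean_ms"])  # 108.2
print(baseline["p95_ms"])   # 120
```

Recording mean, percentile, and spread per conversation (client to web, web to middleware, middleware to database) gives each tier its own notion of "normal."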

5. Set up alert thresholds to indicate degrading performance.

Thresholds can either be dynamic and based on past performance, or fixed if you have service level agreements (SLAs) to meet. Examples of thresholds and alerts to set include tracking significant network delay, slow application performance, and excessive application errors.
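
A dynamic threshold derived from past performance might be sketched as the baseline mean plus k standard deviations; both k=3 and the samples below are illustrative choices:

```python
import statistics

def dynamic_threshold(history, k=3.0):
    """Alert level: baseline mean plus k standard deviations."""
    return statistics.mean(history) + k * statistics.pstdev(history)

def breaches(history, new_sample, k=3.0):
    return new_sample > dynamic_threshold(history, k)

# Response-time history (ms) from the baselining step:
history = [100, 105, 98, 110, 102, 107, 99, 104]

print(round(dynamic_threshold(history), 1))
print(breaches(history, 103))   # False: within normal variation
print(breaches(history, 250))   # True: alert-worthy degradation
```

A fixed SLA threshold is just the degenerate case: replace `dynamic_threshold` with the contracted number.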

6. Configure reports and real-time performance indicators.

Consider how you want to organize the data for effective monitoring and share it with other IT teams. Here are key questions to consider when configuring reports:

  • Do you need to organize the data by client location?
  • Do you have multiple remote locations requiring their own reports?
  • Do you need to track performance by business unit or department?
  • Are there reports and indicators being used by other IT teams that require access to specific errors and metrics?
  • Is it better to view multi-tiered application performance as a map or grid?

7. Track long-term application changes.

As application usage grows, it’s critical to understand when additional devices will be added to handle increased application traffic. Stay on top of whether portions of the multi-tiered application are being virtualized. Baselines, reports, and alerts all need to be actively updated to account for these changes.

Through effective mapping, monitoring, and reporting of the many moving parts within a multi-tiered application, you’ll be able to ensure successful performance now and in the future.

Thanks to Network Instruments for the article.

Shaw notches up 2% revenue increase

Canadian cableco and satellite TV operator Shaw Communications has reported a 22% year-on-year increase in net profit to CAD222 million (USD203 million) in its fiscal second quarter ended 28 February 2014, on revenues which improved by 2% to CAD1.27 billion. Profits were boosted by the sale of two French-language TV channels for CAD49 million. Shaw lost a net 21,000 cable TV subscribers in the three-month period, leaving it with 1.98 million, although it added a net 13,000 cable broadband users in the quarter to give it a total internet base of 1.906 million, and its cable telephony base rose by 8,000 to 1.369 million. Revenue at Shaw’s cable division rose by 3% to CAD839 million in the three months.

Thanks to TeleGeography for the article.