How does IPv6 impact VoIP?

Migrating to IPv6 requires more than flipping a switch; for the ill-prepared, it could adversely impact your VoIP systems. The background of IPv6 is well known, and given how long OS vendors and Cisco have been ready for the change, you might think upgrading VoIP systems would be easy.

The truth is that not all VoIP vendors are prepared, and it’s up to you to take charge. Let’s look at three critical areas that need consideration to ensure your communications systems are IPv6 ready:

  • VoIP system hardware
  • Security
  • ISP network translation

VoIP System Hardware
Begin by taking an inventory of the infrastructure used to support VoIP services. For all hardware, verify IPv6 support and the upgrade process. For hardware or software phones, upgrading will likely require a software patch. Because demand for IPv6 VoIP is fairly recent, your vendor may have only begun to address this process. The second issue is one of logistics: can your phones be remotely managed? If not, this will add substantial time to any migration.

VoIP servers are easier to upgrade. Server operating systems have long supported both dual stack and pure IPv6 options. Check with the VoIP system vendor for upgrade requirements and processes.
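To see what “dual stack” means in practice, here is a minimal Python sketch of a listener that accepts both IPv4 and IPv6 connections on the standard SIP port; the port choice and socket options are illustrative, not a vendor-specific configuration.

# Minimal sketch: a dual-stack TCP listener on the SIP port (5060).
# Binding an AF_INET6 socket with IPV6_V6ONLY disabled lets one socket
# accept IPv6 connections and IPv4 connections (as mapped addresses)
# on systems that support it.
import socket

sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# 0 = also accept IPv4-mapped traffic; 1 would make the socket IPv6-only.
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind(("::", 5060))
sock.listen(5)
print("Listening for SIP-over-TCP on IPv4 and IPv6, port 5060")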

Security
Life behind a NAT, while not 100 percent secure, does obscure the VoIP server’s IP addresses from direct Internet exposure. VoIP systems are targeted by attackers who, once they gain access to the system, place calls at the owner’s expense. In moving to IPv6, the NAT device is no longer needed, which exposes your server’s IP to the outside and increases the likelihood of it coming under attack. It’s important to be aware of this threat and seek protective measures. Read more on VoIP security threats.
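One simple protective step is to confirm which SIP ports actually answer on the server’s public IPv6 address. The Python sketch below performs a basic reachability check; the address shown is a documentation-range placeholder, and the check would need to run from a host outside your network.

# Sketch: check whether the PBX's SIP ports answer on its public IPv6
# address. 2001:db8::/32 is a documentation prefix; substitute your own.
import socket

PBX_IPV6 = "2001:db8::10"        # placeholder address
for port in (5060, 5061):        # SIP and SIP-over-TLS
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.settimeout(3)
    reachable = s.connect_ex((PBX_IPV6, port)) == 0
    s.close()
    print(f"port {port}: {'reachable -- consider filtering' if reachable else 'filtered or closed'}")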

ISP and Network Translation
While most of the steps your ISP is taking to support IPv6 won’t have any impact upon service delivery, there are two areas to research.

  1. Does your provider offer IPv6 connections?
    You’ll want to understand the process and timeframe for requesting these connections, as it will directly impact your migration process.
  2. In transition, be aware of potential IPv4 issues.
    Providers faced with an exhausted IPv4 address pool may implement a provider-level NAT sharing one IP with multiple clients. This setup creates the potential for port collisions, which can be avoided by working closely with your ISP and understanding their policies for issuing IPs.

Because IPv6 has been described as just over the horizon for so long, you might think all the concerns have been addressed. But the impact of IPv6 on VoIP systems is still uncertain, and only proper preparation will ensure VoIP performs smoothly through the transition. Here are resources to help you plan appropriately:

VoIP in an IPv6 world

Cisco Unified Communications and IPv6

Telnet Networks would like to thank Network Instruments for this article.

Timestamping with Director xStream Pro

Timestamping packets has long been the key to accurate timing analysis when tuning network performance. Lately it has become especially critical in the financial sector due to the severe impact of even microseconds of latency on automated high-speed trading transactions. Since 2007, Net Optics has offered timestamping in its iTap access product line. Recently we brought the feature into our network controller line with the Director xStream Pro. The timestamp applied by Director xStream Pro uses a new, flexible, easy-to-use format that is explained in this post.

When timestamping is enabled in any of Director xStream Pro’s eight ProPorts (the top row of ports in the chassis), a 12-byte timestamp and a new CRC are appended to each packet that passes through the port. The timestamp records the precise time that the first bit of the packet arrived at the input port—this point is critical, as products that timestamp at the outgoing tool port lose accuracy due to variable delays through the device.

The timestamp format is diagrammed below.

As you can see, the first four bytes of the timestamp are a 32-bit binary value in seconds. The second four bytes are a 32-bit binary value representing tenths of microseconds; this field rolls over (returns to zero) after reaching 0x98967F (9,999,999 tenths of a microsecond), i.e., at one second. The final four bytes are reserved for use when higher-precision timestamping becomes available, making the timestamp format capable of supporting a resolution of 0.1 picoseconds.

Some examples of the timestamp are:
00 00 00 01   00 00 00 00   00 00 00 00 = 1 second
00 00 00 00   00 00 00 0a   00 00 00 00 = 1 microsecond
00 00 00 1b   00 96 ff ff   00 00 00 00 = 27.9895935 seconds
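
To make the format concrete, here is a small Python sketch that decodes the 12-byte trailer according to the layout described above, following the byte order shown in the examples; the helper name is ours, not part of the product.

# Sketch: decode the 12-byte timestamp trailer (4-byte seconds,
# 4-byte tenths of microseconds, 4-byte reserved field).
import struct

def decode_timestamp(trailer: bytes) -> float:
    """Return the timestamp in seconds from a 12-byte trailer."""
    seconds, tenths_of_us, _reserved = struct.unpack(">III", trailer)
    return seconds + tenths_of_us * 1e-7   # one count = 0.1 microsecond

# The three examples above:
for hex_trailer in ("000000010000000000000000",
                    "000000000000000a00000000",
                    "0000001b0096ffff00000000"):
    print(f"{decode_timestamp(bytes.fromhex(hex_trailer)):.7f} seconds")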

The timestamp can easily be decoded by a protocol analyzer or other monitoring tool. A Wireshark capture of a timestamped packet is shown below.
Note that the packet’s original CRC is preserved. If a packet arrives with a bad CRC, Director xStream Pro still appends the timestamp and a good CRC and forwards the packet to the monitoring tool, which can reference the original CRC to validate the packet. (When timestamping is NOT enabled, Director xStream Pro drops packets that arrive with bad CRCs.)

The timestamp is generated by a free-running 1 MHz counter, providing microsecond precision for the relative timing between packets arriving on any timestamping ProPorts in the chassis. Left by itself, the counter can drift slightly over time. To prevent drift, a pulse-per-second signal from a precision time source such as GPS can be applied to the BNC connector labeled GPS on the rear panel. Moreover, if multiple chassis are synced to the same time source, the timestamps will provide accurate relative timing for packets arriving at different chassis.

With the high-precision, input-port-based timestamping of Director xStream Pro, you no longer need to worry about adding a network controller switch between the traffic and your timing analysis tools. Director xStream Pro’s timestamps always provide your analyzer with precise timing of the packets on the wire, regardless of any delays introduced along the monitoring path.

4 Critical Cloud Monitoring Metrics

No doubt most of you are dealing with externally hosted or cloud applications. To successfully address problems and maintain performance, you’ll need to stay on top of these four categories of cloud metrics.

1. User Experience

In the world of externally hosted services, end-user experience is the only thing you can control. In measuring quality of experience, you need to think like an end user. Focus on service availability and performance metrics, and monitor response times from the user perspective. Metrics to track for HTTP and specific URLs include (see the sketch after this list):

  • Application and server response times
  • Network delay
  • Server requests
  • Application requests
  • Application and server availability
  • Successful transmissions
  • Client and server errors
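
As a starting point for collecting several of the metrics above, here is a minimal Python sketch that records per-URL response time, status codes, and availability using the third-party requests library; the URLs and timeout are placeholders.

# Sketch: probe specific URLs and record response time, status, and
# availability from the user's perspective.
import requests

URLS = ["https://app.example.com/login", "https://app.example.com/api/status"]

for url in URLS:
    try:
        resp = requests.get(url, timeout=10)
        # 4xx codes indicate client errors, 5xx server errors.
        print(f"{url}: HTTP {resp.status_code}, "
              f"{resp.elapsed.total_seconds() * 1000:.1f} ms")
    except requests.RequestException as exc:
        # Timeouts and connection failures count against availability.
        print(f"{url}: UNAVAILABLE ({exc})")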

Next, consult Service Level Agreements (SLAs) for guidance on additional metrics. Finally, place probes closer to user locations to more accurately reflect performance the users are experiencing. Learn more about probe placement.

2. Performance Benchmarks

Some best practices don’t change whether the application is hosted internally or externally. Baseline and establish internal benchmarks for normal service behavior with regard to cloud service utilization, number of concurrent users, overall cloud service response time, and response time for specific transactions. Also, incorporate any relevant SLA thresholds into baseline reports.
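
As an illustration of baselining, the following Python sketch derives a simple “normal” ceiling from historical response-time samples and compares new samples against it and a hypothetical SLA threshold; every number here is made up for the example.

# Sketch: build a baseline from past response times and classify new samples.
import statistics

baseline_samples = [0.42, 0.39, 0.47, 0.51, 0.44, 0.40, 0.46]  # seconds
SLA_THRESHOLD = 2.0  # seconds, from a hypothetical SLA

mean = statistics.mean(baseline_samples)
ceiling = mean + 2 * statistics.stdev(baseline_samples)  # "normal" upper bound

def classify(sample_s: float) -> str:
    if sample_s > SLA_THRESHOLD:
        return "SLA breach"
    if sample_s > ceiling:
        return "above baseline"
    return "normal"

print(f"baseline mean={mean:.2f}s, ceiling={ceiling:.2f}s")
for sample in (0.45, 1.30, 2.40):
    print(f"{sample:.2f}s -> {classify(sample)}")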

3. Internal Infrastructure and Network

It may be obvious, but when dealing with the cloud vendor, the internal network will always be guilty until proven innocent. Track and trend performance and availability metrics for servers, routers, switches, and other service components. For servers this includes metrics like CPU utilization, memory usage, and disk space. For routers and switches, keep tabs on port, CPU, and memory utilization. Also, record client and server response times for your internal network. Finally, be sure to monitor specific protocol transactions for issues. For one Network Instruments customer, discussed below, this was the evidence needed to prove to a cloud vendor that the error was on their side.
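
For the server-side metrics named above, a lightweight collector might look like the following Python sketch, which uses the third-party psutil package; the warning thresholds are illustrative, not recommendations.

# Sketch: sample basic server health metrics (CPU, memory, disk).
import psutil

cpu = psutil.cpu_percent(interval=1)      # percent over a 1-second sample
mem = psutil.virtual_memory().percent     # percent of RAM in use
disk = psutil.disk_usage("/").percent     # percent of the root volume used

for name, value, limit in (("cpu", cpu, 85), ("memory", mem, 90), ("disk", disk, 80)):
    status = "WARN" if value > limit else "ok"
    print(f"{name}: {value:.1f}% ({status})")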

4. Availability and Route Monitoring

Once the internal network is ruled out, how do you determine the problem location? Set up analysis tools to regularly perform an operation with the cloud service via synthetic transactions. This is more complex than a ping, and should mimic user interactions with the service. From these results, your tools or network team can determine availability and uptime. If your tools track the route, you can also pinpoint where delay might be occurring for a specific problem.
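
A synthetic transaction can be as simple as scripting the same steps a user would take and recording pass or fail. The Python sketch below mimics a short two-step workflow with the requests library; the URLs, flow, and polling interval are placeholders rather than any specific product’s behavior.

# Sketch: a recurring synthetic transaction that mimics a user workflow
# (load the sign-in page, then query an account endpoint) and tallies
# availability from the results.
import time
import requests

def synthetic_transaction() -> bool:
    try:
        with requests.Session() as s:
            s.get("https://service.example.com/signin", timeout=10).raise_for_status()
            s.get("https://service.example.com/api/account/funds", timeout=10).raise_for_status()
        return True
    except requests.RequestException:
        return False

results = []
for _ in range(3):          # a real probe would run continuously
    results.append(synthetic_transaction())
    time.sleep(300)         # repeat every 5 minutes

print(f"availability: {100 * sum(results) / len(results):.1f}%")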

In Practice

Unlike most applications, cloud services may be managed by a department outside of IT, which can add new management complexities to performance monitoring. This was the case for a major US retailer and Network Instruments customer when the cloud vendor’s techs blamed the retailer’s network for causing service problems. The human resources department, which was in charge of managing the service, immediately turned to the network team to resolve the issue.

As their network engineer explained, “the externally-hosted program we used to verify that accounts had sufficient funds was locking up and freezing. Using pings and synthetic transactions, we were unable to get back the requested data from the site. With the GigaStor retrospective analysis appliance, we were able to verify that our data was going out, but we weren’t seeing the expected data coming back. We shared this information with the provider, and they went back and detected that they had a misconfiguration issue on their side and were responding to us on the wrong IP. Since proving this with GigaStor, things have been running problem free.”