Setting Up VoIP for Success

A common stumbling block in VoIP monitoring occurs when the network team misconfigures, or fails to define, the VoIP setup process for its monitoring system. The consequences can be serious: when you investigate a call, it may not appear at all, may be misidentified, or may be mistakenly combined with another call. Here are five steps to ensure your VoIP monitoring solution is set up for success.

Identify call components

Depending upon the VoIP vendor and system, different components may be involved in a call. For example, in Cisco® systems, communications occur between phones and the call manager. Avaya® uses multiple components, including the phone, control processor (C-LAN), and media processor (MedPro). Before configuration, map out all components involved in the call.

Recognize the call

The first step in configuration is to specify how the monitoring system recognizes that a call has been initiated. The option you choose will depend upon the probe's ability to view the call setup packets. Observer offers flexibility here: new calls can be identified either from setup packets or from RTP data packets, and calls can be closed if no packets are seen for a specified time period.
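As a rough illustration of this recognition logic (a sketch, not Observer's actual implementation), a tracker might open a call on either a signaling packet or the first RTP packet for a new endpoint pair, then close it after an inactivity timeout. The class names, packet kinds, and the 30-second timeout below are illustrative assumptions:

```python
import time
from dataclasses import dataclass

@dataclass
class Call:
    key: tuple        # normalized (endpoint, endpoint) pair
    started_by: str   # "setup" if signaling was seen, "rtp" if media only
    last_seen: float  # timestamp of the most recent packet

class CallTracker:
    """Open a call on a SIP setup packet or the first RTP packet for a new
    endpoint pair; close it after a period with no packets."""

    def __init__(self, idle_timeout=30.0):
        self.idle_timeout = idle_timeout
        self.calls = {}

    def on_packet(self, src, dst, kind, now=None):
        now = time.time() if now is None else now
        key = tuple(sorted((src, dst)))   # direction-agnostic call key
        call = self.calls.get(key)
        if call is None:
            origin = "setup" if kind == "sip_invite" else "rtp"
            self.calls[key] = Call(key, origin, now)
        else:
            call.last_seen = now

    def expire(self, now=None):
        """Close calls with no packets for longer than the idle timeout."""
        now = time.time() if now is None else now
        closed = [k for k, c in self.calls.items()
                  if now - c.last_seen > self.idle_timeout]
        for k in closed:
            del self.calls[k]
        return closed
```

Identifying a call from its setup packets gives the richer record (caller ID, signaling details); falling back to RTP detection ensures calls are still counted when the probe never sees the signaling.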

Depending on how calls are communicated and the data involved, the user also needs to specify the traffic type and whether calls occur concurrently. For instance, Avaya systems use MedPro to send RTP data to other MedPro systems and to phones. Multiple calls can be sent and received at the same time.


Program call components

Components involved in the call should be identified and configured. Within Observer®, under the Device IP Addresses tab, identify each device's IP address and type. This is critical because it dictates how calls are measured and displayed.
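Conceptually, this component mapping amounts to a simple IP-to-type lookup. The addresses and labels below are hypothetical, not Observer's actual configuration format:

```python
# Hypothetical device map, analogous to entries under Observer's
# Device IP Addresses tab (addresses and labels are made up):
COMPONENTS = {
    "10.0.1.10": "CallManager",  # Cisco-style call control
    "10.0.2.21": "C-LAN",        # Avaya control processor
    "10.0.2.22": "MedPro",       # Avaya media processor
}

def classify(ip):
    """Return the configured component type; treat unmapped IPs as phones."""
    return COMPONENTS.get(ip, "phone")
```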


Monitor encrypted traffic

If SIP data is encrypted, it can be nearly impossible to identify a call. Depending on your monitoring solution, you may be able to track encrypted traffic. Observer decrypts SIP traffic and provides quality metrics on the fly once it has the appropriate security certificate.

Establish caller ID

The location of caller ID information varies by system and even by component. To be sure the caller is properly identified, your network analyzer needs to know where to find the ID details. In Observer, under the Caller ID Determination tab, users can establish a hierarchical set of rules to locate the sources of ID text.
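A hierarchical rule set of this kind amounts to trying ID sources in priority order and taking the first match. The field names and priority order below are illustrative assumptions, not Observer's actual rule syntax:

```python
def resolve_caller_id(packet_fields, rules):
    """Walk the rule hierarchy in priority order and return the first
    ID source that yields a value; fall back to a fixed label."""
    for field_name in rules:
        value = packet_fields.get(field_name)
        if value:
            return value
    return "unknown"

# Hypothetical priority: display name first, then SIP URI, then source IP
RULES = ["sip_from_display", "sip_from_uri", "src_ip"]
```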


Thanks to Network Instruments for this information

Identifying Network Bandwidth Bandits


Unmask Bandwidth Bandits

The term bandwidth hog makes most network managers consider the usual suspects – from ill-timed backups to illicit P2P use. But bandwidth-stealing applications can be difficult to unmask and simply increasing network capacity won’t solve the problem. So let’s discuss three commonly overlooked strategies and scenarios to identify bandits and avoid performance interruptions.

Prioritized Video Push

Network teams assume video conferencing will consume significant bandwidth. While true, the greater impact occurs when ill-prepared network teams assign video and other UC applications the highest possible precedence settings. At first, video consumes a small amount of traffic, but as use expands and high-definition video is implemented, the situation drastically changes. With high-precedence communication programs taking up more pipe, less space is available for critical applications like email and web-based programs. The result? Noticeable performance slowdowns.
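For context on what "precedence settings" means in practice: traffic classes are usually expressed as DSCP markings. A minimal sketch in Python follows, using the standard `IP_TOS` socket option; the code point values are common conventions, not a recommendation for any specific application:

```python
import socket

# Common DSCP code points (conventions, not a recommendation for any app):
DSCP_EF = 46    # Expedited Forwarding: typically voice/video
DSCP_AF21 = 18  # Assured Forwarding 21: a lower-priority data class

def set_dscp(sock, dscp):
    """Mark outgoing packets: the DSCP value occupies the top 6 bits
    of the IP TOS byte, hence the 2-bit shift."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

# Example: give a bulk-data socket a lower class than real-time video
bulk = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
set_dscp(bulk, DSCP_AF21)
bulk.close()
```

The article's warning is precisely about over-using the EF-style classes: when too much traffic is marked high-precedence, the marking stops differentiating anything.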

Avoiding Interruptions

  • Create baselines for performance benchmarks and user expectations of legacy applications.
  • Deploy video-conferencing applications in a pre-production environment to better assess potential impact on existing network traffic.
  • Conduct a staged rollout to 25 percent and then 50 percent of the user base before deploying video network-wide.
  • Grant higher precedence to other critical apps to avoid slowing down business processes.

Cloud Crowd

No doubt shifting to cloud offers great scalability and availability benefits. But cloud also delivers a host of new challenges. Cloud services shift IT management priorities from the network core to the WAN/Internet connection. IT teams must understand this shift, adequately prepare the network, and adjust management styles – or risk crashing Internet links and existing web services.

Avoiding Interruptions

  • Assess and baseline existing Internet service performance and link utilization.
  • Test the web service with a small user group to benchmark performance and Internet link demands.
  • Use modeling to extrapolate the impact of enterprise-wide service use and evaluate how link utilization changes once the web service is added.
  • Prioritize cloud and SaaS apps, and throttle other traffic to ensure bandwidth is appropriately allocated.
  • When relying heavily on cloud services, consider having multiple providers in case one has a performance issue.

App Vendors: Trust but Verify

When you deploy major applications such as Enterprise Resource Planning (ERP) systems, vendor-hired consultants typically run tests to verify proper service functionality. Consultants are generally selective about test points and coordinate with someone on the application team, without network team involvement.

As a consequence, applications will be approved without exhaustive enterprise-wide evaluations. The problem is that adverse performance in many untested locations can jeopardize overall network performance and any promised savings and benefits from the newly implemented program.

Avoiding Interruptions

  • Maximize communication between application and network teams.
  • Ensure selected test sites represent the overall environment.
  • Use third-party monitoring tools to validate application results.
  • Implement a lab test with 50 users in pre-deployment environments to learn the application’s impact.

Most bandwidth bandits hijack network capacity when network teams fail to take these steps during the deployment phase. By fully evaluating the impact of rolling out critical services, you can confirm your network is ready for new services. You also maintain positive user experience and response times for critical legacy applications.

We thank Network Instruments for this Article

7 Network Baselining Best Practices

It’s one thing to conceptually understand baselining, but how do you go about setting performance benchmarks? Here are seven best practices to help you successfully baseline and manage performance for critical services on your network.

  1. Collect enough data for relevant baselines A common mistake is to set baselines after too little data has been collected. You’ll want at least a couple weeks of metrics to establish meaningful thresholds. 
  2. Compare similar time periods With the fluctuation of network and application activity, be sure you’re comparing similar time periods. Baselines should take into account any sudden changes in activity that can occur over weekends or at the end of the month, quarter, or year. The key is to set up an apples-to-apples comparison of activity.
  3. Consider storage needs When baselining data over a significant time period, be sure your long-term packet capture appliance has enough storage to meet your analysis needs.
  4. Configure alarms around baselines Once you have established clear baselines for service delivery components, set alarms associated with these baselines. 
  5. Apply dynamic baselines for variable conditions Any component or application condition that varies should utilize automated thresholds based on past activity. 
  6. Manually set thresholds for fixed conditions This includes service level agreements with cloud and service providers or internal service mandates which are generally fixed. 
  7. Lock baselines to prevent drift When trending over a long period of time, slowly emerging issues can adversely skew the performance baseline. This drifting of results can mask problems. Once sound baselines are established, lock them to prevent baseline drift from negatively impacting metrics.
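Practices 4 through 7 can be sketched in a few lines: compute a dynamic threshold from past samples for variable conditions, prefer a manually set threshold where one exists (an SLA, for instance), and treat "locking" as freezing the history rather than letting new samples shift it. This is a simplified statistical model, not any vendor's algorithm:

```python
from statistics import mean, stdev

def dynamic_threshold(history, k=3.0):
    """Automated threshold for variable conditions (practice 5): the mean
    of past samples plus k standard deviations."""
    return mean(history) + k * stdev(history)

def breaches(history, current, fixed_threshold=None, k=3.0):
    """Alarm check (practice 4): prefer a manually set threshold when one
    exists, e.g. an SLA value (practice 6); otherwise fall back to the
    dynamic baseline. Locking a baseline (practice 7) amounts to freezing
    `history` instead of appending new samples to it."""
    limit = fixed_threshold if fixed_threshold is not None else dynamic_threshold(history, k)
    return current > limit
```

For example, with two weeks of response-time samples averaging 100 ms, a reading of 150 ms would trip the dynamic alarm while routine jitter around 105 ms would not.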

Thanks to Network Instruments for this article

What is IT Thinking About Today?

Cloud and Bandwidth Demands Challenge IT Teams

Network Instruments 2012 Annual State of the Network Global Study Offers IT Management Insights


Network Instruments has just released its Fifth Annual State of the Network Global Study. The results suggest a potential management storm as IT teams face significant monitoring challenges from multiple forms of cloud computing, as well as substantially increased bandwidth demands.

Study Highlights

  • Moving apps to the cloud: 60% anticipate half of their apps will run in the cloud within 12 months
  • Video is mainstream: 70% will implement video conferencing within a year
  • Bandwidth demand driven by video: 25% expect video will consume half of all bandwidth in 12 months
  • Chief application challenge: 83% were most challenged by identifying the problem source
  • Increased bandwidth demands: 33% expect bandwidth consumption to increase by more than 50% in the next two years

“While IT teams embrace cloud services and video conferencing as a way to increase cost savings and business flexibility, these technologies introduce new components and environments which make ensuring positive end-user experience all the more challenging,” said Brad Reinboldt, senior product manager of Network Instruments. “The reported lack of monitoring tools, quality metrics, and visibility create serious obstacles that prevent IT from effectively managing performance and jeopardize costly technology investments.”

Cloud Computing

While the number of organizations embracing cloud (60%) remains steady compared to last year’s study results, the number of implementations per organization is growing. Most notable were Software as a Service (SaaS), Infrastructure as a Service (IaaS), and private cloud deployments, which grew by 10% over the last year. On average, respondents expected one-third of their applications to be running in the cloud within 12 months.
Seventy-four percent of respondents indicated their chief concern about cloud migration was ensuring corporate data security. The number is nearly double that of last year, and may be the primary reason for slowing cloud adoption by new organizations. Other top concerns included lack of accurate end-user experience monitoring and the bandwidth impact of cloud services.
Although cloud is challenging from a monitoring and visibility perspective, one-third of organizations indicated application availability increased as a result of cloud migration.

Video Conferencing

After many false starts, enterprise video conferencing is now mainstream. Video conferencing has been implemented by 55%, with an expected 70% within a year. Nearly two-thirds of these organizations have implemented multiple deployments throughout their organization. These include standard conference rooms (75%), desktop PCs (63%), and telepresence systems (30%).

While video is clearly embraced, several cited challenges that could hinder wider adoption. Inadequate user knowledge and training was viewed as the largest concern in ensuring a positive video conference experience (53%). This was followed by difficulties allocating and monitoring bandwidth (47%), and a lack of tools to manage video performance (47%).

Further compounding these issues is the lack of standardized metrics for monitoring video quality. Network professionals typically relied on a mix of metrics to assess quality, including latency (76%), packet loss (69%), and jitter (60%). Surprisingly, fewer than one in five used Video MOS, a metric specifically designed to determine video quality.

By the beginning of 2013, nearly one-quarter of respondents expect video to consume over half of their bandwidth.

Performance and Bandwidth Management

As applications become more complex and tiered, resolving service delivery issues grows more difficult. Eighty-three percent of respondents said the largest application troubleshooting challenge was identifying the problem source. Meanwhile, more than two-thirds of respondents predicted network traffic demands would increase by 25%-50% within two years.

State of the Network Global Study Background

The State of the Network Global Study has been conducted annually for five years. This year, Network Instruments engaged 163 network professionals to understand and quantify new technology adoption trends and daily IT challenges. Respondents were asked, via a third-party web portal, to answer a series of questions on the impact, challenges, and benefits of cloud computing, video conferencing, and application performance management.

The results were based on responses by network engineers, IT directors, and CIOs in North America, Asia, Europe, Africa, Australia, and South America. Responses were collected from October 22, 2011 to January 3, 2012.



Thanks to Network Instruments for this article

Telgo Communications, a New VoIP Operator, Announces Services

Telgo Communications, a division of the AKR Global Group, has announced the launch of a digital home telephony service based on voice-over-internet protocol (VoIP) to subscribers across Canada, offering unlimited calls to Canadian, US and Puerto Rican phone numbers. Telgo also offers resold high speed internet connections in the provinces of Ontario and Quebec, providing data speeds of up to 6Mbps. The VoIP service is accessible via any broadband connection, with calls transmitted over Telgo’s own network. A basic plan costs CAD14.95 (USD14.95) per month, including unlimited North American calling, while a ‘global’ package offers unlimited calls to landline numbers in over 75 countries for CAD24.95 per month.

Article by TeleGeography


Prevent Network Fires with Baselines

As a member of the network team, your day is often consumed by rushing from one network fire to the next. What if you could anticipate fires before they flared out of control? In this article we’ll illustrate how to baseline service activity and component health to identify hotspots on your network before they go ablaze.

Understanding Baselining

There’s no need to arbitrarily set alarms anymore. With baselines, you can set intelligent alerts customized for your environment. Most performance management solutions, like the Observer Platform, offer baselining capabilities that collect performance data on device, application, and network operations. This data determines average response time, device utilization, and application status and errors for a given time period, and is used to establish benchmarks and acceptable deviations from the baseline. When a deviation threshold is crossed, your team is notified of the issue. You can also clearly assess the issue’s severity by seeing how far performance has deviated from the baseline.

Baselining Benefits

There are three primary reasons why you should baseline performance.

  1. Understanding good network performance

    Many teams don’t have an idea of what “normal” performance is for their environment. Relying upon application vendor guidance or someone’s rule of thumb ignores the components and conditions unique to your environment. By establishing what’s typical for your network today, you can determine what is atypical in the future.

  2. Proactively identify and triage problems

    Baselining gives you a clear perspective and quick way to determine where problems are brewing. It’s easy to look at a chart displaying the network, system, and application components of a service and identify where performance has deviated from what is acceptable. You can also triage issues based upon the severity of the deviation.

  3. Validating user complaints

    How often do users complain about email or Internet services running slow? Use baselines to compare the time of the complaint to similar time periods in the past and confirm whether any deviations have occurred.
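The apples-to-apples comparison described above can be sketched by selecting the same weekday and hour from previous weeks. A minimal helper, assuming weekly seasonality in the traffic pattern:

```python
from datetime import datetime, timedelta

def comparison_windows(complaint_time, weeks_back=4):
    """Return the same weekday-and-hour moment from each of the previous
    N weeks: an apples-to-apples set of periods to compare baselines against."""
    return [complaint_time - timedelta(weeks=w) for w in range(1, weeks_back + 1)]
```

Comparing the complaint window against these past windows, rather than against an all-time average, avoids flagging a Monday-morning email rush as a deviation when every Monday morning looks the same.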

Key Metrics to Baseline

You’ll want to baseline service performance from two different perspectives.

  1. Baseline service delivery components

    This gives you predictive analysis to assess when conditions arise that might be detrimental to overall application performance. For example, you’ll be able to spot when CPU utilization or network demand is creeping up to the point of impacting service response time.

  2. Track user experience metrics

    Monitor any metrics that are seen by the end user, such as propagation delay, response times, and errors.

Baselining and tracking acceptable performance for all components supporting a service means your network team can successfully identify and manage network hotspots before they flare out of control. It also improves the effectiveness of your network team so that you won’t be jumping from fire to fire.

We want to thank Network Instruments for this article

Synchronizing Time Using Presentense

PresenTense offers an alternative to the limited functionality of Microsoft’s W32Time program. PresenTense will not only synchronize all of the Windows PCs on the network, but also provides alert notification and an audit trail, neither of which is available from Microsoft W32Time (Windows Time Service). The PresenTense Client and Server software are GUI-based programs that can provide a primary and a backup time reference for redundancy.

Client Software

PresenTense Client software synchronizes PCs to the time server and/or to another PC on the network that is running the Server software. If a PC can’t reach its time reference, it can email an alert notification that it could not be synchronized. PresenTense LAN Time Analyzer is an administrative tool that monitors the time accuracy of all PCs on the network. If a PC exceeds a user-defined accuracy specification, this program can run any exe-based program and can also open a message on the PC’s monitor, alerting you to a PC whose error is higher than expected.


The PresenTense NTP Auditor program provides an audit trail of a PC’s time by comparing it to up to three different NTP time references. The program can provide a continuous printout as hard-copy proof that each PC was synchronized at any given moment, and it also logs this information in a text file sorted automatically by month and day. The time is sampled at set intervals, and the error of the PC’s time relative to the reference NTP time servers is permanently captured. If someone manually sets the PC’s time between scheduled samples, the program automatically triggers an unscheduled sample to log how far from UTC the PC was set and when the event occurred. Once the PC is resynchronized or manually set again, another unscheduled sample occurs and its time is logged.
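The core of such an audit (a sketch, not Spectracom's implementation) is a standard SNTP exchange: send an RFC 4330 client request, read the server's transmit timestamp, and estimate the local clock's offset. The three-server limit mirrors the Auditor's design; the 0.5-second accuracy limit is a placeholder assumption:

```python
import socket
import struct
import time

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def ntp_to_unix(seconds, fraction):
    """Convert a 64-bit NTP timestamp (32-bit seconds + 32-bit fraction) to Unix time."""
    return seconds - NTP_DELTA + fraction / 2**32

def query_offset(server, timeout=5.0):
    """Minimal SNTP exchange (RFC 4330): estimate the local clock's offset
    from `server`, ignoring network path asymmetry."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        t1 = time.time()
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
        t4 = time.time()
    secs, frac = struct.unpack("!II", data[40:48])  # server transmit timestamp
    return ntp_to_unix(secs, frac) - (t1 + t4) / 2

def audit(servers, limit=0.5):
    """Compare this PC against up to three references (as NTP Auditor does)
    and flag any whose measured offset exceeds `limit` seconds."""
    offsets = {srv: query_offset(srv) for srv in servers[:3]}
    return offsets, {srv: o for srv, o in offsets.items() if abs(o) > limit}
```

A production auditor would additionally timestamp each sample, append it to the monthly log, and re-sample immediately on a detected manual clock change, as described above.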



If you would like to try PresenTense on your network, you can obtain a free 30-day evaluation trial by contacting us at Telnet Networks at 800-561-4019. You can download your 30-day free trial here.

Emulate and Analyze Real Time IP Traffic in the Cloud

Service providers and network equipment manufacturers now have the ability to emulate and analyze real-time IP traffic in cloud environments, across multiple hypervisors, right down to each individual application flow. With the migration from dedicated network boxes to flexible, fluid cloud environments, you require a test capability that can work with dedicated environments or operate in the new virtualized or cloud environment.

To test a cloud environment you need a completely virtualized test system that can test open-source hypervisors as well as fully support vendor hypervisors. Shenick diversifEyeVM (Virtual Machine) can emulate and test virtual infrastructure across multiple hypervisors, giving test analysis right down to each application so that virtualized networks run smoothly, especially in extreme situations.

You now have more flexibility, scaling capabilities, easier software migration to virtual machine-based testing, and the ability to pinpoint errors across layers 2-7 on a highly granular basis. We can provide powerful per-flow emulation and real-time analysis in VM-to-VM scenarios and hybrid external-to-VM applications. diversifEye VM offers many advantages to the market:

  • Works with multiple hypervisors.
  • Offers no loss of functionality when combining real and VM test blades.
  • Ensures the flexibility to add as many VMs and virtual interfaces (VI) as required.
  • Supports multiple VIs per virtual test blade.
  • Provides an easy, software-based migration path to an all-virtualized diversifEyeVM.
  • Offers a vast range of application tests covering security, voice, video and data applications, as well as secure VPN offload/tunnel, address spoofing verification, DDoS attack mitigation testing, etc.