Which Network Analyzer Do You Use? Commercial or Freeware?

Network engineers often have a mix of monitoring tools in their arsenal to manage performance. But when should you rely on commercial analyzers, and when should you turn to freeware tools? In this article, we’ll explore common troubleshooting situations and the advantages and limitations of each type of analyzer.

Consider the Problem

Deciding which analyzer to use depends largely on the nature of the problem. Does the problem involve application issues? Is the issue occurring remotely? Does the issue occur sporadically?

Application Analysis

If you’re unsure whether the issue is caused by the network or the application, a commercial analysis solution is the better choice. These solutions, like Observer, provide greater application-level detail and graphical tracking of conversations traversing multiple segments, which are critical for getting to the bottom of application and delay issues quickly. In addition, they offer robust expert analysis, which automates the troubleshooting process, reducing the time needed to find the cause of a problem. Conducting application analysis with freeware tools is not an efficient use of time.

“What is missing from freeware and available in commercial tools is application analysis,” said Mike Pennacchi, founder of Network Protocol Specialists. “It’s the ability to decode specific SQL calls or reassemble VoIP packets to analyze conditions, obtain quality metrics, and easily generate reports.”

Remote Capture

Often, troubleshooting involves viewing what is happening from the perspective of an end user. This means having an analyzer on someone’s remote machine so you can run a capture and verify connectivity. If you don’t have an extra licensed copy of your commercial analyzer, a freeware tool may be the best option due to its unrestricted licensing.

“With a freeware analyzer, I can set up a call with the end-user using Citrix GoToMeeting and gain control of their machine,” said Pennacchi. “I’ll download the freeware capture tool to their system and grab a trace file from the location. I can then either FTP or e-mail it to myself for further analysis. There are many cases where I grab a capture using freeware and bring it into a commercial analyzer for in-depth analysis.”
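Before transferring a remote trace file for deeper analysis, it can help to sanity-check that the capture is intact. Here is a minimal sketch in Python (standard library only; the `pcap_summary` name and the returned fields are illustrative choices, not part of any particular tool) that reads the classic pcap global header:

```python
import struct

def pcap_summary(path):
    """Read the global header of a classic pcap file and report basic facts.

    Returns the format version, snapshot length, and link type, or raises
    ValueError if the file does not look like a pcap capture.
    """
    with open(path, "rb") as f:
        header = f.read(24)  # the classic pcap global header is 24 bytes
    if len(header) < 24:
        raise ValueError("file too short to be a pcap capture")
    magic = header[:4]
    if magic == b"\xd4\xc3\xb2\xa1":
        endian = "<"  # little-endian, microsecond timestamps
    elif magic == b"\xa1\xb2\xc3\xd4":
        endian = ">"  # big-endian, microsecond timestamps
    else:
        raise ValueError("not a classic pcap file (bad magic number)")
    major, minor, _tz, _sig, snaplen, linktype = struct.unpack(
        endian + "HHiIII", header[4:]
    )
    return {"version": (major, minor), "snaplen": snaplen, "linktype": linktype}
```

A truncated or mis-transferred file fails fast here, before anyone spends time loading it into an analyzer.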

Filtering Flexibility

When a capture involves a small amount of data and you have a good idea of what the problem is, freeware programs like Wireshark make it easy to filter down to the relevant traffic. These programs may not show you where the slowdown is occurring, but they can help narrow the number of packets you need to inspect. In addition, advanced users can modify the code or use APIs to define specific types of filtering and analysis.
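The narrowing idea can be sketched in a few lines of Python. This is an illustrative model of filtering, not Wireshark’s actual display-filter engine, and the packet-record fields are assumptions:

```python
def filter_packets(packets, src=None, dst=None, dport=None, flags=None):
    """Return only the packets matching every criterion that was given.

    `packets` is a list of dicts with keys like 'src', 'dst', 'dport', and
    'flags' -- a stand-in for records decoded from a capture file.
    """
    def matches(pkt):
        return ((src is None or pkt.get("src") == src) and
                (dst is None or pkt.get("dst") == dst) and
                (dport is None or pkt.get("dport") == dport) and
                (flags is None or flags in pkt.get("flags", "")))
    return [p for p in packets if matches(p)]

capture = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "dport": 80,  "flags": "S"},
    {"src": "10.0.0.5", "dst": "10.0.0.9", "dport": 80,  "flags": "R"},
    {"src": "10.0.0.7", "dst": "10.0.0.9", "dport": 443, "flags": "S"},
]

# Narrow the capture down to just the TCP resets toward the web server.
resets = filter_packets(capture, dst="10.0.0.9", dport=80, flags="R")
```

The filter does not explain *why* the connection was reset, but it cuts the packets you must inspect by hand from thousands to a handful, which is exactly the value the freeware tools deliver.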

Long-Term and High Speed Capture

If you’re using monitoring tools daily to manage network and application performance, you likely use long-term packet capture solutions, like the GigaStor, to store terabytes of packets. In this case, commercially available tools are really the only way to go. If you’re looking at gigabytes worth of data, you need indexing and search capabilities to quickly locate the relevant packets.
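To see why indexing matters at that scale: without an index, finding one conversation means a linear scan through terabytes of raw packets. A toy sketch of the idea in Python (the record layout is an assumption; real capture appliances use far more sophisticated on-disk indexes):

```python
from collections import defaultdict

def build_index(records):
    """Build a simple host -> list-of-file-offsets index over capture records.

    `records` is an iterable of (offset, timestamp, src, dst) tuples, a
    stand-in for packet headers scanned once from a large capture store.
    """
    index = defaultdict(list)
    for offset, _ts, src, dst in records:
        index[src].append(offset)
        index[dst].append(offset)
    return index

def lookup(index, host):
    """Return the offsets of every stored packet involving `host`."""
    return index.get(host, [])

records = [
    (0,    1000.0, "10.0.0.5", "10.0.0.9"),
    (128,  1000.1, "10.0.0.7", "10.0.0.9"),
    (256,  1000.2, "10.0.0.5", "10.0.0.8"),
]
idx = build_index(records)
offsets = lookup(idx, "10.0.0.5")  # jump straight to this host's packets
```

Once the index exists, locating the relevant packets is a dictionary lookup rather than a re-read of the entire store.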

Also, when we’re talking about high-speed capture, you need appliance-based solutions capable of not only capturing packets but also saving them to disk at the line rate of your network. “The free tools work well for capturing at a workstation or remote office,” said Pennacchi. “But, in high-speed environments we start to lose packets, which means we aren’t going to get a good picture of what’s really causing the problem.”
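A quick back-of-the-envelope calculation shows why workstation-class tools drop packets at these rates (assuming the worst case of a full-duplex 10 Gb link with both directions fully utilized):

```python
# What line-rate capture on a full-duplex 10 Gb link actually demands.
link_gbps = 10                                  # one direction of the link
duplex_bytes_per_s = 2 * link_gbps * 1e9 / 8    # both directions, in bytes/s

print(duplex_bytes_per_s / 1e9)                 # 2.5 GB/s sustained to disk

store_tb = 1
seconds_to_fill = store_tb * 1e12 / duplex_bytes_per_s
print(round(seconds_to_fill / 60, 1))           # ~6.7 minutes per terabyte
```

Sustaining 2.5 GB/s of writes while filling a terabyte every few minutes is well beyond a typical workstation’s disk subsystem, which is why purpose-built capture appliances are required at these speeds.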

Ultimately, selecting an analyzer should come down to choosing the solution that will solve the problem in the shortest amount of time. In many cases, the answer may be both, depending on the specific situation you face. The table below simplifies the decision by outlining the advantages of each solution.

Commercial Analyzers

  • Provide in-depth application analysis for resolving problems beyond the network
  • Offer greater support of analysis on high-speed networks without dropped packets
  • Only option for long-term packet capture
  • Streamline pinpointing delay through graphical display of conversations and multihop analysis

Freeware Analyzers

  • Flexible licensing ideal for remote capture
  • Good for locating known issues with limited packet analysis
  • Use APIs to define monitoring and filtering variables

10 Gb Monitoring: Learning from a Fortune 500 Company

Upgrading from Gigabit to 10 Gb

Upgrading from Gigabit to 10 Gb is a current initiative for many IT teams. Yet many teams continue to rely on older monitoring tools and methods to manage performance on their new higher-speed links. In this article, we’ll look at a Fortune 500 retailer’s migration to 10 Gb and the critical adjustments the team made to its performance management strategies and tools to ensure on-time application delivery in the face of higher speeds.

When shifting to 10 Gb, the retailer’s network team faced challenges in the following areas:

  • Accessing traffic
  • Monitoring at 10 Gb speeds
  • Understanding overall performance

1) Accessing Traffic

Problem: The primary ways to give monitoring tools access to network traffic are port spanning, aggregation switches, and TAPs. While the retailer relied on spanning to access its gigabit network, it had fewer ports available in the 10 Gb environment. Spanning on 10 Gb networks also meant a greater chance of dropped packets, and that error packets would be filtered out by the span port.

Solution: To overcome these issues, the retailer used a combination of TAPs and aggregation switches. TAPs ensured all packets were copied and captured by the monitoring devices. Aggregation switches allowed the network team to combine multiple lightly utilized links into a single 10 Gb analysis device for more cost-effective monitoring.

2) Monitoring at 10 Gb Speed

Problem: The retailer’s network team relied on a combination of open-source and commercial software analysis tools to manage gigabit networks. In their full-duplex 10 Gb environment, these tools were overwhelmed.

Solution: To manage 10 Gb performance, the retailer purchased long-term packet capture appliances that could capture and save 10 Gb traffic to disk at line rate. When troubleshooting 10 Gb links, access to the packets was essential for quick and accurate resolution.

“With their older open-source tools, events or errors would pop up, but they wouldn’t have any real indication of the source of the problem. Using the Network Instruments GigaStor™ long-term capture appliance, they could go back to the packets after the incident and ascertain in detail why the problem occurred.”

3) Understanding Overall Performance

Problem: The retailer had previously been troubleshooting and managing performance within its three largest data centers independently of one another. Without an aggregated view of performance across the centers, it was difficult to assess the scale of problems and prioritize troubleshooting efforts. The IT team also lacked specific details about critical applications, making it hard to isolate the source of problems to the network or the application. Within 10 Gb environments, tracking applications was nearly impossible without in-depth analytics.

Solution: The network team implemented a high-level reporting solution to aggregate performance across the three critical data centers. With a view of overall performance, they could immediately assess the scope and impact of problems. In addition, they used baselines to establish benchmarks for key applications and set alarms to alert the team to significant deviations in performance. With an analysis platform capable of application transaction analysis, they could also track specific application metrics and error details to isolate issues within the application.
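The baseline-and-alarm approach described above can be sketched simply. This is an illustrative model only; the three-sigma threshold and the sample response times are assumptions, not the retailer’s actual configuration:

```python
import statistics

def baseline(samples):
    """Compute the mean and standard deviation of historical measurements."""
    return statistics.mean(samples), statistics.stdev(samples)

def check_alarm(value, mean, stdev, threshold=3.0):
    """Flag a measurement deviating from baseline by more than `threshold` sigmas."""
    return abs(value - mean) > threshold * stdev

# Historical response times (ms) for a key application, gathered during
# normal operation to establish the benchmark.
history = [102, 98, 105, 99, 101, 97, 103, 100]
mean, stdev = baseline(history)

slow = check_alarm(250, mean, stdev)    # large deviation -> raise an alarm
normal = check_alarm(104, mean, stdev)  # within normal variation -> quiet
```

The point of the baseline is that “slow” is defined relative to the application’s own history, so the team is alerted to genuine deviations rather than chasing fixed thresholds that fit no application well.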


Rather than relying on a reactive approach to troubleshooting, the retailer developed a proactive monitoring strategy while implementing 10 Gb. The new analysis platform could keep pace with the higher network speeds and allowed the network team to be proactive in managing performance.