When to Dedupe Packets: Trending vs. Troubleshooting

When it comes to points of network visibility, common knowledge dictates that more is always better. And, when we’re talking about troubleshooting and identifying the point of delay, increased visibility is important. But multiple points of visibility can also lead to duplicate packets. There are times when duplicate packets help in troubleshooting, but they can also lead to incorrect analysis and forecasting. It’s critical for successful network management to understand when duplicate packets can be your friend or foe.

To determine whether you need duplicate packets, you need to understand what type of analysis is being done: network trending or troubleshooting.

How duplicates impact network analysis

Trending: Duplicate packets result in data being counted multiple times, leading to skewed trending statistics such as application performance and utilization metrics. More time is also required to process and analyze the data.

Troubleshooting: When correlating packets traversing multiple segments, for example via MultiHop or Server-to-Server analysis, capturing these duplicates is critical in order to pinpoint where traffic was dropped or where excessive delay was experienced.
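To make the trending impact concrete, here is a toy Python sketch. The packet IDs, sizes and link capacity are invented for illustration; it simply shows how counting the same packet captured on two links inflates a utilization figure until the duplicates are collapsed:

```python
# Illustrative only: how duplicate captures inflate a utilization estimate.
# Packet IDs, sizes and the link capacity below are invented for the example.

LINK_CAPACITY_BPS = 1_000_000_000   # hypothetical 1 Gbps link
WINDOW_SECONDS = 1                  # one-second trending interval

# (packet_id, byte_count); pkt-1 and pkt-2 were captured on two monitored
# links, so they appear twice in the aggregated feed.
captured = [
    ("pkt-1", 1500), ("pkt-2", 1500), ("pkt-3", 600),
    ("pkt-1", 1500), ("pkt-2", 1500),
]

raw_bits = sum(size * 8 for _, size in captured)
deduped_bits = sum(size * 8 for size in {pid: size for pid, size in captured}.values())

denominator = LINK_CAPACITY_BPS * WINDOW_SECONDS
print(f"Utilization counting duplicates:  {raw_bits / denominator:.6%}")
print(f"Utilization after deduplication:  {deduped_bits / denominator:.6%}")
```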

Typically, as an engineer, you want to have access to duplicate packets when necessary for troubleshooting, but you do not want those duplicate packets to skew network traffic summary statistics. So, how do you design a monitoring infrastructure that gives you the flexibility to quickly troubleshoot while ensuring accurate trending?

1) Utilize two appliances when capturing and analyzing traffic.

The first solution should be a probe appliance, such as the Gigabit or 10 Gb Probe appliances, dedicated specifically for trending. The second solution would be a retrospective network analysis solution, such as the GigaStor, devoted to capturing all of the traffic including duplicate packets. When a problem is discovered within trending, you then have access to all the packets for troubleshooting.

2) Develop a monitoring strategy that minimizes duplicates for trending.

The advantage of avoiding duplicate packets by design is that it reduces the processing that your hardware will have to perform to remove duplicates.

  1. Be selective when choosing monitoring points.

Identify the aggregation points on your network, such as when traffic enters the core or a server farm, where the traffic naturally collapses from multiple links and devices into a single primary link. This gives you maximum visibility from a single vantage point when looking at performance or trending statistics.

  2. Don’t get too carried away with SPANs or mirror ports.

Monitoring device-to-device traffic on the same switch can be tricky and will produce a lot of duplicate packets if you are not mindful of how the data flows. Identify the key paths the data takes, such as the communication between a front-end server and a back-end server connected to the same switch.

If you monitor all the traffic to and from both devices, you will end up with duplicate traffic. In that case, mirror the traffic to and from the front-end server only. This will give you the conversations between the clients and the front end as well as the conversations between the front end and the back end.

Additionally, if you SPAN a VLAN or multiple ports, this can also cause duplicates. Spanning uplink ports or using TAPs is very useful when monitoring communication between devices that are connected to different switches.

3) When capturing packets for trending, remove duplicates via hardware.

If you’re using a network monitoring switch (or network packet broker), like Matrix, verify that it has packet deduplication. This is important if you are aggregating multiple links which throws all the traffic including duplicates into a single bucket before feeding the data to the analysis device. Additionally, if you have GigaStor, you can also utilize the Gen2 capture card to perform deduplication.
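For readers curious about what deduplication actually involves, the following Python sketch mimics in software what a dedup-capable capture card or packet broker does in hardware: hash the header fields that stay constant between capture points and drop any packet whose hash was already seen within a short window. The field selection and window length here are assumptions for illustration, not a description of how Matrix or the Gen2 card implements it.

```python
import hashlib
import time
from collections import OrderedDict

# Illustrative software analogue of hardware packet deduplication.
# The window length and the choice of header fields are assumptions.

DEDUP_WINDOW_SECONDS = 0.05  # duplicates from overlapping capture points
                             # normally arrive microseconds apart

_seen = OrderedDict()        # digest -> timestamp of first sighting

def is_duplicate(ip_id, src, dst, proto, payload, now=None):
    """Return True if an identical packet was seen inside the dedup window."""
    now = time.monotonic() if now is None else now

    # Hash only fields that stay constant as the packet crosses the network;
    # TTL and MAC addresses change hop by hop, so they are excluded.
    digest = hashlib.sha1(
        f"{ip_id}|{src}|{dst}|{proto}|".encode() + payload
    ).digest()

    # Expire entries that have aged out of the window (oldest first).
    while _seen and now - next(iter(_seen.values())) > DEDUP_WINDOW_SECONDS:
        _seen.popitem(last=False)

    if digest in _seen:
        return True
    _seen[digest] = now
    return False
```

Keeping the window short matters: duplicates created by overlapping capture points arrive almost immediately, whereas genuine TCP retransmissions arrive much later and should not be discarded.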

By being aware of the impact of duplicates on monitoring and implementing a strategy of dedicated hardware for trending and troubleshooting, you can guarantee forecasting and monitoring accuracy while ensuring granular and speedy troubleshooting.

Thanks to Network Instruments for the article.

UC, Enterprise Collaboration Markets Predicted to Keep Growing Fast

Unified communications is on the move.

The unified communications (UC) segment is forecast to grow from $22.8 billion in 2011 to $61.9 billion by 2018, according to Transparency Market Research.

UC is viewed by businesses of all sizes as a way to cut costs and improve both productivity and collaboration. It cuts costs by using the latest Internet technologies in place of more expensive options such as traditional corporate video conferencing systems, and it boosts productivity by reducing the number of places that employees need to check to stay connected with colleagues. It also enables workers to easily employ the communications medium that best suits the type of information being shared, whether chat, email, call or video conference.

Further, UC boosts collaboration by giving employees more face time and a sense of working together in the same office even when they are on the road or working from home. UC, when properly deployed, can recover the collaboration lost to the mobility revolution, a necessary corrective as the physical office loses importance.

It is unsurprising, then, that the enterprise collaboration market is also projected to see healthy growth in the next few years. According to research group Market and Markets, the enterprise collaboration market is expected to expand from $47.30 billion in 2014 to $70.61 billion by 2019.

The enterprise collaboration market focuses on solutions that drive this crucial UC collaboration among employees as they move out of the office and more into the field.

A third segment that will benefit from the growth of both the UC and enterprise collaboration markets is UC-as-a-service (UCaaS). Combining two of the biggest trends in IT, UC and the cloud, UCaaS is generating substantial interest among enterprise customers.

It is not hard to see why: one of the challenges that still plagues the UC market is its relative deployment complexity. While the promise of UC is simpler communications, it is still a new technology, and many firms struggle to select the right vendors and put together a solution that works.

By using UCaaS, businesses can eliminate the setup difficulties by outsourcing deployment to a UCaaS provider that specializes in UC and offers it as a service instead. This offloads the complexity to the provider, making adoption turnkey for businesses.

Thanks to Enterprise Communications for the article.

JDSU’s Network Instruments Named a Leader in Gartner Magic Quadrant

JDSU’s Network Instruments business unit has been positioned as a Leader in the new Gartner Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD). In the Gartner, Inc. report, Network Instruments is recognized as a network performance management leader in its category for completeness of vision and ability to execute. The Gartner Magic Quadrant is considered one of the tech industry’s most influential evaluations of enterprise network solutions.

Gartner created the new NPMD Magic Quadrant to cover an emerging solution segment that responds to the added network complexity driven by the impact of virtualization and the cloud on organizations. These developments have required companies, and their network managers, to more proactively and strategically manage and optimize their network performance for a better end-user experience. As the report notes, “a fast-growing subsegment, the NPMD tool market is currently estimated by Gartner to be approximately $1 billion in size.”

“Although NPMD technologies are most often used for forensic purposes once an issue has occurred, increasingly they are also used to monitor and detect performance issues,” wrote Gartner analysts Vivek Bhalla, Jonah Kowall and Colin Fletcher in Criteria for the New Magic Quadrant for Network Performance Monitoring and Diagnostics, July 2013. “Additionally, the need for network professionals to support demanding applications (such as voice, video, collaboration and unified communications, which are particularly sensitive to latency and quality of service issues) has driven demand in capabilities of the offerings. These tools are essential when troubleshooting and monitoring quality of service.”

“We believe being named a leader by Gartner in its NPMD Magic Quadrant on the heels of our acquisition by JDSU in January, has truly underscored 2014 as a breakout year for Network Instruments,” said Douglas Smith, newly appointed vice president and general manager of JDSU Performance Management in the Network and Service Enablement business unit at JDSU. “With this momentum, we will maintain our singular focus on the customer and continue to deliver deep and rich performance management products with outstanding customer responsiveness.”

Network Instruments’ current NPMD solution comprises its Network Instruments Observer (v16), Observer Reporting Server (v16), GigaStor (v16) and Observer Infrastructure (v4) products.

Thanks to Network Instruments for the article.

How Ixia Is Making Network Blind Spots Visible

Between data centres, clouds and physical servers, enterprises are seeing their networks grow rapidly in size and complexity. With this ever-expanding infrastructure comes the issue of visibility: as the network grows, data slips into blind spots, which can be a stumbling point for businesses.

Roark Pollock, Ixia VP of Marketing, Network Visibility Solutions, spoke to CBR about how Ixia is battling the blind spots with its new Visibility Architecture solution.

Visibility Architecture is being launched to ‘help organisations take back control of their data centre’ – have they lost control?

It’s not so much about losing control, but being able to see what’s going on. I think there’s a possibility that companies have blind spots in their network and so whether you call it loss of control or of visibility, at the end of the day you can’t manage what you can’t see. So if you don’t have any visibility into those blind spots in the network or the data centre it becomes very difficult to manage and worst of all to troubleshoot.

How does Visibility Architecture help to eliminate those blind spots?

What we’re trying to do is give customers the ability to harvest data from every point in the network between the applications that sit in the data centre and the users that sit on the far edge of the network. It doesn’t matter if they’re trying to tap into that data on the physical network itself, or into the virtualised part of the data centre, which has historically been a very large blind spot for many customers. We have the ability to provide them with access solutions that give them access to data in a virtualised data centre environment. There’s also the ability to deploy and maintain security tools in a resilient fashion across the network.

When businesses move their data to the cloud, can these create more blind spots if information is not properly managed?

Yes, they can. We provide the ability for customers to maintain visibility into a private cloud environment, so we can provide them with our virtual visibility framework to maintain visibility into those private cloud environments. But this has historically been one of the bigger issues for companies to address from a performance management standpoint.

Alternatively, does the legacy of data held within physical internal servers prove to be a problem?

It can be, but the great thing about physical servers is that they have to connect to something. The data that sits on those physical servers, when it’s used by an application, is delivered to an end user. If it’s on a physical server, as soon as that data starts to flow, it hits the physical network, which you can see into very easily with the Visibility Architecture. You can see those packets as they traverse the physical network from the moment they leave the server, across the core, to the edge of the network where the application and the data are delivered to the end user.

Some organisations are managing their data using several different solutions, often through different services across the data centres, cloud and physical servers. Can this be a hindrance?

It absolutely is. The place where customers are having the biggest problem today is that they have a number of organisations using a wide variety of different tools. For the tools to work effectively they have to be able to tap into information in different parts of the network. That creates a big demand for access to the network from all these different tools. The other problem is that if I have a lot of tools, I have to buy enough capacity, and even if I could tap into all these different parts of the network, that’s a tremendous amount of data for the tools to absorb.

What we’re trying to do is solve that capacity problem. We’re providing the ability to harvest that end-to-end data off the network and then slice it down very finely to exactly what each one of the specific tools of the organisation needs to manage the network and the applications. So you’re actually making better use of the tool that you’ve invested in.
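As a rough illustration of that slicing idea, the Python sketch below routes each packet from an aggregated feed only to the tools whose filters match it. The tool names and filter rules are invented for the example, and this is a conceptual stand-in, not Ixia's implementation; a real packet broker does this in hardware at much higher rates.

```python
# Conceptual sketch: feed each monitoring tool only the traffic it needs.
# Tool names, ports and sample packets are invented for illustration.

def voip_only(pkt):
    # VoIP analyzer only cares about signalling and media-style ports
    return pkt["dst_port"] in (5060, 5061) or 16384 <= pkt["dst_port"] <= 32767

def web_only(pkt):
    # web performance tool only needs HTTP/HTTPS
    return pkt["dst_port"] in (80, 443)

def security_all(pkt):
    # the IDS wants everything
    return True

TOOL_FILTERS = {
    "voip-analyzer": voip_only,
    "web-apm": web_only,
    "ids": security_all,
}

def distribute(packets):
    """Yield (tool_name, packet) pairs, sending each tool only matching traffic."""
    for pkt in packets:
        for tool, matches in TOOL_FILTERS.items():
            if matches(pkt):
                yield tool, pkt

# Example aggregated feed (invented values):
feed = [
    {"src": "10.0.0.5", "dst_port": 443, "bytes": 900},
    {"src": "10.0.0.7", "dst_port": 5060, "bytes": 300},
]
for tool, pkt in distribute(feed):
    print(tool, pkt["dst_port"])
```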

So is an end-to-end solution the way forward when it comes to managing data?

It’s not so much about managing data as about managing the overall underlying infrastructure and the experience they’re delivering to their users. Once they have that visibility, they can start to deliver on the service level agreements they are trying to meet for their end users. They can manage the infrastructure better and troubleshoot problems much more quickly because they have access to data that they haven’t had in the past.

Visibility Architecture is our way of actually starting to move beyond just talking about products and presenting to our customer base a set of end-to-end solutions so they can go out there and position the overall Visibility Architecture and not just a set of individual products. It’s more than just a fabric or some connected products, it’s bringing together all of the disparate products to solve their problems.

Thanks to CBR for the article.

Public Mobile shutting down CDMA in May, moving to parent Telus’ network

Small cellco Public Mobile, a subsidiary of nationwide Canadian operator Telus, has announced that it will shut down its CDMA-based network covering Montreal, Toronto and surrounding areas in May this year, and migrate all users to its parent’s GSM/W-CDMA/HSPA/LTE network. In a notice for its subscribers on its website, Public warns users that in May they will need a new phone compatible with the Telus network to continue receiving services. It is offering a discount on the handset purchase for existing customers, whilst all its tariff plans are changing in May.

Thanks to TeleGeography for the article.

3 Technology Monitoring Tips

My new report is live for Forrester clients – Predictions For 2014: Technology Monitoring. Normally I am a bit of a skeptic when it comes to predictions, especially in regard to technology, because while they are interesting to read they can cause confusion and unnecessary deliberation for a buyer/strategist if they are not in context.

So my aim for this report was to provide some concrete advice for I&O professionals in 2014 in regards to their technology monitoring (user experience, applications and infrastructure) strategy or approach.

My top-level advice is that during 2014, I&O has to concentrate on monitoring the business technology that serves external customers. In fact this is not just a call for I&O professionals but also for the rest of the business, including marketing and eBusiness professionals. Why? Just take a look at the near-weekly media reports on “computer glitches” during 2013. These glitches meant lost revenue but, more seriously, damaged brand image. Technology fuels business, and this means that monitoring has to be a strategic business concern.

So to avoid your company being the next computer glitch headline you should:

1. MAKE SURE THAT YOUR MONITORING SOLUTIONS COVER MOBILE AND WEB FUELED BUSINESS SERVICES

From a mobile perspective, your monitoring solutions should provide holistic insight in regards to mobile devices and applications in terms of availability and performance down to the network/carrier level.

From a web perspective, in-depth web application monitoring down to the code level is a must.

2. ENSURE THAT YOUR MONITORING APPROACH INCLUDES END USER EXPERIENCE MONITORING

Ultimately, applications and infrastructure can appear to be performing well, but what really matters is the end-user/customer experience.

Many solutions offer both synthetic (simulated user) and real user monitoring. You need both to ensure holistic monitoring here. Real user monitoring can help to identify unpredicted customer behavior caused by a configuration update error – such as when Delta Air Lines’ website incorrectly began offering ultra-low fares at the end of last year.
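As a simple illustration of the synthetic half of that pairing, here is a minimal Python probe that simulates a user request on a fixed schedule and records availability and response time. The URL, interval and threshold are placeholders; a real user monitoring counterpart would instead instrument the pages served to actual visitors.

```python
import time
import urllib.request
import urllib.error

# Minimal synthetic (simulated-user) availability and response-time check.
# The URL, interval and threshold are placeholders for illustration.

TARGET_URL = "https://www.example.com/"   # hypothetical endpoint to probe
CHECK_INTERVAL_SECONDS = 60
SLOW_THRESHOLD_SECONDS = 2.0

def run_check(url):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            elapsed = time.monotonic() - start
            status = resp.status
    except urllib.error.URLError as exc:
        print(f"DOWN  {url}  error={exc}")
        return
    verdict = "SLOW" if elapsed > SLOW_THRESHOLD_SECONDS else "OK"
    print(f"{verdict}  {url}  status={status}  {elapsed:.2f}s")

if __name__ == "__main__":
    while True:
        run_check(TARGET_URL)
        time.sleep(CHECK_INTERVAL_SECONDS)
```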

3. REALIZE THAT MONITORING IS NOT ONLY FOR LIVE/PRODUCTION ENVIRONMENTS

2013 was the year in which the word DevOps was etched into our brains in the IT world. Every organization was talking about it and at times it sounded like some mythical savior that could cure all IT suffering.

I understand why DevOps is important and for me it’s an important evolution for enterprise IT. At its core, DevOps is about the need for continuous, rapid delivery of modern applications. This means that development, test and pre-production environments are becoming more fluid and you should be utilizing monitoring solutions earlier in the application development lifecycle. This will help with identifying configuration issues before they hit a live environment and more importantly, before they are experienced and in some cases exploited by your revenue generating customers.

Thanks to APM Digest for the article.

StressTest™ Performance and Load Testing for Voice Communications Solutions

Contact center and communications solutions offer companies a significant opportunity to control costs and improve customer satisfaction if implemented and maintained appropriately. If the solutions are not implemented and maintained properly, the opportunities become risks and eventually customers are negatively impacted. With StressTest™ performance and load testing, you can validate the performance of your end-to-end contact center and communications solutions to assure the best possible experience for your customers and the best possible efficiencies and savings for your company.

With IQ Services’ StressTest™ performance and load testing services, you go beyond component level testing to ensure the integrated components of the solution perform as expected in the production environment. Because you test from the outside-in or customer perspective, you know what your customers will experience when they interact with your solution and you decide when and how to optimize performance. Instead of receiving anecdotal complaints or unsubstantiated claims from an irritated customer, StressTest™ services give you robust, actionable data and the chance to observe, tune and verify performance before you implement a new system, upgrade an existing system or apply a patch/fix.

StressTest™ performance and load testing services can deliver 12 to 10,000+ simultaneous test calls over the PSTN to exercise your contact center and communications solutions just like real customers. IQ Services’ patented technology and methodologies are used to verify step responses, measure response times and capture other actionable data from each StressTest™ test call so you know how your solution performs under various levels of load.
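To illustrate the general load-testing pattern being described, the sketch below launches many simulated callers concurrently, times each scripted step, and summarizes how response times behave under load. This is a generic Python sketch, not IQ Services' patented technology; the call script, step delays and call count are invented placeholders.

```python
import concurrent.futures
import random
import statistics
import time

# Generic illustration of outside-in load testing: many simulated callers run
# the same scripted interaction concurrently and each step is timed.
# Step names, delays and call counts are placeholders, not a real IVR.

CALL_SCRIPT = ["dial", "hear_greeting", "enter_account", "reach_agent_queue"]
SIMULTANEOUS_CALLS = 50

def simulated_call(call_id):
    """Run one scripted call and return the time taken by each step."""
    timings = {}
    for step in CALL_SCRIPT:
        start = time.monotonic()
        time.sleep(random.uniform(0.05, 0.2))   # stand-in for the real step
        timings[step] = time.monotonic() - start
    return timings

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=SIMULTANEOUS_CALLS) as pool:
        results = list(pool.map(simulated_call, range(SIMULTANEOUS_CALLS)))

    # Summarize per-step response times across all simulated calls.
    for step in CALL_SCRIPT:
        samples = [r[step] for r in results]
        print(f"{step:20s} avg={statistics.mean(samples):.3f}s "
              f"max={max(samples):.3f}s")
```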

Thanks to IQ Services for the article.