Infosim® Global Webinar Day
September 24th, 2015
Why is this app so terribly slow?

How to achieve full
Application Monitoring with StableNet®

Join Matthias Schmid, Director of Project Management with Infosim®, for a Webinar and Live Demo on “How to achieve full Application Monitoring with StableNet®”.

This Webinar will provide insight into:

  • Why you need holistic monitoring for all your company applications
  • How the technologies offered by StableNet® will help you master this challenge

Furthermore, we will provide you with an exclusive insight into how StableNet® was used to achieve full application monitoring for a global company.
But wait – there is more! We are giving away three Amazon Gift Cards (value $50) on this Global Webinar Day. To join the draw, simply answer the trivia question that will be part of the questionnaire at the end of the Webinar. Good luck!

Register today to reserve your seat in the desired time zone:
AMERICAS, Thursday, September 24th, 3:00 pm – 3:30 pm EDT (GMT-4)
EUROPE, Thursday, September 24th, 3:00 pm – 3:30 pm CEST (GMT+2)
APAC, Thursday, September 24th, 3:00 pm – 3:30 pm SGT (GMT+8)

A recording of this Webinar will be available to all who register!
(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

Viavi Solutions Launches GigaStor Software Edition for Virtual and Cloud Environments

Solution Delivers Fast and Accurate Troubleshooting and Assurance in Next Generation Network Architecture

Viavi Solutions Inc. (NASDAQ: VIAV) (“Viavi”) today announced it is expanding its portfolio of software-defined network test and monitoring solutions with the new GigaStor Software Edition to manage performance and user experience in virtual and cloud environments. The new software configurations, which Viavi is demonstrating at VMworld, allow network and server teams to capture and save 250 GB or 1 TB of continuous traffic to disk for in-depth performance and forensic analysis.

“IT teams are wasting a lot of time by only tracking virtual server and resource health,” said Charles Thompson, senior director of product management, Viavi Solutions. “These teams can often miss problems associated with applications within the hypervisor with such narrow vision. With GigaStor Software, engineers now have the ability to see in real time and historically how users are experiencing applications and services within the virtual environment, saving time and end-user heartache.”

Without GigaStor’s insight, engineers could spend hours replicating a network error before they can diagnose its cause. GigaStor Software captures packet data from within the virtual switching infrastructure without needing to push data into the physical environment. It can be deployed in any virtual host for the long-term collection and saving of packet-level data, which it can decode, analyze, and display. Additionally, it gives IT teams greater accuracy and speed in troubleshooting by making all packets available for immediate analysis.

Utilizing the GigaStor Software and appliances, network teams can monitor and analyze all virtual datacenter traffic whether within a VMware ESX host or on 10 and 40 Gigabit Ethernet links. GigaStor Software is available today for purchase, and is being demonstrated during VMworld in San Francisco at Viavi Solutions booth #2235.

Thanks to Viavi for the article.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

When it comes to IT services, business value and user satisfaction are both dependent upon the server, network, and applications all working together seamlessly.

Failure to adequately monitor each of these and their interactions means that you could be flying blind – susceptible to degraded service levels.

While application and network monitoring receive a lot of the attention, it is important to also understand what’s going on with the server.

Virtualization changes the face of service delivery

The environment in which modern services run is complex. Superficially, it appears as though we’ve traveled back to the 1960s, with data centers again appearing like big monolithic constructs (whether cloud or internally hosted) with highly-virtualized server farms connecting through large core networks.

The emergence of virtualized clients (with most computing done remotely) takes the analogy a step further and makes it feel as if we are on the set of “Mad Men” with the old dumb terminals connected to the mainframe.

But that may be where the analogy ends. Today’s IT service delivery is almost never performed in a homogeneous vendor setting—from a hardware or software perspective. Likewise, the diversity of complex multi-tier applications and methods by which they are accessed continues to proliferate.

To learn more, download the white paper.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

Thanks to Network Instruments for the article.

The State of the Network 2015

Speedier network pipes that carry more information faster than ever before can be a windfall for enterprises looking to derive more business value from their underlying infrastructure. But there’s a nefarious lining in there – when things speed up, they can get harder to manage.

Packets carrying crucial application information and content whizzing by in a blur can pose real visibility challenges IT pros haven’t encountered before. Network and application performance management is undergoing sweeping changes thanks not only to faster networks but also to migration to the cloud. And the implications for application and performance management are huge.

For the Network Instruments State of the Network Global Study 2015, we recently surveyed more than 300 CIOs, IT directors, and network engineers to get their take on how the migration to higher-capacity, faster networks has affected their day-to-day duties monitoring application performance. What we found is that even though it seems like 10 Gb only just went mainstream, the insatiable demand for fatter pipes to push more content-rich services is already outstripping its ability to deliver. Nearly 1 in 4 organizations have already deployed 25 or 40 Gb, with upwards of 40% planning to do so in 2015.

More interesting was the fact that 60 percent had no plans to consider these speeds. We interpret this as a clear indicator that 25 and 40 Gb are, at best, short-term solutions to the ever-increasing demand for more bandwidth, with 100 Gb being the end game (until 400 Gb arrives, at least!). Results certainly suggest this, with 44 percent planning to deploy 100 Gb by 2016.

Network teams are in a bind: Do they get by with their existing 10 Gb infrastructure, maximizing their current investments while waiting for 100 Gb price points to become more cost effective? If not, are 25 or 40 Gb a viable option that will serve as a stop-gap solution? It’s a difficult choice that each organization will need to consider carefully as they develop their network requirements for the next 5 to 10 years. It’s amazing to think that 10 Gb, having only reached market dominance in the data center core in the past few years, will likely be completely displaced in the largest, most demanding core environments in the next 2 to 3 years.

Of course, there are other technologies that are simultaneously maturing which must also be assessed in parallel. Ongoing cloud growth is now clearly a given, with nearly 75 percent expecting to deploy private cloud and more than half public cloud by 2016. This will certainly complicate the process of quantifying network bandwidth (along with latency) needs to ensure services continue to satisfy users’ expectations wherever they may reside.

Likewise, the abstraction of all things related to IT infrastructure continues, with software-defined networking (SDN) rollouts expected to reach 50 percent by 2016. This too is an impressive number and speaks to the urgency of organizations as they drive to simplify network management, enable more scalability, improve agility, and reduce dependency on a single vendor.

Network Instruments State of the Network 2015

Gigantic Implications for Performance Management

All these trends have gigantic implications for performance management. How will the tools needed to validate service delivery keep up with the deluge of packets? Since packets don’t lie, having at least the option of analyzing and capturing all the traffic traversing the network means vendors’ performance management solutions will need to continue offering their customers high-speed capture and long-term storage of this critical data.

From a cloud perspective, how will effective application visibility be maintained when hosting is done outside the confines of the traditional data center? Network teams are seeking ways of achieving this goal. Server virtualization – now nearly a given, with almost 90 percent of respondents planning to virtualize by 2016 – was yesterday’s abstraction challenge. SDN will throw down a new gauntlet to maintaining monitoring visibility as the network is virtualized. Again, those responsible for network and infrastructure performance need assistance here.

So What Can Be Done? Below are several best practices for navigating this new landscape.

  • New ways of analyzing (including multivariate analytics and correlation), displaying, and reporting on infrastructure, network, and service health will need to be developed. Innovative instrumentation methods that can be deployed remotely, wherever services currently run, must also be made available.
  • Maintaining visibility in SDN environments at the control and data planes will need to be addressed. Underlying infrastructure concerns don’t go away with virtualization and in fact grow as increasing loads are placed on supporting hardware; monitoring solutions must provide this visibility as well.
  • Automating this activity as much as possible will enable faster troubleshooting, while concepts like RESTful APIs will enable tighter cross-platform solution integration and facilitate IT functional collaboration. These initiatives will ease the burden on network teams, shorten time-to-resolution, and ensure optimal service delivery. Just in time, too, since the SOTN findings also show the same groups responsible for these duties must spend increasing amounts of time addressing security threats: almost 70% of network teams already spend up to 10 hours per week on them, and another 26% spend even more.

These are exciting but challenging times for IT performance management. Emerging technologies offer great promise for enhanced future service delivery capabilities. Likewise, the challenges are considerable: maintaining operational visibility so problems are quickly resolved, achieving optimal service performance, and increasing the ability to integrate across IT functional groups and solutions.

Thanks to APM Digest for the article.

Troubleshooting Cheat Sheet: Layers 1-3

Any time you encounter a user complaint, whether regarding slow Internet access, application errors, or other issues that impact productivity, it is important to begin with a thorough understanding of the user’s experience.

Not sure where to begin? Mike Motta, NI University instructor and troubleshooting expert, places the typical user complaints into three categories: slow network, inability to access network resources, and application-specific issues.

Based upon the complaint, Motta asks questions to better understand the symptoms and to isolate the issue to the correct layer of the Open Systems Interconnection (OSI) model.

The following Troubleshooting Cheat Sheet shows the questions to ask with a typical slow network complaint, along with what each answer tells you; a scripted sketch of two of these checks follows the list.

  • What type of application is being used? Is it web-based? Is it commercial, or a homegrown application? – Determines whether the person is accessing local or external resources.
  • How long does it take the user to copy a file from the desktop to the mapped network drive and back? – Verifies they can send data across the network to a server, and allows you to evaluate the speed and response of the DNS server.
  • How long does it take to ping the server of interest? – Validates they can ping the server, and gives you the response time.
  • If the time is slow for a local server, how many hops are needed to reach the server? – Confirms the number of hops taking place. Look at switch and server port connections, speed to the client, and any errors.
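The checks in this list lend themselves to scripting. Below is a minimal sketch (not from the article), assuming a Unix-like host with ping and traceroute installed; the server name is a placeholder.

```python
# Automates two cheat-sheet checks: time a ping to the server of
# interest and count the hops needed to reach it.
import subprocess
import time

SERVER = "fileserver.example.com"  # hypothetical server of interest

def timed_ping(host: str, count: int = 4) -> float:
    """Ping the host and return the elapsed wall-clock seconds."""
    start = time.monotonic()
    subprocess.run(["ping", "-c", str(count), host],
                   check=True, capture_output=True)
    return time.monotonic() - start

def hop_count(host: str) -> int:
    """Run traceroute and count the hop lines it reports."""
    out = subprocess.run(["traceroute", host], check=True,
                         capture_output=True, text=True).stdout
    # traceroute prints one header line, then one line per hop.
    return len(out.splitlines()) - 1

if __name__ == "__main__":
    print(f"4 pings took {timed_ping(SERVER):.2f}s")
    print(f"hops to {SERVER}: {hop_count(SERVER)}")
```

If the ping is slow and the hop count is higher than expected, that points you toward the switch and server port connections described in the last item above.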

For the full Troubleshooting Cheat Sheet with extended symptom and question lists and expanded troubleshooting guidance, download Troubleshooting OSI Layers 1-3.

Quick OSI Layer Review

With these questions answered, working through the OSI model is a straightforward process. When dealing with the different layers, understanding how each layer delivers data and functions will shape how you troubleshoot it.

Physical Layer

  • If it can blind or shock you, think Physical Layer
  • Defines physical characteristics of cables and connectors
  • Provides the interface between network and network devices
  • Describes the electrical, light, or radio data stream signaling

Data Link Layer

  • Converts signals into bits which become the packet data that everyone wants
  • Performs error detection and correction of the data streams
  • Manages flow and link control between the physical signaling and network
  • Constructs and synchronizes data frame packets

Network Layer

  • Controls logical addressing, routing, and packet generation
  • Carries out congestion control and error handling
  • Performs route monitoring and message forwarding

Assessing the Physical Layer

Generally speaking, Physical Layer symptoms can be classified into two groups: outage issues and performance issues. In most cases, investigating outage issues is the easiest place to begin, as it’s a matter of confirming that a link light is out or that a box is not functioning. Likewise, validating equipment failure is a matter of replacing the cable or switch and confirming everything works.

“I can’t tell you how many Physical Layer issues are overlooked by people pinging or looking at NetFlow for the problem, when in reality it’s a Layer 1 issue caused by a cable, jack, or connector,” says Tony Fortunato, Senior Network Performance Specialist and Instructor with the Technology Firm.

The next step in investigating Physical Layer issues is delving into performance problems. This means not just dealing with more complex issues, but also having the correct tools to diagnose degraded performance. Fortunato also urges teams to have essential testing equipment on hand. “Additionally, in your tool box for physical issues, be sure to have a cable tester for cabling problems. For other performance issues, use a network analyzer or SNMP poller,” he says.

Assessing Physical Performance Errors

In diagnosing performance issues from a network analyzer, you’ll notice that there are patterns common with these errors, which are usually indicative of what’s causing the Physical Layer problem. These can be divided into intelligent and non-intelligent errors.

Intelligent Errors: An intelligent host is smashing into your network signal and corrupting the data.

Example: Overloaded WiFi network or a busy channel.

Non-Intelligent Errors: An outside entity causing noise that interferes with the signal or flow of data across the network.

Example: A microwave interfering with a WiFi signal.

Climbing Further up the Stack

Confirming performance problems, taking a systematic approach to troubleshooting, and understanding how communication occurs across the layers of the OSI model are key to slashing troubleshooting times and improving resolution accuracy.

Ready for the next step? Download Troubleshooting OSI Layers 1-3 for an in-depth view of troubleshooting strategies and Cheat Sheets for navigating through the first three layers of the OSI model.

Thanks to Network Instruments for the article. 

Network Managers Lead the Charge with Observer Platform 17

The following blog post is by Steve Brown, the director of product marketing for Network Instruments, a business unit of JDSU.

The management of IT resources has changed significantly over the past several years. With the implementation of cloud, unified communications (UC), and Big Data initiatives, locating the source of application or service delivery issues has become increasingly complex. As a result, the network team often functions as a first responder, ensuring the continued and uninterrupted delivery of critical services.

In our Seventh Annual State of the Network survey earlier this year, we discovered that nearly three-quarters of IT managers cited their top application troubleshooting challenge as determining the root cause of performance problems. These problems take too much time to isolate and repair, and this downtime has a real impact on the bottom line. According to research firm Aberdeen Group, every hour of downtime costs an organization $163,674.

To effectively turn this tide, IT teams need comprehensive data at their fingertips that brings together all operational views of the network, from systems to applications, in a single view. This is something at which the Observer Performance Management Platform excels.

The Observer Performance Management Platform Version 17 delivers high-level, real-time performance views, applies advanced application analytics, reliably captures every packet, and polls every infrastructure resource for accurate resolution. It helps network teams lead the charge in ensuring service availability by:

  • Facilitating closer coordination between the IT teams
  • Providing greater web application insight

Improve IT Coordination

The latest release of the Observer Platform simplifies the process of sharing critical performance insight with other IT teams. Through new user interfaces and RESTful APIs, this powerful solution streamlines the creation and sharing of dashboards and reports while integrating this insight into third-party tools and workflows. The end result is that problems will be fixed sooner and IT is better equipped to maintain peak service performance.
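As an illustration of the kind of integration this enables, here is a hedged sketch of pulling a report over a RESTful API with Python. The host, endpoint path, and token are hypothetical; the Observer Platform’s actual API routes and authentication scheme are documented by the vendor.

```python
# Hypothetical example: fetch a performance report as JSON so it can be
# handed to a third-party dashboard or ticketing workflow.
import requests

BASE_URL = "https://observer.example.com/api"  # placeholder host
TOKEN = "REPLACE_ME"                           # placeholder credential

resp = requests.get(
    f"{BASE_URL}/reports/service-health",      # hypothetical endpoint
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # pass the payload to another IT team's tooling
```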

JDSU Network Instruments Observer 17 Platform

Expanded Web Application Insight

Since web-based applications have become the most common way for users to gain access to a company’s online resources, the need for detailed operational information into these services continues to grow. The Observer Platform meets this need by providing IT teams fine-grained metrics into end-user access methods such as browser type and platform, alongside status and resource usage as it relates to web applications. This provides network and application managers the detail they need to quantify user access behavior and experience to solve problems.

JDSU Network Instruments Observer 17 Platform

As the network manager is increasingly relied upon to be the first responder, the Observer Platform helps network teams lead the charge to keep applications working smoothly. The new user interface, streamlined workflows, and transaction-level capabilities of the latest release provide the integrated intelligence that lets IT and network teams collaborate successfully in the delivery of services. Learn more about the Observer Platform 17 release.

Thanks to JDSU for the article. 

Observer Infrastructure: Adding The Device Performance Perspective

In April 2010, Network Instruments® announced the availability of Observer Infrastructure (OI), an integral element of their Observer® performance management solution, which complements their primary packet-oriented network and application performance monitoring products with an infrastructure viewpoint. An evolutionary replacement for Network Instruments Link Analyst™, OI has been significantly expanded and includes network infrastructure device health monitoring, IP SLA active testing, NBAR and WAAS data collection, and server monitoring. These new performance data sources are tightly integrated into a common console and reporting engine, Observer Reporting Server, providing quick and efficient workflows for navigating between packet and polling realms.

Issues

As network technologies have matured and architectural redundancies have improved availability, the focus of networking professionals has turned toward performance and optimization. Along with that shift comes a change in the types of issues demanding attention (away from failures and towards degradations) plus a change in scope (away from network device specifics and towards application and service awareness). Network performance management is the discipline of planning, monitoring, and managing networks to assure that they are delivering the applications and services which customers and end users consume and which underpin business processes. A high-performing network, managed in a way which is business-aware, can become a strategic asset to businesses of all sizes and purposes, and hence operations must also move from reactive firefighting of performance issues towards proactive prevention via methodical and vigilant monitoring and analysis.

Network performance management has been an active area of practice for decades. Initial efforts were focused primarily on the health and activity of each individual network device, mostly using SNMP MIBs, both proprietary and standardized, collected by periodic polling. This approach was supplemented by now-obsolete standards such as RMON for providing views into traffic statistics on an application-by-application basis. Today, additional techniques have been established for measuring various aspects of network performance and are in broad use:

  • Synthetic agents provide samples and tests of network throughput and efficiency
  • Direct-attached probes inspect packets to track and illuminate performance across the entire stack
  • Flow records issued by network infrastructure devices record traffic details

So which one is the best? And which ones are important in order to achieve best practices for business-aware, proactive performance management? In the end, no single method meets all the needs. The best approach is to integrate multiple techniques and viewpoints into a common operational monitoring and management platform.

Building the Complete Performance Management Picture

Making the transition from reactive to proactive and from tactical to strategic in the long term requires the assembly of a complete performance management picture. And as with any journey, there are options for where to start and how to get there. Most practitioners start with a focus on the network infrastructure devices by adding basic health monitoring to their fault/event/alert regime, but find it insufficient for troubleshooting. Others will deploy packet monitoring, which provides application awareness together with network-layer details and definitive troubleshooting, but find that collecting packets everywhere is difficult to achieve. Still others will look to NetFlow to give them insights into network activity, or perhaps deploy synthetic agents to give them the 24×7 coverage for assuring critical applications or services, but these approaches have their shortcomings as well.

Where you start may not be as important as where you end up. Each measurement technique has something important to add to the operations picture:

  • SNMP and/or WMI polling gives you details about the specific health and activity within an individual network device or node – important for device-specific troubleshooting and for capacity planning (a minimal polling sketch follows this list).
  • SNMP can also be used to gather specific flow-oriented performance metrics from devices that offer application recognition for optimization and security, such as Cisco’s WAAS (Wide Area Application Services) solution and NBAR (Network-Based Application Recognition) features.
  • Agent-based active or synthetic testing, such as the IP SLA features resident in Cisco network devices, enables regular/systematic assessment of network responsiveness as well as application performance and VoIP quality.
  • Packet inspection, either real-time or historical/forensic, is the ultimate source of detail and truth, revealing traffic volumes and quality of delivery across all layers of the delivery stack, and is indispensable for sorting out the really tough/subtle/intermittent degradation issues.
  • NetFlow (or other similar flow record formats) provides application activity data where direct packet instrumentation is not available/practical.
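To make the first technique concrete, here is a minimal SNMP polling sketch using the open-source pysnmp library (an illustration, not part of the Observer product); the device address and community string are placeholders.

```python
# Poll a single health metric (sysUpTime) from a network device.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),      # SNMPv2c community
           UdpTransportTarget(('192.0.2.1', 161)),  # placeholder device
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysUpTime', 0)))
)

if errorIndication:
    print(errorIndication)
else:
    for varBind in varBinds:
        print(' = '.join(x.prettyPrint() for x in varBind))
```

Run on a schedule against every managed device, with thresholds applied to the returned values, this is the essence of device health monitoring; the other techniques in the list add the traffic- and transaction-level detail that polling alone cannot see.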

Ultimately, the better integrated these data sources and types, the more powerful the solution. Integration must take place at multiple levels as well. At the presentation/analysis layer, bringing together multiple types of performance data improves visibility in terms of both breadth and depth. At the data/model layer, integration allows efficient workflows, and more intelligent (and even automated) analysis by revealing trends, dependencies, and patterns of indicators that must otherwise be reconciled manually.

Network Instruments Observer Infrastructure

Network Instruments has introduced Observer Infrastructure (OI) to extend their network and application performance management solution by tightly integrating device-based performance data at both the data and presentation layers. OI adds the infrastructure performance perspective to the packet-based and NetFlow-based capabilities of the existing Observer Solution. It also contributes IP SLA support as well as support for other complementary SNMP-gathered data sets, such as Cisco’s NBAR and WAAS features. OI goes further, delivering visibility into virtualized servers via WMI, at both the virtual machine and hypervisor levels. Another new capability is OI’s support for distributed polling and collection, allowing the infrastructure perspective to be applied across large, distributed managed environments.

Key implications of the newly enhanced Observer Infrastructure solution include:

  • Faster, more effective troubleshooting via complementary viewpoints of performance within enabling or connected infrastructure elements.
  • Better planning capabilities, allowing engineers to match capacity trends with specific details of underlying traffic contributors and drivers.
  • More proactive stance, by leveraging synthetic IP SLA tests to assess delivery quality and integrity on a sustained basis.
  • Improved scalability of the total Observer Solution, via the newly distributed OI architecture

EMA Perspective

ENTERPRISE MANAGEMENT ASSOCIATES® (EMA™) analysts strongly advocate the use of a broad and balanced approach to network and application performance management, drawing from the unique and valuable contributions of several types of performance monitoring instrumentation strategies. In parallel, data from such hybrid monitoring architectures must be integrated into presentation and workflows in a way that facilitates operational efficiency and effectiveness and paves the way to proactive practices.

Network Instruments has a well-established footprint in packet-based solutions and substantial experience with infrastructure monitoring. The newly released Observer Infrastructure is an evolutionary product based on stable, long-fielded technologies which has been expanded in functionality and also has been more tightly integrated with the rest of the Network Instruments Observer solution, including presentation through Observer Reporting Server and workflow links to GigaStor™. The result is a powerful expansion to the Observer solution, both for existing customers as well as any network engineering and operations team looking for a comprehensive, holistic approach to network and application performance management.

Thanks to Network Instruments for the article.

Application Monitoring Is Not Application Performance Monitoring (APM)

There is a common issue I deal with when speaking to end users trying to monitor applications. The confusion is partially created by vendors who would like to position themselves in the hot APM market, yet clearly don’t enable performance monitoring. These vendors are slowly starting to correct the messaging, but many have a poor understanding of the market and continue to confuse buyers.

There are two types of monitoring technologies: one is availability monitoring and the other performance monitoring. Before embarking on a performance monitoring journey (this applies to both application performance monitoring and network performance monitoring), there should be a good foundation of availability monitoring. Availability monitoring is the easier of the two: it’s inexpensive, effective in catching major issues, and should be the staple of any monitoring initiative. We recommend unified monitoring tools (see post: http://blogs.gartner.com/jonah-kowall/2013/11/12/unified-monitoring-note-presentation-and-client-interest/) to handle availability monitoring across technologies with a single offering.

Server monitoring tools do more than monitor the server and OS components; they also collect data from the instances of applications running on those OS instances. The data collected includes metrics and often log data that surfaces major issues in application availability or health. This is often what people are looking for, and many vendors call these requirements “APM”, but that’s incorrect. We call this server monitoring and/or application instance monitoring. These are availability tools, not performance monitoring tools.

APM tools differ from server monitoring tools in multiple ways. APM tools live within the application and provide end-user experience data from the user through the distributed application components. They are able to monitor and trace transactions across tiers of the application. Other tools that monitor application performance can reside on the network; while these lack the same level of granularity when tracing transactions and getting at application internals, they can certainly detect performance deviations of application components, and they can often handle additional application technologies.
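A toy sketch (mine, not Gartner’s) of the distinction: an availability check answers “is the application instance up?”, while a performance measurement times an actual user transaction. Real APM agents trace transactions across tiers from inside the application; this example only times the edge. The URL is a placeholder.

```python
import time
import requests

URL = "https://app.example.com/login"  # hypothetical application endpoint

def availability_check(url: str) -> bool:
    """Server/instance-style monitoring: up or down."""
    try:
        return requests.head(url, timeout=5).status_code < 500
    except requests.RequestException:
        return False

def transaction_seconds(url: str) -> float:
    """Performance-style measurement: how long one transaction takes."""
    start = time.monotonic()
    requests.get(url, timeout=30)
    return time.monotonic() - start

print("up:", availability_check(URL))
print("response time: %.3fs" % transaction_seconds(URL))
```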

Hopefully this helps clear things up. Please reply here or contact me on Twitter @jkowall.

Thanks to Gartner for the article.

4 Reasons To Use APM Tools

The following are four reasons why you should utilize Application Performance Management (APM) tools:

1. INCREASED REVENUE AND PROFITABILITY

APM products ensure that bottom-line costs are controlled through monitoring and discovery. APM technology also allows businesses to maximize efficiency by reducing downtime and increasing productivity per employee, which translates into retaining customers, better quality of service, and improved brand positioning among potential customers.

Ultimately, APM helps CIOs, CTOs, CMOs and CSOs deliver revenue growth and build the reputation of their business. APM is therefore a strategic technological driver that helps deliver profitability.

2. REDUCED DOWNTIME

Real-time monitoring is essential: according to Forrester, 67.9% of CTOs rapidly respond to the root cause of application degradation to ensure a rapid reduction in MTTR.

3. GREAT CUSTOMER SERVICE AND BRAND REPUTATION

APM products ensure that customers have an excellent user experience through reduced MTTR, and quick identification of application downtime. 51.5% of management obtain performance indicator metrics and granular data to detect and eliminate impending problems. This ensures delivery of excellent customer service which translates into better word of mouth through traditional and social media.

4. REDUCED OPERATIONAL COSTS

The ability to monitor, detect, diagnose, and resolve application degradation issues proactively should be the primary goal for CTOs and CIOs. This means breaking down silos, getting rid of war-room scenarios, proactively decreasing MTTD, involving key technical and cross-departmental teams in solving problems, and sharing information and visibility.

When internal IT services develop an inter-departmental customer philosophy and apply it with passion, only then can 30% of random and unpredictable IT events be tackled efficiently, thereby reducing the required manpower and opex.

Thanks to APM Digest for the article.

Network Monitoring Tools Overview

With the rapid increase in network complexity – as well as customer dependence on enterprise network applications and services – it’s more and more important that operators understand what, how, and when traffic is traversing the network. Network monitoring is the increasingly powerful connected set of tools used in controlling, maintaining, and optimizing networks.

A network management system (NMS) is a set of devices and applications that allow a network operator to supervise the individual components of a network.

Network management system components assist with:

  • Device discovery – identifying what devices are present on a network.
  • Device monitoring – monitoring specific devices to determine their health and performance capacity.
  • Performance analysis – tracking indicators like bandwidth utilization, packet loss, latency, availability, and uptime (a worked utilization example follows this list).
  • Intelligent notifications – collecting or sending alerts that respond to set network scenarios.
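As a worked example of the performance-analysis item above (an illustration, not from the article), link utilization can be derived from two samples of an interface’s octet counter:

```python
def utilization_pct(octets_t0: int, octets_t1: int,
                    interval_s: float, link_bps: float) -> float:
    """Percent link utilization between two counter samples."""
    bits = (octets_t1 - octets_t0) * 8        # octets -> bits
    return 100.0 * bits / (interval_s * link_bps)

# 75 MB transferred in 60 s on a 1 Gb/s link is about 1% utilization.
print(utilization_pct(0, 75_000_000, 60.0, 1_000_000_000))  # -> 1.0
```

In production you would poll the 64-bit ifHCInOctets counter where available, to avoid counter wrap on fast links.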

Network monitoring is a cornerstone of network management, especially when troubleshooting and understanding performance of network applications and services. Network monitoring allows an operator to harness the rich information available via direct inspection and analysis of network packets.

So what is needed to integrate effective monitoring with a network management system (NMS)? The first recommendation is to add a network monitoring switch, also called a Network Packet Broker (NPB), to optimize the interconnection and flow of data to the monitoring tools that will be used. The second recommendation is to determine the type(s) of monitoring tools required. A variety of monitoring tools and technologies exist which leverage this approach, spanning network management, application performance management, as well as security management.

In general, the most prevalent tools used are:

  • Packet Analyzers. A packet analyzer (also known as a network analyzer, protocol analyzer, packet sniffer, Ethernet sniffer, or wireless sniffer) is a device or application that intercepts and logs traffic passing through a network. The sniffer captures traffic, exposes the values of the packet fields, and then analyzes the content according to the appropriate RFC or other specifications (a minimal sniffer sketch appears after this list).
  • Intrusion Detection / Prevention. An intrusion detection system (IDS) is a device or application that monitors the network for suspicious traffic or activity. Intrusion detection and prevention systems (IDPS) focus on identifying incidents, recording them, and reporting the detection. In addition, organizations use IDPSes to identify security policy issues, document existing threats, and deter policy violations.
  • Data Loss Prevention. A data loss/leak prevention system is designed to detect and halt potential data breaches by monitoring, detecting, and blocking sensitive data while in use (endpoint actions), in motion (network traffic), and at rest (data storage). During such incidents, sensitive data is captured by unauthorized personnel, usually through covert incursions. Examples of sensitive data include private or company information, intellectual property (IP), financial or patient information, credit-card data, and other types depending on the market and industry.
  • Application Performance Management. Application performance management (APM) is the oversight of network application performance and availability. APM detects and diagnoses application performance problems in order to maintain the user’s expected level of service.

Two sets of performance metrics are closely monitored: the end user application performance experience and the stress placed on the network while running the application.

An example of the first metric would be the average response time under peak network load, where “response time” is the time required for an application to respond to user actions. Without load, most applications are fast enough, which is why programmers may not catch performance problems during development.

The second metric establishes whether there is adequate capacity to support the application load, while determining where there may be performance bottlenecks. Measurement of these quantities establishes an empirical performance baseline for the application.

  • Data Recorder. A data logger (also called a data recorder) records data over time. Data loggers are usually small, battery-powered, and equipped with a microprocessor and internal memory for data storage. Some interface with a personal computer and use software to activate the logger and to view and analyze the collected data, while others have a local interface device (keypad, LCD) and can be used as stand-alone devices.
  • Compliance. Compliance monitoring verifies that whatever regulations and/or limitations that are in force from some governing agency (such as government institutions) are being followed and maintained.
  • VoIP / Unified Communications / Video analyzers. Multi-media analyzers assess the quality of real-time communication services like instant messaging, video conferencing, data sharing, VoIP, video playback, and speech recognition, along with non-real-time communication services such as unified messaging.
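For a concrete taste of the packet-analyzer category described above, here is a minimal sniffer sketch using the open-source Scapy library (an illustration; the article endorses no specific tool). Capturing packets typically requires root privileges.

```python
from scapy.all import sniff

def show(pkt):
    # Print a one-line summary of each captured packet's decoded fields.
    print(pkt.summary())

# Capture 20 packets from the default interface and decode them.
sniff(count=20, prn=show)
```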

So what are the current trends with these common network monitoring tools? Recent studies have shown, for the most part, a solid and steady increase in the deployment of network management technologies over the last few years.

Ixia Anue Net Optics Network Monitoring Solution tool use

(Note: all charts were taken from EMA’s Network Management 2012: Megatrends in Technology, Organization, and Process)

Over the years, deployment levels have risen slowly amongst the most common tools. IDS/IPS deployments remain virtually identical, but data loss prevention is a new tool that has gained widespread use. Meanwhile, data recorders, compliance monitors, and voice/video analyzers have all seen significant gains in adoption.

In conjunction with the strategy regarding the types of network management tools, the choice of what types of data are being reviewed and monitored showed interesting results.

Ixia Anue Net Optics Network Monitoring Solution stats

Overall, fewer data sources are being collected than previously. Back in 2008, participants selected an average of seven different types of sources. In more recent studies, the average dropped to just 5.5 sources – indicating that organizations are becoming more selective about which data sources they find most valuable. Or, conversely, enterprises may be increasingly overwhelmed with an abundance of data sources and, out of sheer survival, are only focusing on the sources that they deem most important.

Another aspect of network management on the rise is automation. The automation of network management tools and technologies offers increased efficiency and fewer errors from manual tasks, and it is a hedge against the rising complexity that comes with growth, virtualization, and cloud services. This is reflected in the steady increase and expansion of applied automation in selective tasks and workflows. The figure below shows the distribution of automation over various network management functions.

Ixia Anue Net Optics Network Monitoring Solution priorities

Finally, it is relevant to see what is driving the increase in network management tool use. What are the enterprise plans, needs, and goals that are behind the use of specific network management technologies? In a recent survey, almost 50% of respondents indicated plans to invest in more network management resources.

Generally speaking, the need for improved and more comprehensive network management resources is not only growing, but exponentially expanding due to the increased complexity of networks. This complexity is driven by user demands and service expectations. The trends are only projected to continue to rise, and the investment in network management technologies will experience a related increase.

For more information on Ixia network monitoring and visibility solutions, see our website.

Additional Resources:

Ixia Monitoring Solutions

Thanks to Ixia for the article.