Infosim® Global Webinar Day
September 24th, 2015
Why is this app so terribly slow?

How to achieve full Application Monitoring with StableNet®

Join Matthias Schmid, Director of Project Management with Infosim®, for a Webinar and Live Demo on “How to achieve full Application Monitoring with StableNet®”.

This Webinar will provide insight into:

  • Why you need holistic monitoring for all your company applications
  • How the technologies offered by StableNet® will help you master this challenge

Furthermore, we will provide you with an exclusive insight into how StableNet® was used to achieve full application monitoring for a global company.
But wait – there is more! We are giving away three Amazon Gift Cards (value $50) on this Global Webinar Day. To join the draw, simply answer the trivia question that will be part of the questionnaire at the end of the Webinar. Good Luck!

Register today to reserve your seat in the desired time zone:
AMERICAS, Thursday, September 24th, 3:00 pm – 3:30 pm EDT (GMT-4)
EUROPE, Thursday, September 24th, 3:00 pm – 3:30 pm CEST (GMT+2)
APAC, Thursday, September 24th, 3:00 pm – 3:30 pm SGT (GMT+8)

A recording of this Webinar will be available to all who register!
(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

Viavi Solutions Launches GigaStor Software Edition for Virtual and Cloud Environments

Solution Delivers Fast and Accurate Troubleshooting and Assurance in Next Generation Network Architecture

Viavi Solutions Inc. (NASDAQ: VIAV) (“Viavi”) today announced it is expanding its portfolio of software-defined network test and monitoring solutions with the new GigaStor Software Edition to manage performance and user experience in virtual and cloud environments. The new software configurations, which Viavi is demonstrating at VMworld, allow network and server teams to capture and save 250 GB or 1 TB of continuous traffic to disk for in-depth performance and forensic analysis.

“IT teams are wasting a lot of time by only tracking virtual server and resource health,” said Charles Thompson, senior director of product management, Viavi Solutions. “With such narrow vision, these teams can often miss problems associated with applications within the hypervisor. With GigaStor Software, engineers now have the ability to see, in real time and historically, how users are experiencing applications and services within the virtual environment, saving time and end-user heartache.”

Without GigaStor’s insight, engineers could spend hours replicating a network error before they can diagnose its cause. GigaStor Software captures packet data from within the virtual switching infrastructure without needing to push that data into the physical environment. It can be deployed in any virtual host for long-term collection and storage of packet-level data, which it can decode, analyze, and display. Additionally, it gives IT teams greater accuracy and speed in troubleshooting by making all packets available for immediate analysis.

Utilizing GigaStor Software and appliances, network teams can monitor and analyze all virtual data center traffic, whether within a VMware ESX host or on 10 and 40 Gigabit Ethernet links. GigaStor Software is available for purchase today and is being demonstrated during VMworld in San Francisco at Viavi Solutions booth #2235.

Thanks to Viavi for the article.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

When it comes to IT services, business value and user satisfaction are both dependent upon the server, network, and applications all working together seamlessly.

Failure to adequately monitor each of these and their interactions means that you could be flying blind – susceptible to degraded service levels.

While application and network monitoring receive a lot of the attention, it is important to also understand what’s going on with the server.

Virtualization changes the face of service delivery

The environment in which modern services run is complex. Superficially, it appears as though we’ve traveled back to the 1960s, with data centers again appearing like big monolithic constructs (whether cloud or internally hosted) with highly virtualized server farms connecting through large core networks.

The emergence of virtualized clients (with most computing done remotely) takes the analogy a step further and makes it feel as if we are on the set of “Mad Men” with the old dumb terminals connected to the mainframe.

But that may be where the analogy ends. Today’s IT service delivery is almost never performed in a homogeneous vendor setting—from a hardware or software perspective. Likewise, the diversity of complex multi-tier applications and methods by which they are accessed continues to proliferate.

To learn more, download the white paper.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

Thanks to Network Instruments for the article.

The State of the Network 2015

Speedier network pipes that carry more information faster than ever before can be a windfall for enterprises looking to derive more business value from their underlying infrastructure. But there’s a catch: when things speed up, they can get harder to manage.

Packets carrying crucial application information and content whizzing by in a blur can pose visibility challenges IT pros haven’t encountered before. Network and application performance management is undergoing sweeping changes, thanks not only to faster networks but also to migration to the cloud. And the implications for application and performance management are huge.

For the Network Instruments State of the Network Global Study 2015, we recently surveyed more than 300 CIOs, IT directors, and network engineers to get their take on how the migration to higher-capacity, faster networks has affected their day-to-day duties monitoring application performance. What we found is that even though it seems like 10 Gb only just went mainstream, the insatiable demand for fatter pipes to push more content-rich services is already outstripping its ability to deliver. Nearly 1 in 4 organizations have already deployed 25 or 40 Gb, with upwards of 40% planning to do so in 2015.

More interesting was the fact that 60 percent had no plans to consider these speeds. We interpret this as a clear indicator that 25 and 40 Gb are, at best, short-term solutions for addressing the ever-increasing demand for bandwidth, with 100 Gb being the end game (until 400 Gb arrives, at least!). The results certainly suggest this, with 44 percent planning to deploy 100 Gb by 2016.

Network teams are in a bind: Do they get by with their existing 10 Gb infrastructure, maximizing their current investments while waiting for 100 Gb price points to become more cost-effective? If not, are 25 or 40 Gb a viable option that will serve as a stop-gap solution? It’s a difficult choice that each organization will need to consider carefully as it develops its network requirements for the next 5 to 10 years. It’s amazing to think that 10 Gb, having only reached market dominance in the data center core in the past few years, will likely be completely displaced in the largest, most demanding core environments in the next 2 to 3 years.

Of course, there are other technologies that are simultaneously maturing and must be assessed in parallel. Ongoing cloud growth is now clearly a given, with nearly 75 percent of respondents expecting to deploy private clouds and more than half expecting to deploy public clouds by 2016. This will certainly complicate the process of quantifying network bandwidth (and latency) needs to ensure services continue to satisfy users’ expectations wherever those users reside.

Likewise, the abstraction of all things related to IT infrastructure continues, with software-defined networking (SDN) rollouts expected to reach 50 percent by 2016. This too is an impressive number and speaks to the urgency of organizations as they drive to simplify network management, enable more scalability, improve agility, and reduce dependency on a single vendor.

Network Instruments State of the Network 2015

Gigantic Implications for Performance Management

All these trends have gigantic implications for performance management. How will the tools needed to validate service delivery keep up with the deluge of packets? Packets don’t lie, so having at least the option of capturing and analyzing all the traffic traversing the network matters; vendors’ performance management solutions will need to continue offering their customers high-speed capture and long-term storage of this critical data.

From a cloud perspective, how will effective application visibility be maintained when hosting is done outside the confines of the traditional data center? Network teams are seeking ways of achieving this goal. Server virtualization – now nearly a given, with almost 90 percent of respondents stating plans to virtualize by 2016 – was yesterday’s abstraction challenge. SDN will throw down a new gauntlet to maintaining monitoring visibility as the network itself is virtualized. Again, those responsible for network and infrastructure performance need assistance here.

So What Can Be Done? Below are several best practices for navigating this new landscape.

  • New ways of analyzing (including multivariate analytics and correlation), displaying, and reporting on infrastructure, network, and service health will need to be developed. Innovative instrumentation methods that can be deployed remotely, wherever services currently reside, must also be made available.
  • Maintaining visibility in SDN environments at both the control and data planes will need to be addressed. Underlying infrastructure concerns don’t go away with virtualization; in fact they grow as increasing loads are placed on supporting hardware, and monitoring solutions must provide this visibility as well.
  • Automating this activity as much as possible will enable faster troubleshooting, while concepts like RESTful APIs will enable tighter cross-platform solution integration and facilitate collaboration across IT functions (a brief sketch of this kind of integration follows this list). These initiatives will ease the burden on network teams, shorten time-to-resolution, and ensure optimal service delivery. Just in time, too, since the SOTN findings show that the same groups responsible for these duties must also spend increasing amounts of time addressing security threats: almost 70% of network teams are already spending up to 10 hours per week on security, with another 26% spending more than that.
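
As a concrete illustration of the RESTful integration called out above, here is a minimal sketch that pulls utilization metrics from a monitoring platform's REST API and opens a ticket when a threshold is crossed. The endpoint URLs, token, and JSON field names are hypothetical placeholders, not any particular vendor's API.

```python
# Illustrative sketch of RESTful cross-platform integration. The endpoint
# URLs, auth token, and JSON fields are hypothetical, not a real vendor API.
import requests

NPM_API = "https://npm.example.com/api/v1"           # hypothetical monitoring API
TICKET_API = "https://itsm.example.com/api/tickets"  # hypothetical ITSM API
HEADERS = {"Authorization": "Bearer <token>"}

resp = requests.get(f"{NPM_API}/interfaces/utilization",
                    headers=HEADERS, timeout=10)
resp.raise_for_status()

for iface in resp.json():
    if iface["utilization_pct"] > 90:  # threshold is illustrative
        # Hand the finding to another team's tooling automatically.
        requests.post(TICKET_API, headers=HEADERS, timeout=10, json={
            "summary": f"High utilization on {iface['name']}",
            "detail": f"{iface['utilization_pct']}% sustained utilization",
        })
```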

These are exciting but challenging times for IT performance management. Emerging technologies offer great promise for enhanced future service delivery capabilities. Likewise, the challenges are considerable: maintaining operational visibility so problems are quickly resolved, achieving optimal service performance, and increasing the ability to integrate across IT functional groups and solutions.

Thanks to APM Digest for the article.

Troubleshooting Cheat Sheet: Layers 1-3

Any time you encounter a user complaint, whether regarding slow Internet access, application errors, or other issues that impact productivity, it is important to begin with a thorough understanding of the user’s experience.

Not sure where to begin? Mike Motta, NI University instructor and troubleshooting expert, places typical user complaints into three categories: slow network, inability to access network resources, and application-specific issues.

Based upon the complaint, Motta asks questions to better understand the symptoms and to isolate the issue to the correct layer of the Open Systems Interconnection (OSI) model.

The following Troubleshooting Cheat Sheet shows the questions to ask with a typical slow network complaint.

What to Ask / What it Means

  • What type of application is being used? Is it web-based? Is it commercial, or a homegrown application?
    Determines whether the person is accessing local or external resources.
  • How long does it take the user to copy a file from the desktop to the mapped network drive and back?
    Verifies that they can send data across the network to a server, and allows you to evaluate the speed and response of the DNS server.
  • How long does it take to ping the server of interest?
    Validates that they can reach the server and lets you measure its response time.
  • If the time is slow for a local server, how many hops are needed to reach the server?
    Confirms the number of hops taking place. Look at switch and server port connections, speed to the client, and any errors.
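
For teams that want to script these checks, the sketch below (our illustration, not part of the original cheat sheet) automates the ping and hop-count questions using the standard ping and traceroute utilities. The server name is a placeholder, and the output parsing assumes Linux/macOS-style formatting.

```python
# Minimal sketch: automate the ping and hop-count questions above.
# Assumes Linux/macOS ping/traceroute output; the hostname is a placeholder.
import re
import subprocess

def ping_ms(host, count=4):
    """Ping a host and return the average round-trip time in milliseconds."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    match = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max summary line
    return float(match.group(1)) if match else None

def hop_count(host):
    """Count router hops to a host using traceroute."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True).stdout
    return max(len(out.splitlines()) - 1, 0)  # first line is a header

if __name__ == "__main__":
    server = "fileserver.example.local"  # placeholder server of interest
    print(f"avg RTT to {server}: {ping_ms(server)} ms "
          f"over {hop_count(server)} hops")
```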

For the full Troubleshooting Cheat Sheet with extended symptom and question lists and expanded troubleshooting guidance, download Troubleshooting OSI Layers 1-3.

Quick OSI Layer Review

With these questions answered, working through the OSI model is a straightforward process. Understanding how each layer functions and delivers data will shape how you troubleshoot it.

Physical Layer

  • If it can blind or shock you, think Physical Layer
  • Defines physical characteristics of cables and connectors
  • Provides the interface between the network and network devices
  • Describes the electrical, light, or radio data stream signaling

Data Link Layer

  • Converts signals into bits, which become the packet data that everyone wants
  • Performs error detection and correction on the data streams
  • Manages flow and link control between the physical signaling and the network
  • Constructs and synchronizes data frames

Network Layer

  • Controls logical addressing, routing, and packet generation
  • Carries out congestion control and error handling
  • Performs route monitoring and message forwarding
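
To make the layering concrete, the sketch below uses the scapy packet library to assemble a frame one layer at a time, mirroring the Data Link and Network Layer roles listed above. The MAC and IP addresses are documentation-range placeholders.

```python
# Minimal sketch: build a frame layer by layer with scapy to mirror the
# OSI review above. Addresses are documentation-range placeholders.
from scapy.all import Ether, IP, TCP

frame = (
    Ether(src="00:00:5e:00:53:01",
          dst="00:00:5e:00:53:02")   # Data Link: framing and MAC addressing
    / IP(src="192.0.2.10",
         dst="198.51.100.20")        # Network: logical addressing and routing
    / TCP(sport=49152, dport=80)     # Transport rides on top of Layer 3
)
frame.show()  # prints the frame one layer per section
```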

Assessing the Physical Layer

Generally speaking, Physical Layer symptoms can be classified into two groups: outage issues and performance issues. In most cases, investigating outage issues is the easiest place to begin, as it’s a matter of confirming that a link light is out or that a box is not functioning. Likewise, validating equipment failure is a matter of replacing the cable or switch and confirming everything works.

“I can’t tell you how many Physical Layer issues are overlooked by people pinging or looking at NetFlow for the problem, when in reality it’s a Layer 1 issue caused by a cable, jack, or connector,” says Tony Fortunato, Senior Network Performance Specialist and Instructor with the Technology Firm.

The next step in investigating Physical Layer issues is delving into performance problems. This means not just dealing with more complex issues, but also having the correct tools to diagnose degraded performance. Fortunato also urges teams to have essential testing equipment on hand: “Additionally, in your tool box for physical issues, be sure to have a cable tester for cabling problems. For other performance issues, use a network analyzer or SNMP poller,” he says.
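
Alongside a cable tester, a quick SNMP poll of interface error counters will surface many Layer 1 faults before anyone reaches for a packet capture. The sketch below is a minimal illustration using the synchronous pysnmp 4.x high-level API; the device address, community string, and interface index are placeholders.

```python
# Minimal sketch: poll interface error counters over SNMP (pysnmp 4.x hlapi).
# Device address, community string, and ifIndex are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

DEVICE = ("192.0.2.10", 161)  # hypothetical switch
IF_INDEX = 3                  # interface under suspicion

for oid_base in ("1.3.6.1.2.1.2.2.1.14",   # IF-MIB ifInErrors
                 "1.3.6.1.2.1.2.2.1.20"):  # IF-MIB ifOutErrors
    oid = f"{oid_base}.{IF_INDEX}"
    err_ind, err_stat, err_idx, var_binds = next(getCmd(
        SnmpEngine(), CommunityData("public"),
        UdpTransportTarget(DEVICE), ContextData(),
        ObjectType(ObjectIdentity(oid))))
    if err_ind or err_stat:
        print(f"SNMP error for {oid}: {err_ind or err_stat.prettyPrint()}")
    else:
        name, value = var_binds[0]
        print(f"{name} = {value}")  # rising counts suggest a Layer 1 fault
```

A counter that climbs between polls on an otherwise quiet link is a strong hint to inspect the cable, jack, or connector before moving up the stack.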

Assessing Physical Performance Errors

In diagnosing performance issues with a network analyzer, you’ll notice patterns common to these errors, which usually indicate what’s causing the Physical Layer problem. These can be divided into intelligent and non-intelligent errors.

Intelligent Errors: An intelligent host is smashing into your network signal and corrupting the data.

Example: Overloaded WiFi network or a busy channel.

Non-Intelligent Errors: An outside entity causing noise that interferes with the signal or flow of data across the network.

Example: A microwave interfering with a WiFi signal.

Climbing Further up the Stack

Confirming performance problems, taking a systematic approach to troubleshooting, and understanding how communication occurs across the layers of the OSI model are key to slashing troubleshooting times and improving resolution accuracy.

Ready for the next step? Download Troubleshooting OSI Layers 1-3 for an in-depth view of troubleshooting strategies and Cheat Sheets for navigating through the first three layers of the OSI model.

Thanks to Network Instruments for the article. 

Network Managers Lead the Charge with Observer Platform 17

The following blog post is by Steve Brown, the director of product marketing for Network Instruments, a business unit of JDSU.

The management of IT resources has changed significantly over the past several years. With the implementation of cloud, unified communications (UC), and Big Data initiatives, locating the source of application or service delivery issues has become increasingly complex. As a result, the network team often functions as a first responder, ensuring the continued and uninterrupted delivery of critical services.

In our Seventh Annual State of the Network survey earlier this year, we discovered that nearly three-quarters of IT managers cited their top application troubleshooting challenge as determining the root cause of performance problems. These problems take too much time to isolate and repair, and this downtime has a real impact on the bottom line. According to research firm Aberdeen Group, every hour of downtime costs an organization $163,674.

To effectively turn this tide, IT teams need comprehensive data at their fingertips that incorporates all operational views of the network, from systems to applications, in a single view. This is something at which the Observer Performance Management Platform excels.

The Observer Performance Management Platform Version 17 delivers high-level, real-time performance views, applies advanced application analytics, reliably captures every packet, and polls every infrastructure resource for accurate resolution. It helps network teams lead the charge in ensuring service availability by:

  • Facilitating closer coordination between the IT teams
  • Providing greater web application insight

Improve IT Coordination

The latest release of the Observer Platform simplifies the process of sharing critical performance insight with other IT teams. Through new user interfaces and RESTful APIs, this powerful solution streamlines the creation and sharing of dashboards and reports while integrating that insight into third-party tools and workflows. The end result is that problems are fixed sooner and IT is better equipped to maintain peak service performance.

JDSU Network Instruments Observer 17 Platform

Expanded Web Application Insight

Since web-based applications have become the most common way for users to access a company’s online resources, the need for detailed operational information on these services continues to grow. The Observer Platform meets this need by providing IT teams with fine-grained metrics on end-user access methods, such as browser type and platform, alongside status and resource usage as they relate to web applications. This gives network and application managers the detail they need to quantify user access behavior and experience in order to solve problems.

JDSU Network Instruments Observer 17 Platform

As the network manager is increasingly relied upon to be the first responder, the Observer Platform helps network teams lead the charge to keep applications working smoothly. The new user interface, streamlined workflows, and transaction-level capabilities of the latest release provide the integrated intelligence that ensures IT and network teams can collaborate successfully in the delivery of services. Learn more about the Observer Platform 17 release.

Thanks to JDSU for the article. 

Observer Infrastructure: Adding The Device Performance Perspective

In April 2010, Network Instruments® announced the availability of Observer Infrastructure (OI), an integral element of their Observer® performance management solution, which complements their primary packet-oriented network and application performance monitoring products with an infrastructure viewpoint. An evolutionary replacement for Network Instruments Link Analyst™, OI has been significantly expanded and includes network infrastructure device health monitoring, IP SLA active testing, NBAR and WAAS data collection, and server monitoring. These new performance data sources are tightly integrated into a common console and reporting engine, Observer Reporting Server, providing quick and efficient workflows for navigating between packet and polling realms.

Issues

As network technologies have matured and architectural redundancies have improved availability, the focus of networking professionals has turned toward performance and optimization. Along with that shift comes a change in the types of issues demanding attention (away from failures and towards degradations) plus a change in scope (away from network device specifics and towards application and service awareness). Network performance management is the discipline of planning, monitoring, and managing networks to assure that they are delivering the applications and services which customers and end users consume and which underpin business processes. A high-performing network, managed in a way which is business-aware, can become a strategic asset to businesses of all sizes and purposes, and hence operations must also move from reactive firefighting of performance issues towards proactive prevention via methodical and vigilant monitoring and analysis.

Network performance management has been an active area of practice for decades. Initial efforts focused primarily on the health and activity of each individual network device, mostly using SNMP MIBs, both proprietary and standardized, collected by periodic polling. This approach was supplemented by now-obsolete standards such as RMON for providing views into traffic statistics on an application-by-application basis. Today, additional techniques for measuring various aspects of network performance have been established and are in broad use:

  • Synthetic agents provide samples and tests of network throughput and efficiency
  • Direct-attached probes inspect packets to track and illuminate performance across the entire stack
  • Flow records issued by network infrastructure devices record traffic details

So which one is the best? And which ones are important for achieving best practices for business-aware, proactive performance management? In the end, no single method meets all the needs. The best approach is to integrate multiple techniques and viewpoints into a common operational monitoring and management platform.

Building the Complete Performance Management Picture

Making the transition from reactive to proactive and from tactical to strategic in the long term requires the assembly of a complete performance management picture. And as with any journey, there are options for where to start and how to get there. Most practitioners start with a focus on the network infrastructure devices by adding basic health monitoring to their fault/event/alert regime, but find it insufficient for troubleshooting. Others will deploy packet monitoring, which provides application awareness together with network-layer details and definitive troubleshooting, but find that collecting packets everywhere is difficult to achieve. Still others will look to NetFlow to give them insights into network activity, or perhaps deploy synthetic agents to give them the 24×7 coverage for assuring critical applications or services, but these approaches have their shortcomings as well.

Where you start may not be as important as where you end up. Each measurement technique has something important to add to the operations picture:

  • SNMP and/or WMI polling gives you details about the specific health and activity within an individual network device or node – important for device-specific troubleshooting and for capacity planning.
  • SNMP can also be used to gather specific flow-oriented performance metrics from devices that offer application recognition for optimization and security, such as Cisco’s WAAS (Wide Area Application Services) solution and NBAR (Network-Based Application Recognition) features.
  • Agent-based active or synthetic testing, such as the IP SLA features resident in Cisco network devices, enables regular/systematic assessment of network responsiveness as well as application performance and VoIP quality.
  • Packet inspection, either real-time or historical/forensic, is the ultimate source of detail and truth, revealing traffic volumes and quality of delivery across all layers of the delivery stack, and is indispensable for sorting out the really tough/subtle/intermittent degradation issues.
  • NetFlow (or other similar flow record formats) provides application activity data where direct packet instrumentation is not available or practical (a minimal decoding sketch follows this list).
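
As a concrete illustration of the flow-record option, the sketch below listens for NetFlow v5 datagrams and decodes the address, port, and volume fields of each record. The field offsets follow the fixed, published v5 layout (24-byte header, 48-byte records); the bind address and port are illustrative.

```python
# Minimal sketch: receive and decode NetFlow v5 records.
# The v5 layout is fixed; the bind address/port are illustrative.
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))  # a commonly used NetFlow export port

datagram, exporter = sock.recvfrom(65535)
version, count = struct.unpack("!HH", datagram[:4])
if version == 5:
    for i in range(count):
        rec = datagram[24 + i * 48: 24 + (i + 1) * 48]  # skip 24-byte header
        src = socket.inet_ntoa(rec[0:4])
        dst = socket.inet_ntoa(rec[4:8])
        pkts, octets = struct.unpack("!II", rec[16:24])
        sport, dport, _pad, _flags, proto = struct.unpack("!HHBBB", rec[32:39])
        print(f"{src}:{sport} -> {dst}:{dport} proto={proto} "
              f"pkts={pkts} bytes={octets}")
```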

Ultimately, the better integrated these data sources and types are, the more powerful the solution. Integration must take place at multiple levels as well. At the presentation/analysis layer, bringing together multiple types of performance data improves visibility in terms of both breadth and depth. At the data/model layer, integration allows efficient workflows and more intelligent (and even automated) analysis by revealing trends, dependencies, and patterns of indicators that must otherwise be reconciled manually.

Network Instruments Observer Infrastructure

Network Instruments has introduced Observer Infrastructure (OI) to extend their network and application performance management solution by tightly integrating device-based performance data at both the data and presentation layers. OI adds the infrastructure performance perspective to the packet-based and NetFlow-based capabilities of the existing Observer Solution. It also contributes IP SLA support as well as support for other complementary SNMP-gathered data sets, such as Cisco’s NBAR and WAAS features. OI goes further, delivering visibility into virtualized servers via WMI, at both the virtual machine and hypervisor levels. Another new capability is OI’s support for distributed polling and collection, allowing the infrastructure perspective to be applied across large, distributed managed environments.

Key implications of the newly enhanced Observer Infrastructure solution include:

  • Faster, more effective troubleshooting via complementary viewpoints of performance within enabling or connected infrastructure elements.
  • Better planning capabilities, allowing engineers to match capacity trends with specific details of underlying traffic contributors and drivers.
  • More proactive stance, by leveraging synthetic IP SLA tests to assess delivery quality and integrity on a sustained basis.
  • Improved scalability of the total Observer Solution, via the newly distributed OI architecture.

EMA Perspective

ENTERPRISE MANAGEMENT ASSOCIATES® (EMA™) analysts strongly advocate a broad and balanced approach to network and application performance management, drawing from the unique and valuable contributions of several types of performance monitoring instrumentation strategies. In parallel, data from such hybrid monitoring architectures must be integrated into presentation and workflows in a way that facilitates operational efficiency and effectiveness and paves the way to proactive practices.

Network Instruments has a well-established footprint in packet-based solutions and substantial experience with infrastructure monitoring. The newly released Observer Infrastructure is an evolutionary product, based on stable, long-fielded technologies, that has been expanded in functionality and more tightly integrated with the rest of the Network Instruments Observer solution, including presentation through Observer Reporting Server and workflow links to GigaStor™. The result is a powerful expansion of the Observer solution, both for existing customers and for any network engineering and operations team looking for a comprehensive, holistic approach to network and application performance management.

Thanks to Network Instruments for the article.