Application Performance Management and the Cloud

The lack of innovation in traditional data centers has given way to developments in the cloud, which offers flexible usage models such as Pay As You Go (PAYG) and multi-tenancy services, e.g. Amazon Web Services (AWS). The downside is that as the cloud's capacity increases (400k registrations, AWS-Quadnet 2014), it is prone to more blackouts, security risks, and compliance risks than we are led to believe.

The IT environment has become more complex around the cloud. The continued convergence of platforms and technologies has created additional challenges: virtualization of legacy data centers, cloud hosting, Software Defined Networks (SDN), remote access, mobility (BYOD), and growing volumes of unstructured Big Data, part of which stems from consumerism and encompasses User Generated Content (UGC) such as social media (voice/video).

The confluence of hardware and software overlaid on an existing network architecture creates architectural complications in monitoring application and network performance, along with visibility blind spots: bandwidth growth across the vertical network between VMs and physical servers, security and compliance protocols for remote and cloud environments, and so on.

The interplay of these complexities, e.g. data packet loss, leaks, and packet segmentation in a virtualized environment, can lead to delays of more than a few seconds in software performance synchronization. This can cause brownouts (lag, latency, or degradation) and blackouts (crashes), which are detrimental to any commercial environment, such as a retail web UI, where a two-second delay in page loads (slow DNS) is far too much.

The issues in a virtualized cloud lie in the hypervisor, as it regularly changes IP addresses for VDIs. The real measurement challenge, then, is gaining insight into the core virtualized server environment.

When questioned, 79% of CTOs (Information Week study, 2010) cited software as "very important", yet only 32% of APM service providers actually use specialized monitoring tools for the cloud. Without deep insight into PaaS (Platform as a Service) and IaaS (Infrastructure as a Service), there is no visibility into the performance of applications and networks, so tracking degradation, latency, and jitter becomes like finding a needle in the proverbial infrastructure haystack.

The debate surrounding cloud visibility and transparency is yet to be resolved, partly because synthetic probes and passive agents only provide a mid-tier picture of the cloud. A passive virtual agent can be used to gain deep insight into the virtualized cloud environment. As the cloud market becomes more competitive, suppliers are being forced to disclose IaaS/PaaS performance data. Currently 59% of CTOs run software in the cloud (Information Week, 2011) without any specialized APM solution, so one can only monitor the end-user experience or the resources used (CPU, memory, etc.) to get some idea of application/network performance over the wire.

The imperative is ensuring that your APM provider can cope with the intertwined complexities of the network, application, infrastructure, and architecture. This means deploying a full arsenal of active and passive measuring tools, whether from a pure-play APM vendor or a full Managed Service Provider (MSP) of end-to-end solutions that can set up, measure, and translate outsourcing agreements and SLAs into core, critical, measurable metrics. Furthermore, new software/technology deployments can be compared to established benchmarks, allowing business decisions, such as application or hardware upgrades, to be made on current and relevant factual information, i.e. business transactions, end-user experience, and network/application efficacy.

The convergence, consumerism, challenges, and complexities based around the cloud have increased. So have the proficiencies of the leading APM providers in dealing with cloud complexity, using agentless data-collection mechanisms such as injecting probes into middleware or using routers and switches embedded with NetFlow data analyzers. The data is used to compile reports and dashboards on packet loss, latency, jitter, and so on. The generated reports allow comparisons of trends through semantic relationship testing, correlation, and multivariate analysis, with automated and advanced statistical techniques allowing CTOs and CIOs to make real-time business decisions that provide a competitive advantage.
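As a concrete illustration of the correlation analysis mentioned above, the sketch below computes a Pearson correlation coefficient between two performance series. The metric names and data are purely hypothetical; real APM platforms apply this kind of analysis (and richer multivariate techniques) across many metrics at once.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-interval measurements from a monitoring dashboard:
latency = [20.0, 25.0, 31.0, 40.0, 55.0]   # round-trip latency, ms
jitter = [2.0, 2.6, 3.1, 4.2, 5.4]         # inter-packet jitter, ms

r = pearson(latency, jitter)  # close to 1.0: the two metrics move together
```

A coefficient near 1.0 suggests the two symptoms share a cause and are worth investigating together; values near zero suggest unrelated issues.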

Thanks to APMDigest for the article.


The State of the Network 2015

Speedier network pipes that carry more information faster than ever before can be a windfall for enterprises looking to derive more business value from their underlying infrastructure. But there's a downside: when things speed up, they can get harder to manage.

Packets carrying crucial application information and content whizz by in a blur, posing visibility challenges IT pros haven't encountered before. Network and application performance management is undergoing sweeping changes, thanks not only to faster networks but also to migration to the cloud. And the implications for application and performance management are huge.

For the Network Instruments State of the Network Global Study 2015, we recently surveyed more than 300 CIOs, IT directors, and network engineers to get their take on how the migration to higher-capacity, faster networks has affected their day-to-day duties monitoring application performance. What we found is that even though it seems like 10 Gb only just went mainstream, the insatiable demand for fatter pipes to push more content-rich services is already outstripping its ability to deliver: nearly 1 in 4 organizations have already deployed 25 or 40 Gb, with upwards of 40% planning to do so in 2015.

More interesting was the fact that 60 percent had no plans to consider these speeds. We interpret this as a clear indicator that 25 and 40 Gb are, at best, short-term solutions to the ever-increasing demand for bandwidth, with 100 Gb being the end game (until 400 Gb arrives, at least!). Results certainly suggest this, with 44 percent planning to deploy 100 Gb by 2016.

Network teams are in a bind: do they get by with their existing 10 Gb infrastructure, maximizing their current investments while waiting for 100 Gb price points to become more cost effective? Or are 25 or 40 Gb a viable stop-gap solution? It's a difficult choice that each organization will need to consider carefully as it develops its network requirements for the next 5 to 10 years. It's amazing to think that 10 Gb, having reached market dominance in the data center core only in the past few years, will likely be completely displaced in the largest, most demanding core environments in the next 2 to 3 years.

Of course, other technologies are maturing simultaneously and must be assessed in parallel. Ongoing cloud growth is now clearly a given, with nearly 75 percent expecting to deploy private cloud and more than half public cloud by 2016. This will certainly complicate the process of quantifying network bandwidth (and latency) needs to ensure services continue to satisfy users' expectations wherever they may reside.

Likewise, the abstraction of all things related to IT infrastructure continues, with software-defined networking (SDN) rollouts expected to reach 50 percent by 2016. This too is an impressive number and speaks to the urgency of organizations as they drive to simplify network management, enable more scalability, improve agility, and reduce dependency on a single vendor.


Gigantic Implications for Performance Management

All these trends have gigantic implications for performance management. How will the tools needed to validate service delivery keep up with the deluge of packets? Since packets don’t lie, having at least the option of analyzing and capturing all the traffic traversing the network means vendors’ performance management solutions will need to continue offering their customers high-speed capture and long-term storage of this critical data.
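To make the capture-and-retain idea concrete, here is a minimal sketch of a rolling capture buffer: packets are stored up to a fixed capacity and the oldest are evicted first, roughly how a capture appliance retains a window of traffic before spooling it to long-term storage. The class name and capacity value are illustrative, not any vendor's API.

```python
from collections import deque

class CaptureBuffer:
    """Rolling packet store: keeps at most `capacity` packets,
    evicting the oldest first (illustrative, not a vendor API)."""

    def __init__(self, capacity):
        self._packets = deque(maxlen=capacity)

    def capture(self, packet_bytes):
        self._packets.append(packet_bytes)

    def dump(self):
        """Return the retained window, oldest first."""
        return list(self._packets)

buf = CaptureBuffer(capacity=3)
for i in range(5):
    buf.capture(f"pkt-{i}".encode())
# Only the three most recent packets remain: pkt-2, pkt-3, pkt-4
```

At 100 Gb line rates the same trade-off applies at vastly larger scale, which is why capture speed and storage depth are key criteria for performance management solutions.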

From a cloud perspective, how will effective application visibility be maintained when hosting is done outside the confines of the traditional data center? Network teams are seeking ways of achieving this goal. Server virtualization – now effectively a given, with nearly 90 percent of respondents planning to virtualize by 2016 – was yesterday's abstraction challenge. SDN will throw down a new gauntlet to maintaining monitoring visibility as the network itself is virtualized. Again, those responsible for network and infrastructure performance need assistance here.

So What Can Be Done? Below are several best practices for navigating this new landscape.

  • New ways of analyzing (including multivariate analytics and correlation), displaying, and reporting on infrastructure, network, and service health will need to be developed. Innovative instrumentation methods that can be deployed remotely and/or in ways that can be accomplished wherever services are currently deployed must be made available.
  • Maintaining visibility in SDN environments at the control and data planes will need to be addressed. Underlying infrastructure concerns don't go away with virtualization and in fact grow as increasing loads are placed on supporting hardware; monitoring solutions must provide this visibility as well.
  • Automating this activity as much as possible will enable faster troubleshooting, while concepts like RESTful APIs will enable tighter cross-platform solution integration and facilitate IT functional collaboration. These initiatives will ease the burden on network teams, shorten time-to-resolution, and ensure optimal service delivery. Just in time, too, since the SOTN findings also show that the same groups responsible for these duties must spend increasing amounts of time addressing security threats: almost 70% of network teams already spend up to 10 hours per week on them, and another 26% spend more than that.

These are exciting but challenging times for IT performance management. Emerging technologies offer great promise for enhanced service delivery capabilities. At the same time, the challenges are considerable: maintaining operational visibility so problems are quickly resolved, achieving optimal service performance, and increasing the ability to integrate across IT functional groups and solutions.

Thanks to APM Digest for the article.

Virtual Server Rx


The ongoing push to increase server virtualization rates is driven by its many benefits for the data center and business. A reduction in data center footprint and maintenance, along with capital and operating cost reductions are key to many organizations’ operational strategy. The ability to dynamically adjust server workloads and service delivery to achieve optimal user experience is a huge plus for IT teams working in the virtualized data center – unless something goes wrong.

With network infrastructure, you can usually track the root cause back to one location via careful north/south instrumentation of the resources. Troubleshooting is then facilitated with any number of free and commercial monitoring tools.

How can you get the same visibility you need to validate service health within the virtualized data center?

First Aid for the Virtual Environment

Network teams often act as “first responders” when application performance degrades. For this reason, it’s critical to maintain visibility into and around virtual constructs for effective troubleshooting and optimal service delivery. Otherwise, much of the value of server virtualization and consolidation efforts can be quickly offset by sub-par app performance. Fundamentally, achieving comprehensive visibility of a virtualized server environment requires an understanding of the health of the underlying resources.

Health Checks

Polling technologies such as SNMP, WSD, and WMI can provide performance insight by interrogating the host and various virtualized elements. A fully-integrated performance management platform can not only provide these views, but also display relevant operating metrics in a single, user-friendly dashboard.

Virtual servers are often highly provisioned and operate at elevated utilization levels. Assessing their underlying health, and adding resources when necessary, is essential for peak performance.

Use performance monitoring tools to check:

  • CPU Utilization
  • Memory Usage
  • Individual VM Instance Status

Often, these metrics can point to the root cause of service issues that may otherwise manifest themselves indirectly.

For example, poor response time of an application hosted on a virtualized server may have nothing to do with the service or the network, but may instead be tied to excessively high CPU utilization. Without this monitoring perspective, troubleshooting will be more difficult and time consuming.
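A minimal sketch of the kind of threshold-based health check described above, in Python. The threshold values and metric names are hypothetical; a production tool would poll these figures from the hypervisor (e.g. via SNMP or WMI) rather than receive them as a dict.

```python
# Hypothetical warning thresholds; real values depend on the environment.
THRESHOLDS = {"cpu_pct": 85.0, "mem_pct": 90.0}

def health_check(metrics):
    """Return a list of alerts for one virtualized host."""
    alerts = []
    # CPU and memory utilization checks against the thresholds above.
    for name, limit in THRESHOLDS.items():
        if metrics.get(name, 0.0) > limit:
            alerts.append(f"{name} above {limit}")
    # Individual VM instance status checks.
    for vm, status in metrics.get("vms", {}).items():
        if status != "running":
            alerts.append(f"{vm} is {status}")
    return alerts

# Illustrative sample: CPU is hot and one VM instance is down.
sample = {"cpu_pct": 92.5, "mem_pct": 71.0,
          "vms": {"vm-app01": "running", "vm-db01": "stopped"}}
alerts = health_check(sample)
```

In the slow-application scenario above, a check like this would surface the CPU alert immediately, pointing the troubleshooter at the host rather than the service or the network.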

Further Diagnostics

Virtualization and consolidation offer significant upside for today's dynamic data center model and for achieving optimal IT business service delivery. However, monitoring visibility must be maintained so potential application degradation issues can be detected and resolved before they impact the end user. To do so, care must be taken to properly instrument virtualized server deployments and the supporting network infrastructure.

Ready for more? Download the free white paper 3 Steps to Server Virtualization Visibility, featuring troubleshooting diagrams, metrics, and detailed strategies to help diagnose what’s really going on in your virtual data centers. You’ll learn two methods to monitor VSwitch traffic, as well as how to further inspect perimeter and client conversations.

Download 3 Steps to Server Virtualization Visibility

Thanks to Network Instruments for the article. 

Enterprises- Ensure Application Performance and Security Resilience

For almost every enterprise, the network is your business. Your network and applications are what connect you to your customers. Maintaining network vitality for an optimal user experience is key to business growth and profitability. But today's networks are under tremendous pressure. User expectations for high performance and innovative applications are ever-increasing. So too are the frequency, magnitude, and sophistication of the security attacks that adversaries launch to infiltrate your network, steal data, or disrupt operations.

Achieving a secure network that is resilient to attack requires the selection and deployment of security devices such as firewalls and intrusion prevention systems. To meet expectations for application performance, devices such as load balancers, application controllers, and performance monitoring tools are also deployed in the network. Ixia is focused on helping to ensure security resilience and application performance in your network.

Security Resilience

The demands on the network are constant, and your security must be resilient enough to maintain its effectiveness as it comes under attack, is challenged to maintain visibility into traffic and events across the network, or simply needs an operational change to deploy the latest threat updates. Ixia's portfolio of security solutions allows enterprises to:

  • Optimize security device investments such as IPS, firewall, NGFW, or DDoS mitigation by helping you select the best technology with the right performance, and deploy it most effectively with network visibility and optimal load balancing.
  • Minimize downtime and improve operational change control for security upgrades by validating security updates and changes and providing the inline deployment tools to ensure that these changes are not disruptive to network operations.
  • Train and prepare for realistic cyber security exercises with systems that can create the real-world application loads and attack traffic required for a cyber range and also provide the visibility required to stream high volumes of events to SOC tools to monitor the exercises.

Application Performance

It has become critical to assess applications and their performance not only before going live, to ensure they are customer-ready, but also over time, by monitoring the network and ensuring visibility into key application flows anywhere on the network. Ixia's portfolio of application performance solutions helps enterprises do exactly that.

Thanks to Ixia for the article. 

Aligning IT with Business via Performance Management

Much of the discussion around the Observer Platform 17 release has focused on how the designs of the new user interface (UI) and other enhancements will assist network and operations teams to more easily manage service and application performance.

This performance data and analysis isn’t just of value to IT but to the overall business. The challenge for performance management solutions has been providing this intelligence in a way that can be easily accessed and understood by other IT and business teams. The Observer Platform 17 both expands useful analysis available to business groups and makes it easier to use the data with systems familiar to these groups.

Enhancement: Expanding Web Service Analytics

  • Benefit: Strengthens visibility into how users consume company web resources, specifically as it relates to a web-based app’s device parameters like OS, mobile and desktop platform details, and browser type.
  • Business Value: Knowing not just “what” but “how” customers are accessing data is pivotal to optimizing web content and quantifying the effectiveness of customer-facing web interactions.
  • In Practice Example: For the marketing team launching web initiatives, these metrics provide details on how visitors are accessing the website, and enhance their understanding of the user experience by providing response-time and error metrics. Additionally, when network-based problems occur that impact marketing web programs, they can be resolved by the network team which has access to the packets.


Enhancement: Third-Party System Integration via RESTful APIs

  • Benefit: Simplifies sharing of performance data with other groups. A RESTful API is a programming interface that uses HTTP requests such as GET, PUT, POST, and DELETE. Using this universal access method enables any solution to connect to the Observer Platform to access data or even manage the solution remotely.
  • Business Value: Other teams in an organization can interact and view performance data and analysis from the Observer Platform from the tools and workflows that they use on a daily basis. This allows them to proactively track performance of critical business systems, and view these metrics alongside business metrics.
  • In Practice Example: The support staff for a retail chain could integrate the Observer Platform into their helpdesk system via Apex's RESTful API to monitor point-of-sale (PoS) systems on their network. The Observer Platform could instantly alert the service desk to an anomaly or system condition that could soon negatively impact users. The early alerts, performance analysis, and access to packets allow the staff to take proactive steps to remediate the issue before it affects the PoS systems and customers.
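As an illustration of the RESTful pattern described above, the sketch below builds (but does not send) an authenticated HTTP request using Python's standard library. The base URL, path, and token are invented for the example and are not the Observer Platform's actual API.

```python
import urllib.request

BASE = "https://observer.example.com/api"  # hypothetical endpoint, not the real API

def build_request(method, path, token):
    """Construct (but do not send) an authenticated HTTP request,
    showing the GET/PUT/POST/DELETE verbs a RESTful API relies on."""
    req = urllib.request.Request(BASE + path, method=method)
    req.add_header("Authorization", "Bearer " + token)
    req.add_header("Accept", "application/json")
    return req

# e.g. a helpdesk integration polling for high-severity alerts:
req = build_request("GET", "/alerts?severity=high", token="example-token")
# urllib.request.urlopen(req) would send it in a live integration
```

Because the verbs and headers are plain HTTP, any helpdesk or event management system that can issue web requests can consume the data, which is precisely the integration benefit described above.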


With IT playing a key role in helping businesses to develop competitive advantages and nimbly respond to changing markets, it’s critical that network teams can facilitate the sharing of performance intelligence. This also allows IT and business teams to evaluate the success of business operations and initiatives. The new features of the Observer Platform 17 mark a significant step forward in enabling the network team and IT to more closely align with business processes and goals.

Thanks to Network Instruments for the article. 

Ixia’s new Ebook- The Network Through a New Lens: How a Visibility Architecture Sharpens the View

“Enter the Visibility Architecture”

“Buying more tools to deal with spiraling demands is counter-productive – it's like trying to simplify a problem by increasing complexity. Visibility merits its own architecture, capable of addressing packet access and packet stream management. A visibility architecture that collects, manages, and distributes packet streams for monitoring and analysis is ideal for cost savings, reliability, and resilience. The economic advantages of such end-to-end visibility are beyond debate.

An architectural approach to visibility allows IT to respond to the immediate and long-range demands of growth, management, access, control, and cost issues. This architecture can optimize the performance and value of tools already in place, without incurring major capital and operational costs. With the ability to see into applications, a team can drill down instantly from high-level metrics to granular details, pinpoint root causes, and take action at the first – or even before the first – sign of trouble, lowering Mean Time to Repair (MTTR) dramatically.

A scalable visibility architecture provides resilience and control without adding complexity. Because lack of access is a major factor in creating blind spots, a visibility architecture provides ample access for monitoring and security tools: network taps offer reliable access points, while NPBs contribute the advanced filtering, aggregation, deduplication, and other functions that make sure these tools see only traffic of interest.

Application- and session-aware capabilities contribute higher intelligence and analytical capabilities to the architecture, while policy and element management capabilities help automate processes and integrate with existing management systems. Packet-based monitoring and analysis offers the best view into the activity, health, and performance of the infrastructure. Managing a visibility architecture requires an intuitive visual/graphical interface that is easy to use and provides prompt feedback on operations – otherwise, the architecture can become just another complexity to deal with.”
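The MTTR figure mentioned in the excerpt is simply the average time from detection to resolution. A small sketch with hypothetical incident timestamps:

```python
from datetime import datetime

# Hypothetical incident log: (detected, resolved) timestamp pairs.
incidents = [
    (datetime(2015, 3, 1, 9, 0), datetime(2015, 3, 1, 9, 45)),
    (datetime(2015, 3, 2, 14, 0), datetime(2015, 3, 2, 15, 30)),
    (datetime(2015, 3, 5, 11, 0), datetime(2015, 3, 5, 11, 15)),
]

def mttr_minutes(log):
    """Mean Time to Repair: average detection-to-resolution time in minutes."""
    total = sum((end - start).total_seconds() for start, end in log)
    return total / len(log) / 60.0

# (45 + 90 + 15) / 3 = 50 minutes
```

Tracking this number before and after a visibility rollout is one straightforward way to quantify the improvement the ebook claims.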


The Ixia Network Visibility Architecture encompasses network and virtual taps, inline bypass switches, inline and out-of-band NPBs, application- and session-aware monitoring, and a management layer.
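One of the NPB functions mentioned above, deduplication, can be sketched in a few lines: identical frames (for example, the same packet tapped at two points) are dropped so downstream tools see each packet only once. This hash-based approach is illustrative only; hardware NPBs typically deduplicate within a time window at line rate.

```python
import hashlib

def dedupe(packets):
    """Keep the first copy of each distinct packet, drop exact duplicates."""
    seen, unique = set(), []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()  # fingerprint of the raw bytes
        if digest not in seen:
            seen.add(digest)
            unique.append(pkt)
    return unique

# The same frame captured at two tap points appears twice in the stream:
stream = [b"\x01frameA", b"\x02frameB", b"\x01frameA", b"\x03frameC"]
unique = dedupe(stream)  # frameA's duplicate is removed
```

Removing duplicates before traffic reaches analysis tools keeps their counters accurate and reduces the load they must process, which is the point of placing this function in the packet broker layer.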

Download the ebook here


Thanks to Network World for the article. 

3 Top Features of Observer Platform 17


Network Instruments has announced the newest edition of the Observer Performance Management Platform. With redesigned, easy-to-use interfaces and workflows, expanded web user visibility, increased transactional intelligence, and enhanced integration with third-party tools and applications, Observer Platform 17 offers the network team a comprehensive, intuitive approach.

What’s New

Several components of the Observer Platform have changed significantly with this release. Observer Reporting Server (ORS) is now Observer Apex and Network Instruments Management Server (NIMS) is now Observer Management Server (OMS). Both are fully integrated within the Observer Platform and provide key capabilities for a comprehensive performance management solution.

“The Observer Platform 17 release uniquely positions the Platform for the future of IT and the network manager’s evolving role as the key troubleshooter and enabler of technology adoption throughout the enterprise,” says Charles Thompson, CTO. A major focus of the new features was to bring enhanced ease-of-use to this powerful performance management toolset.

Top 3 Features

With Observer Analyzer, Observer GigaStor, Apex, OMS, Observer Matrix, Observer Probes, Observer Infrastructure, and Observer nTAPs, the Observer Platform is a fully integrated solution offering the following new features for this release.

Intuitive User Interface

The newly designed, intuitive user interface (UI) of the Observer Platform's Apex and OMS components takes ease-of-use to the next level. Both Apex and OMS feature a user-friendly interface with the same drag-and-drop, right-click, and scrolling capabilities you'd expect from any web-based application. The result is a lower learning curve and the ability to react quickly to network issues. The modern HTML5 UI offers visual representations of the data you seek in a number of different formats. With Apex, use pre-built dashboards and reports, or create your own with the metrics that matter most to your organization. Share data with other business units in a format that is digestible and easy to understand.

Enhanced Web Service Analytics

When it comes to how effectively your web-based apps and services are running, the customer is always right. For complete service assurance, you need the fine-grained, end-user operating parameters that the Observer Platform can provide. From client device response time to browser type, and even operating system (OS), get granular views into actual systems and applications. Observer Analyzer’s HTTP traffic metrics include details on browser type and OS, as well as the ability to correlate this information with response times, errors, and more.

Third-Party Integration

A RESTful API allows Apex to provide the easy sharing and management of performance data with complementary IT initiatives like event management or service orchestration to bring all of your performance management tools together. With OMS, this functionality allows for the central authentication, authorization, and auditing of the Observer Platform.

Better Results

The new Observer Platform features give network professionals a more productive tool to stay on top of key IT trends and challenges while:

  • Proactively pinpointing performance problems and optimizing services
  • Integrating monitoring into the deployment of IT initiatives, including cloud, service orchestration, and security
  • Easily managing access and sharing performance data with IT teams and business units
  • Quickly assessing and optimizing user experience with web services

“Utilizing the newest features in the Observer Platform, network teams are well prepared for their constantly changing role by achieving quicker root-cause analysis, understanding applications in-depth, and more easily sharing performance data with non-network teams,” says Thompson.

The Observer Platform is a full-service IT solution for optimizing application and network performance management. Each part of the system fits precisely together with all other components, increasing capabilities, power, and speed.

Thanks to Network Instruments for the article.