Network Managers Lead the Charge with Observer Platform 17

The following blog post is by Steve Brown, the director of product marketing for Network Instruments, a business unit of JDSU.

The management of IT resources has changed significantly over the past several years. With the implementation of cloud, unified communications (UC), and Big Data initiatives, locating the source of application or service delivery issues has become increasingly complex. As a result, the network team often functions as a first responder, ensuring the continued and uninterrupted delivery of critical services.

In our Seventh Annual State of the Network survey earlier this year, we discovered that nearly three-quarters of IT managers cited their top application troubleshooting challenge as determining the root cause of performance problems. These problems take too much time to isolate and repair, and this downtime has a real impact on the bottom line. According to research firm Aberdeen Group, every hour of downtime costs an organization $163,674.

To effectively turn this tide, IT teams need comprehensive data at their fingertips that brings together all operational perspectives on the network, from systems to applications, in a single view. This is something at which the Observer Performance Management Platform excels.

The Observer Performance Management Platform Version 17 delivers high-level, real-time performance views, applies advanced application analytics, reliably captures every packet, and polls every infrastructure resource for accurate resolution. It helps network teams lead the charge in ensuring service availability by:

  • Facilitating closer coordination between the IT teams
  • Providing greater web application insight

Improve IT Coordination

The latest release of the Observer Platform simplifies the process of sharing critical performance insight with other IT teams. Through new user interfaces and RESTful APIs, this powerful solution streamlines the creation and sharing of dashboards and reports while integrating this insight into third-party tools and workflows. The end result: problems are fixed sooner, and IT is better equipped to maintain peak service performance.


Expanded Web Application Insight

Since web-based applications have become the most common way for users to gain access to a company’s online resources, the need for detailed operational information into these services continues to grow. The Observer Platform meets this need by providing IT teams fine-grained metrics into end-user access methods such as browser type and platform, alongside status and resource usage as it relates to web applications. This provides network and application managers the detail they need to quantify user access behavior and experience to solve problems.


As the network manager is increasingly relied upon to be the first responder, the Observer Platform helps network teams lead the charge to keep applications working smoothly. The new user interface, streamlined workflows, and transaction-level capabilities of the latest release provide the integrated intelligence that IT and network teams need to collaborate on the successful delivery of services. Learn more about the Observer Platform 17 release.

Thanks to JDSU for the article. 

JDSU’s Network Instruments Unveils Observer Platform 17 for Simplified Performance Management and Application Troubleshooting


Network Instruments, a business unit of JDSU (NASDAQ: JDSU), has announced the latest edition of its cornerstone Observer® Performance Management Platform. Observer Platform 17 provides network managers and engineers a comprehensive, intuitive approach to:

  • Proactively pinpoint performance problems and optimize services.
  • Integrate monitoring into the deployment of IT initiatives, including cloud, service orchestration and security.
  • Easily manage access and share performance data with IT teams and business units.
  • Quickly assess and optimize user experience with web services.

This comprehensive performance management solution also features redesigned, easy-to-use interfaces and workflows, expanded web user visibility, increased transactional intelligence, and enhanced integration with third-party tools and applications.

“The Observer 17 release uniquely positions the Platform for the future of IT and the network manager’s evolving role as a primary enabler of technology adoption throughout the enterprise and as a key troubleshooter,” said Charles Thompson, chief technology officer for Network Instruments. “Other IT teams are looking to the network team for greater application troubleshooting and support. Utilizing the newest features in Observer, they are well prepared for their constantly changing role by achieving quicker root-cause analysis, understanding applications in-depth, and easily sharing performance data with non-network teams.”

Key enhancements include:

  • Intuitive, drag-and-drop interface and streamlined workflows transform Network Performance Management from an ad hoc practice to a proactive, collaborative process. IT can now manage, monitor, and assess performance in two clicks.
  • Third-party system integration via Representational State Transfer (RESTful) APIs facilitates easier sharing of performance intelligence with other groups, as well as integration of monitoring technologies for successful service orchestration across cloud, Software-Defined Networking (SDN) and virtual deployments.
  • Enhanced Web Services Analytics provide greater insight into how end users are experiencing web services through expanded client-based and comparative details.
  • Deeper transaction-level intelligence and correlative analysis allow for quicker and more effective application troubleshooting. With access to greater granularity, network teams are able to more easily assess relationships between transaction details and other performance variables for a higher degree of actionable insights.

The new Observer Platform features give network professionals a more productive tool for staying on top of key IT trends and challenges such as:

IT Automation—As businesses increasingly transition to automated, self-service IT models involving the deployment of cloud, SDN and virtualized environments, these dynamic services are often being implemented without adequate monitoring. To minimize the ‘black holes’ created when users roll out or move IT services and resources without the network team’s knowledge, the new Observer Platform ties together the provisioning of IT resources with the automatic deployment of monitoring tools via RESTful API for complete visibility.
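As a rough illustration of tying provisioning to monitoring, the sketch below builds the kind of JSON registration request an automation workflow might send to a monitoring platform's REST API the moment a new virtual server comes online. The endpoint path, field names, and values are assumptions for illustration only, not the documented Observer API.

```python
import json

# Hypothetical sketch: register a freshly provisioned virtual server with a
# monitoring platform in the same automation step that creates it, so no
# monitoring "black hole" appears. Field names and the endpoint path are
# illustrative assumptions, not the documented Observer API.

MONITORING_ENDPOINT = "/api/v1/monitored-devices"   # assumed path

def build_monitoring_request(hostname, address, service):
    """Build the JSON body for a hypothetical 'add monitored device' call."""
    return {
        "device": {"hostname": hostname, "address": address},
        "service": service,
        "polling": {"method": "SNMP", "interval_seconds": 60},
    }

payload = build_monitoring_request("web-vm-042", "10.1.20.42", "payroll-portal")
body = json.dumps(payload)

# In a real workflow this body would be POSTed to MONITORING_ENDPOINT with
# urllib.request or similar, immediately after the VM is provisioned.
print(body)
```

The point of the pattern is simply that the provisioning step and the monitoring registration are one atomic workflow, so visibility arrives with the resource rather than after it.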

IT Alignment Across the Business—In delivering Unified Communications and Big Data initiatives, application teams now turn to network teams to lead the charge on metrics, intelligence, and application troubleshooting. RESTful APIs and improvements in user management make it easier to integrate Observer 17 into external workflows and processes to help share network performance data with non-network teams.

Monitoring Mobile Experience—With movements to the cloud and increased access to web services from mobile devices, Observer 17 brings a higher level of visibility and insight into how end users are experiencing web services, regardless of the device. Observer now provides comparative visibility by browser type and operating system, alongside performance metrics, to determine whether the user experience is the same on a desktop or a mobile device.

Observer Platform 17 is currently available and includes Observer Apex™ (previously called Observer Reporting Server), Observer Management Server (formerly called Network Instruments Management Server), Observer GigaStor™, Observer Analyzer, and Observer Infrastructure products.

Thanks to JDSU Network Instruments for the article.

Surviving the Supersize

Top convenience store chain triumphs over a complete network overhaul – what they learned saves up to $930k per day in lost sales.

Last year alone, total global convenience store sales topped off at $199 billion, not including fuel. Besides all the soda pop, pizza, and aspirin, today’s chains provide a host of other products and services, including ATM, money order, and wire transfers. Behind this 24/7 commitment is a lot of infrastructure, not to mention IT teams working to ensure reliable network and application delivery to power these quick and easy retail options.

Recently a U.S. chain with its headquarters in the Midwest embarked on a super-sized, multi-year network overhaul that threatened to take their business offline. Besides an upgrade to 10 Gb in their data center and disaster recovery sites, they set about virtualizing nearly 90 percent of their servers and implemented two new business-critical applications. Plus, since they had more bandwidth with 10 Gb, they added VoIP as well.

During the 10 Gb upgrade, it was crucial to guarantee visibility into the new applications and evolving technologies, while continuing to provide their end users with the same level of quality service they needed to do their jobs. “Monitoring data is all the same whether it’s on a gigabit or 10 Gb network,” said the LAN/WAN administrator for the convenience store chain. “You need to see it to troubleshoot it.”

To improve data center efficiencies and reduce costs, they first virtualized large portions of their infrastructure. “We wanted to eliminate our physical boxes,” the administrator said. “In addition to obvious infrastructure cost savings, it’s easier to operate in a virtual environment. This is certainly the case with disaster recovery where virtualized infrastructure is much easier to restore.”

Despite the numerous benefits of the virtualized network, however, the loss of visibility became immediately apparent. The network team couldn’t get comprehensive views into the communications between all of their virtual servers. This impacted their ability to provide answers to application designers, leaving them dependent upon the server team for information and troubleshooting. This unnecessary extra step slowed down problem resolution and meant that the team could no longer rely on their existing monitoring solutions.

SOLUTIONS AT THE SOURCE

When you’re running a business with sales of over $3.4 billion per year, every day of downtime has the potential to impact nearly a million dollars in sales. Already familiar with Network Instruments Observer for network analysis, the IT team purchased GigaStor because of its award-winning forensics capabilities and precision-troubleshooting technology.

The appliance allowed network engineers to rewind network activity to the exact date and time that performance problems occurred, revealing the source with clarity. “Once our team saw how effectively Observer monitored current network activity, we knew we could benefit from the retrospective analysis features of GigaStor,” said the administrator. “It quickly became the key asset for resolving any problems we had. It reduced the number of times the network was blamed, and shaved hours to days off the problem resolution process. We could now show other IT teams everything that occurred, and prove the network was functioning properly.”


To resolve the visibility issue, the network administrator created a SPAN off his vSwitch to mirror virtual communications and push these packets to GigaStor for in-depth analysis. The network team gained full visibility into all virtual networks – and regained network control because they no longer needed to rely on the server team to resolve network problems.

APPLICATION DENIED

Next on the upgrade agenda was the deployment of new applications. IBM Maximo®, an internally hosted app for asset and inventory management, tracks all in-store, IT, and engineering inventory. With the company’s retail locations relying on Maximo to ensure that store shelves stay stocked, it’s one of the organization’s most essential applications. However, as it was first deployed across the enterprise, the program inexplicably shut down.

The network team immediately encountered issues with the software locking up and what appeared to be a loss of network connectivity. Without accurate inventory management, shelves go empty, sales drop, and so do share prices.

Working with IBM Maximo support, the IT team had to prove that the network functioned properly and pinpoint the actual application error.

“We spent a lot of time monitoring users and servers, while simultaneously setting up tests to verify the network was solid and to diagnose the actual problem,” the network administrator said. “While troubleshooting with GigaStor, we figured out it was a Java® error within the software, but there wasn’t an actual error code for the program to relay the error message back to the end user or the developers. Instead, the process would stop and shut down after it timed out. Once we located this, we turned over the GigaStor capture data to the Maximo team. They were then able to confirm and address the application issue.”


CLOUDY PERFORMANCE

In addition to troubleshooting issues with internally hosted applications like Maximo, the IT team continuously monitors cloud applications. Even though these apps are managed outside the company’s IT department, user complaints and performance problems are first reported to the network administrator.

This was the case when the new cloud-hosted self-service payroll application began locking up and freezing. Human Resources had automated its payroll functions by shifting to an online HR self-service application. Designated as a business-critical site, the payroll web service was a new resource that the network team needed to monitor vigilantly, because nobody can afford to miss a paycheck.

“Using pings and synthetic transactions, we were unable to get back the requested data from the site,” the administrator explained. “With GigaStor we verified that while our data was going out, we weren’t seeing the expected data coming back. We shared this information with the provider, and they went back and detected an IP misconfiguration issue on their side.”
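As a rough sketch of the pings-and-synthetic-transactions approach the administrator describes, the snippet below judges a single synthetic HTTP transaction: the check passes only when the expected data actually comes back. The URL, expected payload, and timeout are illustrative assumptions, not the team's actual configuration.

```python
import urllib.request
from urllib.error import URLError

# Hedged sketch of a synthetic transaction check: ask a critical web service
# for a known page and flag the check as failed when the expected data never
# comes back, mirroring the symptom described above. The URL is a placeholder.

def transaction_ok(status, body, expected):
    """Judge one synthetic transaction: right status code plus expected data."""
    return status == 200 and expected in body

def run_check(url, expected, timeout=5.0):
    """Perform one check; a network error counts as 'data never came back'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return transaction_ok(resp.status, resp.read(), expected)
    except URLError:
        return False

# The judgment logic alone, with no network involved:
print(transaction_ok(200, b"payroll: ok", b"payroll: ok"))        # True
print(transaction_ok(200, b"<html>error</html>", b"payroll: ok"))  # False
```

A real deployment would run `run_check` on a schedule and alert when consecutive checks fail, which is exactly the signal that let this team hand the provider evidence of the misconfiguration.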

MAKING UC BETTER

Finally, it was time for the unified communications (UC) portion of the upgrade. The company had implemented a new hybrid Nortel Networks™ PBX system as a part of a new VoIP installation. But as soon as the system was up and running, it began dropping calls. This resulted in an unacceptable disruption in communications.

They used GigaStor to monitor all voice communications for call quality, consistency, and issue resolution in collaboration with the vendor’s support team. GigaStor quickly proved its worth once again.

“We operate a predominantly Cisco® network and run a pure Nortel VoIP system,” the administrator said. “VoIP problems are often blamed on the network, since Cisco doesn’t support Nortel Systems.”

To troubleshoot, the administrator placed a TAP in the VoIP environment where GigaStor collected data – proving that while the network stayed up, the primary PBX switch wasn’t responding during the issue timeframe. Once the network itself was excluded as the problem source, Nortel used the information provided by GigaStor to resolve the simple problem of a faulty switch.

Network Instruments’ GigaStor appliance played a pivotal role for the network team in the ongoing VoIP rollout support and fine-tuning. “We now rely on GigaStor to supply us with the information needed to get Nortel back on track,” said the administrator.

When it comes to super-sized upgrades, using the right network performance monitoring solutions is essential for effective troubleshooting, and it can save time and money. “Since proving the information using GigaStor, everything’s been running problem free. It’s great,” added the administrator.

FUTURE

As network traffic, volume, and demands increase, the convenience store company is looking to expand the amount of data its GigaStor appliances can retain. “As we deploy new applications and technologies, GigaStor will be central to ensuring the successful performance of our network and company,” said the administrator.

Thanks to Network Instruments for the article.

Observer Infrastructure: Adding The Device Performance Perspective

In April 2010, Network Instruments® announced the availability of Observer Infrastructure (OI), an integral element of their Observer® performance management solution, which complements their primary packet-oriented network and application performance monitoring products with an infrastructure viewpoint. An evolutionary replacement for Network Instruments Link Analyst™, OI has been significantly expanded and includes network infrastructure device health monitoring, IP SLA active testing, NBAR and WAAS data collection, and server monitoring. These new performance data sources are tightly integrated into a common console and reporting engine, Observer Reporting Server, providing quick and efficient workflows for navigating between packet and polling realms.

Issues

As network technologies have matured and architectural redundancies have improved availability, the focus of networking professionals has turned toward performance and optimization. Along with that shift comes a change in the types of issues demanding attention (away from failures and towards degradations) plus a change in scope (away from network device specifics and towards application and service awareness). Network performance management is the discipline of planning, monitoring, and managing networks to assure that they are delivering the applications and services which customers and end users consume and which underpin business processes. A high-performing network, managed in a way which is business-aware, can become a strategic asset to businesses of all sizes and purposes, and hence operations must also move from reactive firefighting of performance issues towards proactive prevention via methodical and vigilant monitoring and analysis.

Network performance management has been an active area of practice for decades. Initial efforts were focused primarily on the health and activity of each individual network device, mostly using SNMP MIBs, both proprietary and standardized, collected by periodic polling. This approach was supplemented by now-obsolete standards such as RMON for providing views into traffic statistics on an application-by-application basis. Today, additional techniques for measuring various aspects of network performance have been established and are in broad use:

  • Synthetic agents provide samples and tests of network throughput and efficiency
  • Direct-attached probes inspect packets to track and illuminate performance across the entire stack
  • Flow records issued by network infrastructure devices record traffic details

So which one is the best? And which ones are important in order to achieve best practices for business-aware, proactive performance management? In the end, no single method meets all the needs. The best approach is to integrate multiple techniques and viewpoints into a common operational monitoring and management platform.

Building the Complete Performance Management Picture

Making the transition from reactive to proactive and from tactical to strategic in the long term requires the assembly of a complete performance management picture. And as with any journey, there are options for where to start and how to get there. Most practitioners start with a focus on the network infrastructure devices by adding basic health monitoring to their fault/event/alert regime, but find it insufficient for troubleshooting. Others will deploy packet monitoring, which provides application awareness together with network-layer details and definitive troubleshooting, but find that collecting packets everywhere is difficult to achieve. Still others will look to NetFlow to give them insights into network activity, or perhaps deploy synthetic agents to give them the 24×7 coverage for assuring critical applications or services, but these approaches have their shortcomings as well.

Where you start may not be as important as where you end up. Each measurement technique has something important to add to the operations picture:

  • SNMP and/or WMI polling gives you details about the specific health and activity within an individual network device or node – important for device-specific troubleshooting and for capacity planning.
  • SNMP can also be used to gather specific flow-oriented performance metrics from devices that offer application recognition for optimization and security, such as Cisco’s WAAS (Wide Area Application Services) solution and NBAR (Network-Based Application Recognition) features.
  • Agent-based active or synthetic testing, such as the IP SLA features resident in Cisco network devices, enables regular/systematic assessment of network responsiveness as well as application performance and VoIP quality.
  • Packet inspection, either real-time or historical/forensic, is the ultimate source of detail and truth, revealing traffic volumes and quality of delivery across all layers of the delivery stack, and is indispensable for sorting out the really tough, subtle, or intermittent degradation issues.
  • NetFlow (or other similar flow record formats) provides application activity data where direct packet instrumentation is not available/practical.

Ultimately, the better integrated these data sources and types, the more powerful the solution. Integration must take place at multiple levels as well. At the presentation/analysis layer, bringing together multiple types of performance data improves visibility in terms of both breadth and depth. At the data/model layer, integration allows efficient workflows, and more intelligent (and even automated) analysis by revealing trends, dependencies, and patterns of indicators that must otherwise be reconciled manually.

Network Instruments Observer Infrastructure

Network Instruments has introduced Observer Infrastructure (OI) to extend their network and application performance management solution by tightly integrating device-based performance data at both the data and presentation layers. OI adds the infrastructure performance perspective to the packet-based and NetFlow-based capabilities of the existing Observer Solution. It also contributes IP SLA support as well as support for other complementary SNMP-gathered data sets, such as Cisco’s NBAR and WAAS features. OI goes further, delivering visibility into virtualized servers via WMI, at both the virtual machine and hypervisor levels. Another new capability is OI’s support for distributed polling and collection, allowing the infrastructure perspective to be applied across large, distributed managed environments.

Key implications of the newly enhanced Observer Infrastructure solution include:

  • Faster, more effective troubleshooting via complementary viewpoints of performance within enabling or connected infrastructure elements.
  • Better planning capabilities, allowing engineers to match capacity trends with specific details of underlying traffic contributors and drivers.
  • More proactive stance, by leveraging synthetic IP SLA tests to assess delivery quality and integrity on a sustained basis.
  • Improved scalability of the total Observer Solution, via the newly distributed OI architecture.

EMA Perspective

ENTERPRISE MANAGEMENT ASSOCIATES® (EMA™) analysts strongly advocate a broad and balanced approach to network and application performance management, drawing from the unique and valuable contributions of several types of performance monitoring instrumentation strategies. In parallel, data from such hybrid monitoring architectures must be integrated into presentation and workflows in a way that facilitates operational efficiency and effectiveness and paves the way to proactive practices.

Network Instruments has a well-established footprint in packet-based solutions and substantial experience with infrastructure monitoring. The newly released Observer Infrastructure is an evolutionary product based on stable, long-fielded technologies which has been expanded in functionality and also has been more tightly integrated with the rest of the Network Instruments Observer solution, including presentation through Observer Reporting Server and workflow links to GigaStor™. The result is a powerful expansion to the Observer solution, both for existing customers as well as any network engineering and operations team looking for a comprehensive, holistic approach to network and application performance management.

Thanks to Network Instruments for the article.


ROI for Network Performance Management


Introduction

Besides being technically savvy, today’s network professionals must be aware of the business factors surrounding their department’s activities and responsibilities. This means two things – becoming aware of the way in which the networking team supports the business directly, and understanding how to characterize your infrastructure equipment and management technology investments in terms of business value. Commonly, that means building a business case for any technology investment. This ENTERPRISE MANAGEMENT ASSOCIATES® (EMA™) brief focuses on the elements you’ll need to build a business case for investing in application-aware network performance management technology, and the ways in which Network Instruments® solutions align to help you to achieve returns.

Building the Case

Once you’ve embraced the need to deploy application-aware performance management tools, you’ll need to secure funding, set expectations, and ultimately measure the results of your project. The business case should accommodate a few major items: the cost of the solution, the qualitative and quantitative benefits that will be realized, and the timeframe over which all of this will take place.

Finding the Treasure

Implementing application-aware performance management tools should yield savings in two general categories: more efficient problem resolution, often referred to as reduced Mean Time to Repair/Restore (MTTR); and better avoidance of problems in the first place, often referred to as extended Mean Time Between Failures (MTBF).

MTTR reduction is usually the number one reason performance management solutions are deployed. There are several elements to consider, all of which help to shorten the time it takes you and your team to recognize issues, get to the bottom of problems, and get applications and services restored. You will need detailed and comprehensive visibility into what’s happening on the network, powerful analysis tools to help you interpret what you see, a tightly integrated workflow, and easy ways to share data and collaborate with peers and end users.

Second, extending MTBF is achieved by recognizing and proactively avoiding problems before they happen. One area of such savings will be the use of historical trending reports to analyze traffic growth, allowing network engineers to better execute informed capacity planning. An important side benefit is the opportunity to avoid costly and labor-intensive network (and especially WAN) upgrades by revealing the mix of critical and non-critical traffic. Another area of MTBF advantage is the recognition of early indicators of non-network problems that can be gathered with a network-facing performance management solution. In this case, you should look for key indicators, like application response time or VoIP call quality, that show possible problems worthy of proactive investigation by other IT teams.
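To make the capacity-planning point concrete, here is a minimal sketch, using entirely invented numbers, of fitting a linear trend to historical link-utilization reports and projecting when the link would cross a planning threshold.

```python
# Minimal sketch of trend-based capacity planning, with invented numbers:
# fit a least-squares line to six months of link utilization and project
# when the link crosses an assumed 80% planning threshold.

monthly_utilization_pct = [42, 44, 47, 49, 52, 54]    # last six months, one link

n = len(monthly_utilization_pct)
xs = list(range(n))                                   # month index: 0..5
x_mean = sum(xs) / n
y_mean = sum(monthly_utilization_pct) / n

# Least-squares slope and intercept, computed by hand (stdlib only).
slope = (sum((x - x_mean) * (y - y_mean)
             for x, y in zip(xs, monthly_utilization_pct))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

threshold_pct = 80.0                                  # assumed planning threshold
crossing_month = (threshold_pct - intercept) / slope  # x where trend hits 80%
months_remaining = crossing_month - (n - 1)           # measured from last sample

print(round(slope, 2))                                # growth in points per month
print(round(months_remaining, 1))                     # runway before the threshold
```

With these sample inputs the trend grows about 2.46 percentage points per month, leaving roughly 10.5 months before the 80% mark, which is exactly the kind of lead time that turns an emergency upgrade into a planned one.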

Finally, to justify your investment you’ll need to estimate some key current factors: the cost in lost business or productivity your organization suffers (or could suffer) when applications are not available or are degraded (often in $ per hour), and the current frequency and average length of outages and degradations. If you have no idea of the cost of downtime for a critical application, you can start with common industry benchmarks. EMA estimates average enterprise downtime to be $50,000/hour. Your total projected benefits will come from estimating reduced MTTR and extended MTBF multiplied by the cost of downtime for your shop.

The Price of Passage

Now let’s look at the other side of the equation – the cost of the technology solution you are considering. Make sure you include all potential costs when you build this side of the business case. You’ll need to account for software licenses, hardware platforms, maintenance and support subscriptions, training, and the cost of deployment including required services or consulting. Look for opportunities to save with quicker deployment, shorter learning curves, and pre-integrated solutions that avert custom implementations.

Closing the Case

Now that we have the two sides of the equation, cost and benefit, all that’s left is doing the math. We mentioned earlier that timeframe is important, and here is where it comes into play. Organizations use a wide variety of metrics, but some of the most common are Return on Investment (ROI), Internal Rate of Return (IRR), Time to Payback, and Total Cost of Operations (TCO). Also consider that if your purchase will be classified as a capital expenditure, you may not need to account for all of the upfront equipment or license costs in the first year, since multi-year depreciation schedules spread capital costs over a period of time, typically three or five years. If you don’t want to get into the complex accounting, you can get a quick idea of the potential payback by simply dividing the three-year total benefits by the three-year total costs. If the ratio is greater than one, the investment is not costing the organization money. If it is greater than three, your payback is less than one year; if it is greater than six, your payback is less than six months.
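The shortcut above can be expressed directly. The benefit and cost figures below are illustrative placeholders, not vendor pricing, and the rule-of-thumb thresholds assume benefits accrue roughly evenly over the three-year window.

```python
# Quick payback sketch: divide three-year total benefits by three-year total costs.
three_year_benefits = 1_500_000   # placeholder: estimated avoided downtime cost
three_year_costs = 400_000        # placeholder: licenses, hardware, support, training, deployment

ratio = three_year_benefits / three_year_costs
print(f"Benefit/cost ratio: {ratio:.2f}")

# Rule of thumb from the text above.
if ratio > 6:
    verdict = "payback in under six months"
elif ratio > 3:
    verdict = "payback in under one year"
elif ratio > 1:
    verdict = "pays for itself over three years"
else:
    verdict = "costs exceed benefits"
print(verdict)
```

Here the ratio works out to 3.75, putting payback inside one year under the stated assumptions.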

EMA Perspective

EMA has long been an advocate for integration of management disciplines, tools, and practices, which in the case of infrastructure performance management means becoming application-aware by moving your focus up the stack. There are many options for achieving such awareness, and it is essential to build your business case in order to secure funding and support from the broader organization.

EMA has reviewed the Network Instruments solution and believes its product features and capabilities align well with the savings objectives outlined above. For example, when considering MTTR reduction, the deep real-time and historical visibility provided by their Observer® and GigaStor™ instrumentation products brings application awareness via packet-based monitoring across a broad range of domains including voice, video, wireless, LAN, and WAN. Efficient top-down troubleshooting workflows via tight integration between Link Analyst®, Observer, GigaStor, and the Observer Reporting Server shorten isolation time and improve collaboration. And perceptive analysis and presentation features such as Application Transaction Analysis, Conversation Analytics, Expert Analysis, and Stream Reconstruction accelerate the data interpretation process. Relevant to extending MTBF, Observer Reporting Server and its NetLive real-time NOC reporting provide the current information needed to act quickly on early problem indicators, as well as the detailed long-term reports essential for well-informed capacity planning and asset optimization. Finally, Network Instruments solutions scale well to managed environments of all sizes, in terms of both entry cost and capacity for growth.

On the cost side of the equation, products like the Network Instruments suite, which deploy rapidly with few or no professional services required and have short learning curves thanks to intuitive graphical user interfaces, help keep costs down. Further, the products come as tightly integrated, bundled appliances, which can substantially reduce total solution cost and deployment resource demands.

In conclusion, given its breadth of capabilities, tight cross-product integration, and compelling cost/benefit balance, Network Instruments should be on your short list as you build your ROI business case for performance management.

Thanks to Network Instruments for the article.

Don’t Deprive Your Mobile Workers Of UC

The bring your own device (BYOD) movement is evolving rapidly, and companies that neglect to optimize their enterprise mobility strategies may fail to keep pace with forward-thinking competitors. The most effective BYOD programs provide employees with the tools required to enhance productivity, improve communications and collaborate with coworkers in disparate areas. Companies can benefit tremendously by enabling a mobile workforce, but they must also adapt their strategies to accommodate BYOD participants’ needs. The adoption of a unified communications program is a good place to start.

Decision-makers are realizing the need for UC

It seems that many IT leaders are now recognizing the importance of a UC solution, albeit slowly. A new Evolve IP study, which surveyed 974 IT and executive decision-makers, found that 84 percent of organizations that do not currently have a UC strategy are considering or planning to implement one within the next one to three years. The study also examined the link between UC and BYOD, finding that 60 percent of companies that are leveraging UC services also have a work-from-home program.

Of the various UC services available, video conferencing seems to be a particularly hot commodity. The study revealed that 72 percent of organizations are using some form of video, whether it be a large conferencing system or a one-on-one desktop solution. Additionally, audio and web conferencing was found to be the most requested UC feature, with unified messaging and instant messaging and presence ranking second and third, respectively.

Meanwhile, a recent “Technology Trends 2014” study from Computer Economics identified UC as one of six “low-risk, high-reward” enterprise technologies, according to Redmond Magazine. Computer Economics’ John Longwell asserted that UC is becoming a more prominent feature in companies’ infrastructures. This should come as no surprise given the growth of BYOD and the need for firms to maintain control over enterprise mobility.

Why your BYOD program needs UC

By adopting UC solutions, organizations can mitigate many of the risks associated with BYOD. For instance, UC services such as VoIP help businesses ensure that their ever-expanding mobile workforce stays connected. With features like voicemail-to-email, employees can consolidate multiple communications channels into one cohesive interface. A comprehensive UC suite allows businesses to improve response times, increase agility, and maximize the benefits of their BYOD programs.

Thanks to TEO Technologies for the article.