8 Application Troubleshooting Timesavers

How often have you been told the network is at fault, when your gut tells you it’s an application issue? The primary challenge is determining where to look. If this sounds all too familiar, you’re in good company.

In the recently released Sixth Annual State of the Network Global Study, 68% of network pros identified isolating the problem origin as the toughest challenge when troubleshooting applications. Additionally, application problems eat up a lot of time. According to IT analyst firm TRAC Research, teams spent on average 46.2 hours per month in war-room meetings getting to the bottom of application outages.

In this article, we’ll explore eight critical capabilities you need in your performance monitoring solution to stay on top of application performance challenges, improve troubleshooting accuracy, and reclaim lost time in your day. The sections below look at each monitoring component’s function and benefits.

1) View service delivery enterprise-wide

Quickly evaluate overall performance


  • This is critical for assessing performance across an organization, business units, and locations.


  • Dashboard views provide an aggregated and immediate understanding of the health of services alongside underlying applications, network, and infrastructure components.
  • Couple with logical workflows to determine problem scope and severity and to perform root cause analysis.
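To make the aggregation idea concrete, a dashboard’s service health can be modeled as a worst-of rollup across underlying components. The Python below is a minimal sketch, not product behavior; the three status levels and component names are assumptions.

```python
# Illustrative worst-of rollup: a service is only as healthy as its least
# healthy component. Status names and the three-level model are assumptions.
SEVERITY = {"ok": 0, "degraded": 1, "critical": 2}

def service_status(component_statuses):
    """Return the worst status across a service's components."""
    return max(component_statuses, key=lambda s: SEVERITY[s])

components = {"app": "ok", "network": "degraded", "infrastructure": "ok"}
overall = service_status(components.values())
```

A degraded network component surfaces immediately at the service level, which is what lets a dashboard direct you to problem scope before you drill down.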


2) Baseline your unique network environment

Utilize baselines to set alerts for your network


  • By establishing what’s typical for your network today, you can determine what is atypical in the future.
  • Spot issues before they become serious and triage response based upon the severity of the deviation.


  • Baseline service delivery components to spot when conditions arise that might be detrimental to overall application performance.
  • Track any metrics seen by the end user, such as propagation delay, response times, and errors.
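The core of baselining is simple statistics: learn what “typical” looks like, then flag deviations. A minimal sketch in Python, assuming response-time samples in milliseconds and a three-sigma threshold (a real tool would baseline per time-of-day and per segment):

```python
# Sketch: baseline a metric from history, then flag atypical new samples.
# The 3-sigma threshold and sample values are illustrative assumptions.
from statistics import mean, stdev

def baseline(samples):
    """Return the mean and standard deviation of historical samples."""
    return mean(samples), stdev(samples)

def is_atypical(value, mu, sigma, n_sigmas=3):
    """True when a sample deviates beyond n_sigmas from the baseline."""
    return abs(value - mu) > n_sigmas * sigma

history = [102, 98, 101, 99, 100, 103, 97, 100]  # ms response times
mu, sigma = baseline(history)
alert = is_atypical(250, mu, sigma)    # a 250 ms spike
normal = is_atypical(101, mu, sigma)   # ordinary variation
```

Severity of the deviation (how many sigmas out) is what lets you triage response rather than treat every blip equally.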


3) Understand application connections via dependency maps

Which apps are fighting for resources?

Identify all components associated with app delivery


  • Learn how network devices are connected and where to troubleshoot when issues occur.
  • Ensure all application components are successfully migrated when moving to new environments (e.g. virtual, cloud, or a new datacenter).


  • Automate discovery and mapping of application interdependencies based on communications data and application tier-to-tier interactions.
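Dependency mapping falls out of grouping who talks to whom. As a rough sketch, assuming flow records of (client, server, port) — hostnames and ports here are illustrative, not from any product:

```python
# Sketch: derive an application dependency map from observed communications.
# Each edge records a downstream dependency of the client host.
from collections import defaultdict

flows = [
    ("web-01", "app-01", 8080),
    ("web-02", "app-01", 8080),
    ("app-01", "db-01", 5432),
    ("app-01", "cache-01", 6379),
]

dependencies = defaultdict(set)
for client, server, port in flows:
    dependencies[client].add((server, port))
```

Walking the resulting graph from a web tier down to database and cache tiers is exactly the inventory you need before a migration, so nothing gets left behind.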



4) Integrate performance perspectives

Pinpoint problem source by network, system, or application


  • Gain quick root-cause analysis by having metrics for all potential problem causes at your fingertips.
  • Improve performance optimization by understanding how application, network, and infrastructure affect each other.


  • Establish aggregated views of application, network, and infrastructure via polling technologies and packet capture.
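Integrating perspectives amounts to joining per-host metrics from different collection methods into one view. A minimal sketch, assuming polled device stats and packet-derived stats keyed by hostname (metric names and values are illustrative):

```python
# Sketch: fuse polled metrics (e.g. device CPU) with packet-derived metrics
# (e.g. TCP retransmission rate) into one aggregated per-host view.
polled = {"app-01": {"cpu_pct": 92}, "db-01": {"cpu_pct": 35}}
packets = {"app-01": {"retrans_pct": 4.1}, "db-01": {"retrans_pct": 0.2}}

aggregated = {
    host: {**polled.get(host, {}), **packets.get(host, {})}
    for host in set(polled) | set(packets)
}
```

Seeing high CPU and high retransmissions on the same host in one place is what turns “network vs. system” finger-pointing into a root-cause answer.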


5) Correlate performance variables


Understand relationships and identify root-cause at a glance


  • Eliminate finger-pointing between IT silos by ruling out potential problem causes.
  • Boost troubleshooting accuracy by viewing relationships between variables and isolating the actual cause—not the symptom.


  • View multiple variables in a single correlative graph, allowing relationships between various performance conditions to be easily identified.
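The math behind a correlative graph is straightforward: a correlation coefficient quantifies how tightly two performance series move together. A self-contained sketch with illustrative sample values:

```python
# Sketch: Pearson correlation between two performance series, the measure
# underlying an overlaid correlative graph. Sample data is illustrative.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

link_util = [30, 45, 60, 75, 90]        # % link utilization
resp_time = [100, 140, 185, 230, 275]   # ms application response time

r = pearson(link_util, resp_time)       # near 1.0: strongly related
```

A coefficient near 1.0 points at the variable driving the symptom; one near zero rules that cause out, which is the finger-pointing eliminator described above.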


6) Streamline troubleshooting with expert analytics

Easily locate point of communication breakdowns


  • Save time by automating exhaustive manual processes and surfacing potential solutions.


  • Provides immediate real-time and post-capture intelligence for TCP, UDP, UC, VoIP, and wireless traffic.
  • Uses graphical experts, such as MultiHop and Connection Dynamics, to automatically locate problem packets or communication failures.


7) Assess applications in-depth

Are users and apps getting along?

Investigate logically from high-level to the problematic transaction


  • It’s necessary to get to the bottom of an application issue and provide effective intelligence to the appropriate IT team.


  • Application-specific transaction, error, and request data let you know application conditions and where degradation is occurring.
  • View application error conditions alongside response time to understand what is causing an application to break down.
  • Generate application-specific reports meaningful to other IT teams.
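Drilling from high level to the problematic transaction is essentially an aggregation over per-transaction records. A sketch, assuming records of (request, response time in ms, error); the endpoint names and values are hypothetical:

```python
# Sketch: summarize transaction data to find where degradation occurs,
# viewing error counts alongside response times per request type.
from collections import defaultdict

transactions = [
    ("GET /login", 120, None),
    ("GET /login", 135, None),
    ("POST /checkout", 2400, "HTTP 500"),
    ("POST /checkout", 2150, "HTTP 500"),
    ("GET /search", 180, None),
]

summary = defaultdict(lambda: {"count": 0, "errors": 0, "total_ms": 0})
for request, ms, error in transactions:
    s = summary[request]
    s["count"] += 1
    s["total_ms"] += ms
    s["errors"] += 1 if error else 0

# Worst transaction by average response time.
worst = max(summary, key=lambda r: summary[r]["total_ms"] / summary[r]["count"])
```

A summary like this — slow checkouts with HTTP 500s, healthy logins — is the kind of application-specific report that is immediately meaningful to the app team.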


8) Proactively configure triggers and alerts


Configure alerts to stay ahead of problems


  • Intelligently configured alarms are key to getting out ahead of the curve and spotting issues before they impact users.


  • Customize alerts to your environment based upon benchmarks.
  • Trigger pre-defined actions to occur when network conditions are met, making management simpler and more predictable.
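Tying benchmarks to pre-defined actions can be sketched as a small rule engine. Everything below is illustrative — the metric names, thresholds, and action are placeholders, not product configuration:

```python
# Sketch: benchmark-based triggers that fire pre-defined actions when a
# condition is met. Rules and thresholds are illustrative assumptions.
actions_fired = []

def notify_team(metric, value):
    """Pre-defined action: record an alert for the on-call team."""
    actions_fired.append(f"alert: {metric}={value}")

rules = [
    ("link_utilization_pct", 80, notify_team),
    ("error_rate_pct", 5, notify_team),
]

def evaluate(metrics):
    """Fire each rule's action when its threshold is exceeded."""
    for metric, threshold, action in rules:
        if metrics.get(metric, 0) > threshold:
            action(metric, metrics[metric])

evaluate({"link_utilization_pct": 91, "error_rate_pct": 2})
```

Only the utilization rule fires here; alerting on deviations from your own benchmarks, rather than fixed vendor defaults, is what keeps the alarms meaningful.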


Use these guidelines as a starting point for mapping out and evaluating what performance monitoring solution will meet your network team’s needs and ensure you stay ahead of pressing application challenges. Your time is too valuable to spend in application outage meetings. Check out additional resources on monitoring application performance.

Thanks to Insight IT News & Views for the article.

Getting the Most Bang for Your Contact Centre Technology Buck

This one-hour presentation from CMIQ brings together four panellists to help you understand the pitfalls of implementing new technology today. When implementing technology, we are constantly asked to do more with less while maintaining or driving a better customer experience and increasing our return on investment.

The first challenge in implementing new technology is getting funding, especially when the center can be considered a cost center. We then have to pick the right technology, either a custom design for your center or a standard offering on the market. As we implement these different systems, it becomes difficult to pull reports on the center as a whole, since each system reports individually.

Our four panellists discuss these issues in implementing new technology, how you can avoid the common mistakes, and what you can learn from their experience.

Fed-up Telus takes government to court over spectrum transfer rules

Canada’s Telus has filed a Federal Court case against the government opposing the recent policy decisions restricting wireless spectrum licence transfers, Reuters reports. The operator says it is seeking clarity on the legality of last month’s rulings, which effectively blocked Telus’ attempted takeover of smaller cellco Mobilicity. The spectrum transfer policy aims to block any transactions which result in ‘undue spectrum concentration’, but Telus claims that it could result in foreign entrants gaining advantages over Canadian firms. Recent reports say that US wireless giant Verizon is negotiating the potential takeover of second-tier Canadian cellco Wind Mobile and is also looking at acquiring Mobilicity ahead of a nationwide 4G licence auction. Telus argues that when Mobilicity and Wind bought their spectrum, they (and other new entrants in the 2008 AWS licence auction) did so under the understanding that they could sell those licences to larger incumbents after five years, as per regulations. The new spectrum transfer policy, however, makes it unlikely that Telus or its national peers Rogers and Bell would be allowed to make such a purchase: the rules require a case-by-case examination, and Industry Canada, the ministry with final say on licensing, has explicitly stated that it does not support the resale of the AWS licences to the three incumbents.

Thanks to TeleGeography for the article.

Network Instruments’ survey shows growing adoption of trends, but concerns about visibility into the network remain.

Cloud, UC, BYOD Making Network Monitoring Difficult: Survey

Cloud computing, unified communications and BYOD promise to bring big benefits to organizations, from greater collaboration and productivity to improved efficiency and lower costs.

However, the trends, which are hitting the data center at the same time, also pose some significant challenges, not the least of which is gaining enough visibility into the networks to ensure that the IT staff can properly manage and secure them, according to a survey by Network Instruments.

“The technologies are kind of being forced on them,” Brad Reinboldt, senior product manager at Network Instruments, told eWEEK. “They need the technology,” but they also need the tools to manage and monitor it properly.

Among the findings in Network Instruments’ Sixth Annual State of the Network Global Study were that organizations are saying that bring-your-own-device (BYOD) technology is the most difficult to monitor, and that bandwidth demand will continue to spike as these new services and technologies are incorporated.

The survey by Network Instruments, which makes and sells network management solutions, was released July 23. The results were drawn from responses from 170 network engineers, IT directors and CIOs in a number of regions, including North America, Asia, Europe, Africa, Australia and South America.

For the various data center trends, the company found that most IT professionals understood the benefits cloud computing, BYOD, unified communications (UC) and faster bandwidth will bring to their companies, but also worried about managing and securing the company’s data.

For many businesses, UC is quickly moving beyond voice over IP (VOIP) and into new areas, including videoconferencing, Web-based collaboration and messaging. VOIP deployments are staying around 70 percent, but 62 percent of respondents said they have deployed videoconferencing, and more than 60 percent have deployed instant messaging. Adoption of videoconferencing and instant messaging both grew more than 35 percent over the last four years, and more than half of organizations this year have deployed Web collaboration applications, such as Cisco Systems’ WebEx.

“Traditionally, UC was very focused on the voice aspect,” Charles Thompson, director of product strategy at Network Instruments, said in an interview with eWEEK. “We’re really seeing people adopting more than just voice.”

That’s bringing with it some monitoring problems, Thompson said. More than two-thirds of the respondents said their biggest challenge is gaining visibility into the user experience, and UC tools won’t be utilized to their full potential if users are reluctant to use them because of latency or jitter problems with the video, for example, he said.

Respondents also said they were concerned about the difficulties assessing bandwidth used by UC programs and the inability to view communications at the edge of the network.

In last year’s survey, 60 percent said their organizations had adopted cloud computing. That number jumped to 70 percent this year, with 39 percent having deployed private clouds and another 14 percent leveraging external private cloud services, such as Amazon Virtual Private Cloud, Savvis Symphony Dedicated and Citrix Systems’ Cloud.com.

Most organizations expect that about half of their applications will be in the cloud within the next 12 months, with the top cloud services being email at 59 percent, Web hosting at 48 percent, storage (45 percent), and testing and development (41 percent).

Twenty-three percent of respondents said they had moved VOIP into the cloud, though only 16 percent had migrated complex services, such as enterprise resource management, in that direction.

Data security remains the top concern about the cloud, with 80 percent calling it the number-one worry. Other concerns include compliance challenges, the lack of ability to monitor the user’s experience and to assess the impact cloud is having on network bandwidth. However, 43 percent said the availability of applications in the cloud had improved, and 37 percent said the end-user experience in moving to the cloud also improved.

The adoption of 10 Gigabit Ethernet in the data center is rising rapidly, with 77 percent of respondents saying they will use the technology within the next 12 months, a growth of 52 percent over the last four years. Twenty percent said they will migrate to 40GbE within the next year.

Businesses are anxious to get to 40GbE to help ease bandwidth issues caused by such trends as UC, BYOD and cloud, Network Instruments’ Reinboldt said. “There’s just too much data,” he said. “There’s so much pushing through the pipe … they can’t wait anymore.”

With applications and networks growing in complexity, resolving problems increasingly becomes an issue. The biggest concern in this area was the inability to identify the source of the problem, according to 70 percent of respondents. Another third said they were still having trouble with bandwidth, according to the survey. 

Thanks to eWEEK for the article.

Bell Aliant expanding FTTH to NorthernTel footprint

Bell Aliant has announced the expansion of its fibre-to-the-home (FTTH) network to additional locations in northern Ontario, having previously covered only one market in the province, Sudbury (the largest settlement in the telco’s Ontario telecoms service footprint) alongside its core Atlantic Canada provinces. Yesterday, the telco announced it is investing CAD7 million (USD6.8 million) to expand its ‘FibreOP’ FTTH service to Timmins, the largest market in group subsidiary NorthernTel’s northern Ontario telecoms footprint, where the direct fibre access service will cover approximately 15,000 premises and will be branded ‘NorthernTel FibreOP’, according to a press release. Karen Sheriff, CEO of Bell Aliant, noted that the extensive aerial infrastructure in Timmins makes the location ideal (i.e. cheaper to roll out) for FTTH technology. The latest announcement follows two other launches in northern Ontario in the last couple of months, Aliant having expanded FTTH to North Bay (around 20,000 premises to be covered, costing around CAD13 million) and Sault Ste. Marie (approximately 24,000 premises, investing CAD15 million). Through its strategic partnership with Bell Canada, FibreOP products in North Bay and Sault Ste. Marie are marketed under the Bell brand, including FTTH services sold as ‘Bell FibreOP’. Meanwhile, FTTH coverage in the greater Sudbury area has reached more than 33,000 premises. FibreOP passed 679,000 premises in around 40 communities in Atlantic Canada and northern Ontario by the end of March 2013, while Aliant’s goal is to reach 800,000 FTTH premises passed by the end of 2013.

Thanks to TeleGeography for the article.

Observer 16.1: Top Five UC and App Monitoring Features

Top 5 Observer 16.1 Features

Since David Letterman already has his Top 10, we’re going with our Top Five to outline the best features from Observer® 16.1. The new version is packed with features to help streamline the process of managing and troubleshooting complex applications, including VoIP and videoconferencing. New correlation powers provide an immediate understanding of service and infrastructure relationships, while global search quickly gathers and presents relevant details about a user, server, or application for immediate action. In addition, we’ll dive into the things you’ve come to expect with releases – expanded application transaction details and unified communications (UC) support. There are other hidden gems for our security-concerned and mobile carrier customers, but that’s another story.

5. VoIP & Video Extract

Save hours of troubleshooting via automated call location and extraction
Assembling VoIP and videoconferencing calls and pinpointing the cause of degraded performance can be very time-consuming. VoIP and video extract in GigaStor™ automate this. Rather than manually filtering and reconstructing call elements, enter basic information such as the phone number, Caller ID, or name, and let GigaStor do the rest.

4. New T.38 Support

Complete UC monitoring with added tracking for Fax over IP
In expanding support for emerging UC services, we added in-depth monitoring of Fax over IP (T.38). Assess fax submission success or failure, and verify details including page count.

3. Enhanced Performance Correlations

Assess relationships between multiple performance variables for problem resolution
Performance correlations combine metrics from the many performance data sources within Observer Reporting Server (ORS) onto a single graph, giving engineers a holistic view of performance and an at-a-glance assessment of variable relationships. Understanding these relationships can be critical for ruling out problem causes, fine-tuning performance, and justifying future IT investments.

2. Expanded Application Transaction Support

Validate your AAA secure access protocols
We continue to expand application transaction analysis in Observer. Now obtain detailed analysis for two important secure access protocols, RADIUS and Diameter. Confirm access responses, validate application status, and review error messages.

1. Global Search

Quickly find and isolate relevant analysis
When you open a web browser, you typically begin with a search. We’ve brought this same capability to ORS. When investigating a network fire, you often know the user, service, or server being impacted. With global search, quickly scour across multiple sources of performance health, and retrieve a view of the health, performance details, and indicators tied to that device, user, or application.

Hidden Gems of Observer 16.1

While you have already read about the application and UC monitoring features in the release, there are also many “hidden Observer gems.” In this article, we’ll focus on uncovering a few.

Removing the Guesswork from LTE Monitoring

For our carrier customers implementing 4G LTE, obtaining visibility and in-depth analysis into subscriber experience on new IP-based infrastructure can be difficult. Using subscriber extract, the Observer platform takes the guesswork out of tracking and validating user experience. Subscriber extract tracks, reconstructs, and presents all relevant details: subscriber identifier information plus associated conversations and activities, ready for investigation by the carrier IT team. In addition, Observer now fully supports VoLTE, ensuring comprehensive visibility to validate and troubleshoot voice, video, and data delivery across 4G networks.

Expanded Hardware Security Module (HSM) Support

For users concerned about FIPS and Payment Card Industry (PCI) compliant performance monitoring, Network Instruments Management System (NIMS)™ integrates with PKCS #11, providing expanded support for HSMs. When users need to decrypt Secure Sockets Layer (SSL) data, NIMS interfaces with the HSM via the PKCS API to obtain the SSL certificate info necessary to perform the decryption. This allows the user to analyze the data and effectively troubleshoot potential issues, while maintaining the integrity of SSL encryption keys — and ensuring secure access to the content only when requested with detailed logging and role-based permissions in NIMS.

Utilize Network Packet Broker (NPB) Timestamps

If you use NPBs or matrix switches from leading manufacturers like cPacket Networks®, Gigamon®, Ixia® Anue™, and VSS Monitoring™, leverage their timestamps to assess time changes between when a packet enters and leaves the NPB.
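The arithmetic behind using NPB timestamps is a simple delta between the ingress and egress stamps on a packet. A sketch, assuming nanosecond hardware timestamps (the values below are illustrative):

```python
# Sketch: measure a packet's transit time through an NPB from the
# hardware timestamps stamped at ingress and egress. Values are illustrative.
ingress_ns = 1_690_000_000_000_000_000   # timestamp when packet enters the NPB
egress_ns  = 1_690_000_000_000_004_500   # timestamp when packet leaves the NPB

transit_us = (egress_ns - ingress_ns) / 1_000   # transit time in microseconds
```

A consistently large delta points at delay introduced inside the monitoring fabric itself rather than on the production network.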

Thanks to Network Instruments for the article.

Ottawa Regional Contact Centre Association Call for Nominations

ORCCA events banner

Ottawa Regional Contact Centre Association


We’re planning an extra special event this year for our 10th Anniversary! We’ll be celebrating in style at the new Ottawa Convention Centre on September 12th.

Call for Nominations

The Ottawa Regional Contact Centre Awards will publicly recognize Contact Centre Agents, Support Associates, Managers and Contact Centres who consistently deliver quality service to their internal and external customers.

A candidate nominated for a Contact Centre Award for Excellence acts as an ambassador who fulfills the vision of performance excellence in the eyes of the customer, the organization, and other team members. To be considered for this prestigious award, nominees will be required to qualify in four categories pertinent to their role.

Nominating is easy; just fill out the form linked below:

* 2013 ORCCA Award for AGENT Excellence

* 2013 ORCCA Award for SUPPORT Excellence

* 2013 ORCCA Award for MANAGER Excellence

* 2013 ORCCA CONTACT CENTRE of the Year – Intent to Nominate

Rogers offers nationwide fixed line replacement service

Rogers Communications has launched a home and small business phone solution that operates on its national cellular network. The Rogers Wireless Home Phone service for consumers and the Rogers Wireless Business Phone service are available in regions across Canada where Rogers’ residential (cable network-based) and business landline (cable and some PSTN) services are not available. Pitching the new wireless telephone service as a low-budget option to replace fixed line services provided by other operators in areas outside its fixed telephony footprint, Rogers has set an ‘introductory’ price of CAD9.99 (USD9.68) per month for new and existing Rogers wireless customers. Those signing up can use any fixed line telephone, although they need to purchase an adapter device (incorporating a SIM card) for a one-off fee of CAD29.99 to connect calls over the Rogers wireless network. Subscribers can also keep an existing phone number. The service includes unlimited Canada-wide calling, voicemail and caller ID, while customers can also subscribe to additional value-added services (VAS) including call waiting, conference calling, call forwarding, and international long-distance packages. The basic monthly tariff for non-Rogers wireless customers is CAD24.99 per month.

Thanks to TeleGeography for the article.

New Candela LANforge FIRE & ICE: 5.2.9 is released

Candela LANforge Fire Candela LANforge ICE

Release 5.2.9 includes WiFi performance improvements, new GUI scripting features, a new graphical LANforge configuration tool for Windows, IPv6 support on newer versions of Windows and more. The Live DVD image is now based on Ubuntu 12.10 and the 3.9 kernel. See the Release Notes for more details.

HeartBeat™- IVR Monitoring Service from IQ Services

HeartBeat™ Availability and Performance Monitoring for Voice Communications Solutions

Contact center and communications solutions offer companies a significant opportunity to control costs and improve customer satisfaction if maintained properly. If solution performance is not maintained after implementation, the opportunity becomes a risk and eventually customers are negatively impacted. With HeartBeat™ availability and performance monitoring, you validate the performance of your end-to-end contact center and communications solutions to assure the best possible experience for your customers and the best possible efficiencies and savings for your company.

IQ Services’ HeartBeat™ availability and performance monitoring goes beyond the traditional perspective of internal monitoring by providing detailed, actionable information from the outside-in, or customer, perspective – which is ultimately the perspective that matters most. Because HeartBeat™ calls are generated remotely and interact with your contact center and communications solutions just like real customers, you know what your customers are experiencing around the clock. You decide when and how to fix issues and optimize performance. Every component of your solution that is normally exercised by a customer can be monitored with HeartBeat™ availability and performance monitoring. If the host is down, there is an issue with your toll-free number provider, or your system is dropping calls after playing just three seconds of the initial greeting, you’ll find out right away.

Instead of receiving anecdotal complaints or unsubstantiated claims from irritated customers and colleagues, HeartBeat™ availability and performance monitoring is configured to notify you and your team immediately if anything unexpected happens during a HeartBeat™ call. You also have access to robust, actionable data for each call – whether it goes as planned or not – including recordings, response times and results to help you identify and resolve issues as quickly as possible and to provide insight into ways you might optimize performance over time.

HeartBeat™ availability and performance monitoring is a 24×7 service. HeartBeat™ calls are generated at a specified hourly rate (e.g., 2 calls per hour, 12 calls per hour, etc.) and exercise your solution just like real calls traversing the PSTN by making appropriate DTMF and spoken inputs. IQ Services’ patented technology and proven methodologies are used to verify step responses, measure response times and capture other actionable data from each HeartBeat™ call so you know your solution is performing as expected 24×7.

The results of HeartBeat™ availability and performance monitoring give you confidence in your solutions’ performance and help you answer lingering questions, including but not limited to:

  • Are your customers able to access your solution RIGHT NOW?
  • Do customer calls successfully get through the public telephone network?
  • Are the calls being properly handled by your contact center solution?
  • Do response times at key steps in the calling process meet your requirements for end-user customer experience?
  • Does your contact center solution perform the same at all times of the day?
  • Has something changed in the solution or production environment that has not been communicated or evaluated for impact on your customers?
  • Do any trends in system performance indicate it is time to tune the system configuration or upgrade capacity?

IQ Services’ techniques for monitoring contact center and communications solutions produce unmatched accuracy — and provide you with timely results:

  • IQ Services’ patented test process, Audio Time Analysis, is proven to be extremely accurate in determining proper operation and identifying specific error conditions. Such precision is especially important in testing speech recognition systems. With these systems, it is crucial to know that each spoken phrase was correctly interpreted before the target system moves to the next step in the test sequence. Audio Time Analysis is so accurate it can usually tell the difference between two recordings of the same phrase spoken by the same person.
  • IQ Services’ patented Screen Pop Testing methodology turns the tables on usual testing and monitoring procedures to cost-effectively validate CTI and screen pop functionality at the desktop as well as to obtain user experience results.
  • IQ Services’ immediate notification process ensures your support team is made aware of issues within minutes of detection.
  • IQ Services’ secure, online monitoring results application – MonitorControl.net – provides timely monitoring results including access to recordings, response times, self-service controls and more.

Thanks to IQ Services for the article.