Webinar: Maximize Your ROI Using StableNet® in a Managed Services Environment

Today’s network management systems landscape is constantly transforming. However, ROI can be achieved by implementing StableNet®, enabling element management consolidation, optimization of hosting environments, greater service management control, and many other benefits. Join David Poulton, COO of Infosim UK, for this webinar and see how you can maximize your managed services ROI using Infosim StableNet®.

15 Reasons Why You Need APM in 2014

Application Performance Management (APM) is the must-have technology of 2014. Even if you thought your company could get away without at least some sort of APM technology in the past, we are entering a new era of IT where that is no longer possible.

Today, there is a convergence of factors coming together like a perfect storm, making APM more important than ever. It is not just a matter of this factor or that factor, but a whole range of factors that are transforming IT, transforming APM, and transforming the world as we know it.

“With today’s rapid technological change, poor service delivery is too common – 75% of IT organizations are suffering from degraded business applications, according to IDG Research Services. According to Gartner, 70% of the time, IT organizations learn about performance problems from end-users. 31% of performance issues take more than a month to resolve or are never resolved, according to Forrester. Why? Applications are complicated,” explains Dimitri Vlachos, VP of Marketing and Products at Riverbed. “Each evolution of technology continues to add another level of complexity. This complexity knows no boundaries, and more complicated problems will occur – without a doubt, increasing complexity is among the biggest factors making APM necessary for your survival today.”

“It’s all about complexity,” IBM’s Jim Young agrees. “Multiple innovations and trends in SaaS, Cloud, virtualization, BYOD, etc. are converging to create the greatest challenge facing application administrators today – unfathomable complexity.”

“It seems trite to say IT environments are getting more complex, but the reality is that they truly are,” adds Matthew Selheimer, VP of Marketing, ITinvolve. “Delivering an application for the business can now stretch across mobile, cloud, and legacy systems with hundreds of dependencies between them along with numerous policies and regulations that govern them. No matter how complex these environments get, the business doesn’t care and simply expects they will perform whenever and wherever needed with a user experience that at least matches what they see in their personal lives. In this world we now operate in, APM is even more critical to the success of your business than ever.”

On this list of 15 Reasons Why You Need APM in 2014, industry experts – from analysts and consultants to users and the top vendors – show just how many factors are coming together this year to make APM an absolute necessity.

In terms of the solution, our experts all agree that APM is critical. Some experts add that capabilities complementary to APM, such as End User Experience Management (EUEM) and IT Operations Analytics (ITOA), are also essential.

These reasons are not in order of priority. Any one of the challenges listed below makes APM a necessity. But the onslaught of all these factors together in 2014 just might make APM a key to survival in today’s IT-centric world.


We have entered the age of the customer, a 20-year business cycle in which the most successful enterprises will reinvent themselves to systematically understand and serve increasingly powerful customers. This era is fueled by innovations in mobile, social and other technologies, which organizations must embrace in order to engage and delight customers. Central to this, applications that engage and delight customers are a key catalyst to commercial success. This means that in 2014, APM solutions will become more important than ever before, as it’s not just about internal application monitoring but the availability, performance and experience management of external, customer applications.
John Rakowski
Analyst, Infrastructure and Operations, Forrester Research

Forrester: “Age of the Customer” Defines Business for Next 20 Years

User experience is dramatically impacted by the performance of the physical, virtual or mobile device used to access apps, the latency associated with different locations, remote display protocols, the impact concurrently executing apps have on each other, the client-side app execution time, and by user behavior itself, for example. This user-centric approach enables enterprises to see exactly what their end users see in order to address their business-critical enterprise initiatives such as Mobility and BYOD, SLA and Change Management, VDI Migrations, Staff Capacity Management, and many others. This is why End User Experience Management matters and why it is a critical complementary technology to APM.
Trevor Matz
President and CEO, Aternity


Businesses are more reliant today on e-commerce and the web than ever before, and web and mobile customers are hard to please. They will judge performance against the best experiences they have had and expectations today are high. The smallest delay in response on a website or mobile app can lose that user. APM is therefore essential to both pre-empt problems on the server side before they impact end users, and end user experience monitoring on the client side to gain an accurate view of user experience.
Michael Azoff
Principal Analyst, Ovum
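One common way to quantify the “hard to please” point above is the Apdex score, which buckets measured response times into satisfied (≤ T), tolerating (≤ 4T), and frustrated. A minimal sketch in Python; the 0.5-second target threshold and the sample data are illustrative assumptions, not figures from the article:

```python
def apdex(response_times, t=0.5):
    """Apdex score: (satisfied + tolerating/2) / total.

    satisfied: response <= t; tolerating: t < response <= 4t;
    everything slower counts as frustrated.
    """
    satisfied = sum(1 for r in response_times if r <= t)
    tolerating = sum(1 for r in response_times if t < r <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times)

# Illustrative samples in seconds; real values would come from
# client-side (end user experience) probes.
samples = [0.2, 0.4, 0.6, 1.1, 2.5]
score = apdex(samples)  # 2 satisfied, 2 tolerating, 1 frustrated
print(f"Apdex: {score:.2f}")  # → Apdex: 0.60
```

A score near 1.0 means users are satisfied; dips below roughly 0.85 are where the “smallest delay” starts costing users.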

The continual consumerization of IT technologies is creating the need to understand user experience and behavior, leading to integration of analytics technologies to the APM stack.
Jonah Kowall
Research Director in Gartner’s IT Operations Research Group

As more apps/data/transactions move to the Web, uptime is paramount. This has become business critical as C-level executives expect IT to evolve from being a cost center to a business enabler more aligned with overall corporate strategy and objectives. APM provides value to customers through advanced analytics, helping them gain real-time, actionable intelligence from their operational data to assure service delivery and user experience in a cloud and mobile-centric computing environment.
Gabe Lowy
Technology Analyst, Tech-Tonics

A recent Quocirca research report, In Demand: The Culture of Online Service Provision, showed that two thirds of organizations are transacting with external users on a regular basis. This includes consumers and users from business customers and partners. The applications that drive these transactions sit at the core of business processes and income streams. The research shows the extent to which these organizations are investing in advanced network and application support technology as well as using flexible infrastructure, including virtualization and on-demand services, to ensure adaptiveness. All this would be to no avail if they were not able to maintain an overall view of the end-to-end user experience and ensure that these business critical applications continue to deliver day-in, day-out.
Bob Tarzey
Analyst and Director, Quocirca

In 2014, the successful delivery of applications that directly win, serve and retain customers will increasingly be a catalyst for commercial success but also, more importantly, will promise to enrich the brand. In 2014, APM solutions will be required in order to unify the views and information regarding the health of applications to both line of business and IT support teams. APM solutions deployed in the right way can help to promote collaboration by presenting information in context so that performance and availability insights can be made in order to avert application issues from directly impacting the revenue generating customer.
John Rakowski
Analyst, Infrastructure and Operations, Forrester Research


With requirements for higher levels of automated insight driven from the growing need to integrate agile and DevOps into a fully cross-IT initiative, APM is a great investment especially when it has hooks. APM plus service modeling with hooks into a change impact and change management system can help to drive new levels of automation across your entire IT organization, from service desk, to development to operations. It can be the spark that kindles the fire of real-world success in the age of agile.
Dennis Drogseth
VP of Research, Enterprise Management Associates (EMA)

You don’t need APM in 2014 – you need it in 2014 and beyond, and let’s face it, we needed APM 5 years ago. I think the real question is “how does APM need to evolve” in 2014. And I think there are two parts to the answer. In 2014, APM needs to evolve to serve the DevOps movement which is the unification of traditional application and IT operations silos. Some APM vendors have taken steps in that direction, but in reality, many APM solutions don’t give enough visibility across all infrastructure silos – which is actually what Gartner expects that IT Operations Analytics will do for APM users. Secondly, I think APM needs to evolve to allow users to relate to the business impact that application performance has. If the app is slow, it’s much worse if sales are dropping as well. At Netuitive, we expect some combination of APM and IT Operations Analytics to finally bring together a view across application, infrastructure AND business silos.
Graham Gillen
Director of Product Marketing, Netuitive


In 2014, APM will be a required foundation for accelerating application innovation through tighter business-IT alignment. To date, there has been an ongoing struggle between business teams, who want to bring innovative products and services to market more quickly, and IT teams who must support these rapidly changing services. Business teams realize that if they cannot speed up the delivery of quality applications to end users, the competition will. But IT teams have not been able to keep up. Without performance management to align business and IT teams during periods of accelerated change, organizations are only heaping on more problems. Only APM provides non-biased measurements of success or failure, which can speed up the adoption of new technologies into the environment. APM is invaluable in helping IT teams to ingest change more quickly, which in turn helps organizations overall to deliver greater innovation faster and enhance competitive edge.
Stephen Wilson
Director of Solutions Marketing, Compuware APM


In 2014 APM can help IT “get more done with less” resources. Analyst research on IT budgets has not shown significant increases in headcount allocation for 2014, yet IT remains critical in ensuring application availability and performance. APM can help manage the growing workload and address the need to keep applications running and delivering services to customers. Today’s APM, using real-time analytics, will reduce the need for “eyes-on-screen” monitoring. By analyzing multiple event streams and presenting a correlated view of the situation applications are in, APM can reduce false alarms and drive to root cause more rapidly. APM’s ROI in 2014 will be improved application availability and performance – getting more done with less.
Charley Rich
VP Product Management and Marketing, Nastel Technologies

IT needs to do more with less, as scarce IT resources are asked to continue to provide high availability, improve efficiencies, and speed up time to market. This is where APM comes in, collecting and analyzing metrics that support an amplified feedback loop back to App Dev and Operations for performance improvements and stabilization. Following the classic management philosophy that what you want to improve you measure, and what you measure you improve, APM will do just that.
Larry Dragich
Director of Enterprise Application Services at the Auto Club Group and Founder of the APM Strategies Group on LinkedIn.

CIOs and CEOs will need APM in 2014 to further drive cost and inefficiency out of the business, while improving customer experience. Lean (too lean?) organizations risk reaching a negative tipping point with deeper staff reductions, jeopardizing business operations, client commitments and compliance mandates. Instead, millions in potential savings can be realized by shifting staff (as mentioned in 14 APM Predictions for 2014) to roles focused on improving application performance and stability, while simultaneously driving bottlenecks and delivery inconsistencies from the infrastructure. Just as a sales person has to deliver revenue exceeding her compensation and business costs, IT teams, using APM, can provide savings covering their costs, plus so much more.
Mike Cuppett
Business Systems Performance Consultant

The business needs to make sure that IT is aligned to its outcomes, and that it’s not the tail wagging the dog. IT needs to show value for money in its operations; the best way to do this is to keep employees productive by keeping applications running optimally. APM gives you this.
Zubair Aleem
Managing Director, APMSolutions

As part of a performance analytics and decision support (PADS) framework, APM helps enterprises and service providers drive customer ROI objectives of reducing cost, enhancing productivity and generating incremental revenue streams.
Gabe Lowy
Technology Analyst, Tech-Tonics


Implementing an effective APM platform will become even more critical in 2014 because of the rapidly expanding array of applications which organizations will incorporate into their day-to-day operations in the coming year. These applications will range from cloud-based mobile enterprise applications that support key business functions to software that captures data collected from remote sensors and controls the operation of remote devices associated with the new world of the “Internet of Things”. Monitoring and measuring application performance in a centralized fashion to optimize the responsiveness and effectiveness of this fragmented application ecosystem will be essential to success.
Jeffrey Kaplan
Managing Director of THINKstrategies and Founder of the Cloud Computing Showplace


Applications no longer live on an island – they are increasingly scattered, highly distributed, and depend on a wide range of components working together. Getting applications to run optimally is an extremely difficult task. Unfortunately, inefficiencies or delays can exist anywhere in the app or along the delivery chain, causing it to exhibit unique behaviors at various times and scales. APM can help identify the source of poor application performance, from inefficiencies in the internal components of your applications to how they consume system resources.
Dimitri Vlachos
VP Marketing and Products, Riverbed

As applications become more widely distributed, performance problems are likely to be more common, not less common, but you won’t hear that from your xAAS vendors. The one solution you still have at your disposal is APM, as APM solutions have changed with the times and allow you to take charge and monitor application performance regardless of who is running your application or where it is being housed.
Jay Botelho
Director of Product Management, WildPackets


There has been a ton of hype surrounding the cloud over the years, but this year we’ll finally reach a point where concrete cloud use cases will become a reality. In fact, a recent SolarWinds survey found that enterprise IT pros now consider cloud computing to be the number one most “disruptive” technology. In this case, “disruptive” means causing significant challenges for IT pros who are trying to navigate potentially huge changes in how they do their jobs. The cloud adds an entirely new computing paradigm IT admins have to manage, ranging from SaaS application management to Amazon Web Services to website hosting. Each of these types of cloud implementation demands a different set of tools and skills that can overwhelm an IT department’s staff and resources. However, the thing that remains constant is that the application survives; it just runs in a different way, on different resources or with different management options. By focusing on getting deep visibility into application performance that is then linked to the underlying infrastructure, IT professionals will be able to remain relevant wherever applications are running.
Michael Thompson
Principal, Product Marketing, SolarWinds

While APM has been a nice-to-have solution for on-premises applications, the rapid adoption of cloud services has made APM a necessity, as it’s easier than ever for customers to switch to a competitor’s offering. Monitoring applications that traverse both environments and troubleshooting their performance problems is an intricate challenge best solved by a modern APM solution that includes advanced analytics. The ability to automatically mine 100,000 simultaneous metrics in real time is key to quickly and accurately finding anomalies that may be indicators of impending issues.
Aruna Ravichandran
VP Product Marketing, CA Technologies
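Mining that many metrics in real time generally starts with a statistical baseline per metric. A minimal sketch of one common approach, a rolling z-score detector; this is an illustration only, not CA's actual analytics, and the window size, threshold, and latency readings are assumptions:

```python
from collections import deque
from math import sqrt

class RollingAnomalyDetector:
    """Flag samples more than `threshold` standard deviations away
    from the rolling mean of the last `window` samples."""

    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        anomalous = False
        if len(self.window) >= 10:  # need a minimal baseline first
            n = len(self.window)
            mean = sum(self.window) / n
            var = sum((x - mean) ** 2 for x in self.window) / n
            std = sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

detector = RollingAnomalyDetector(window=30, threshold=3.0)
# Steady latency around 100-104 ms, then a sudden spike to 400 ms.
readings = [100 + (i % 5) for i in range(30)] + [400]
flags = [detector.observe(r) for r in readings]
print(flags[-1])  # the spike is flagged; the steady readings are not
```

Production systems run one such baseline per metric (hence the scale problem), often with seasonality handling layered on top, but the core idea of deviation from a learned baseline is the same.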

Everything about business is becoming on-demand, from the infrastructure we use, to the software we run, to the teams who manage it all. Companies are working to make the shift, but in the process, realizing that these highly flexible “black box” environments abstract a lot of detail from the people who need to make sure they are performing as expected. If you can’t effectively diagnose the problem, how can you ever fix (or improve) it? Today’s APM-as-a-Service solutions address this challenge for the enterprise – enabling speed to resolution, high service levels for customers and optimal performance, regardless of whether the application is contained completely in the cloud, in the closet or some combination of the two.
Chad Bockius
CEO, CopperEgg

Cloud pricing models, and the resultant proliferation of what have previously been seen as large enterprise applications, mean that more and more small and medium enterprises are availing themselves of complex distributed solutions, but do not always have the resources required for their discovery, monitoring, and diagnostics. It is perhaps ironic that solutions to these issues can be found in the very technological advances that have contributed to them, in particular cloud-delivered APM SaaS.
Jim Young
Information Development Manager, IBM Cloud and Smarter Infrastructure

In 2014, APM can play a more powerful role than it has in the past. As applications continue to serve larger and more critical audiences, and deployment models move to where scalability and cloud converge, the integration of APM and cloud resource allocation will be an important factor. The intelligence collected by the APM solution can be used to drive, and ultimately automate, decisions about deployment patterns and optimization. This is where APM can not only inform us about application performance, but actually improve it.
Steve Rosenberg
GM, Performance Monitoring, Dell Software

The cloud offers a dynamic environment where you can spin up entire applications or just portions of applications. Cloud monitors can tell you how much cloud you are using, but you still need APM to tell you how your application is performing in and across the cloud by providing end-user experience monitoring and end-to-end diagnosis of problems.
Steve North
RTI Product Manager, OC Systems


Platform as a Service (PaaS) offers new ways to support and deliver applications by leveraging cloud technology. It still enables the same activities involved with development and deployment that we have always practiced in IT, but with the cost, agility and scalability benefits of the cloud. However, it’s important to remember that just because you are deploying your application in the cloud doesn’t mean you no longer have to worry about its performance. In fact, APM becomes much more important when it comes to PaaS because you have less control of the underlying infrastructure.
Karen Tegan Padir
CTO, Progress Software


In 2014, mission-critical applications like SAP, Oracle and Microsoft Dynamics will be virtualized at a faster pace than they have ever been. Along with the trend of virtualizing mission-critical applications comes the need for faster and more accurate diagnosis and troubleshooting, so administrators can quickly understand where the bottleneck lies – whether a slowdown is in the application or in the virtualization platform. Today, every organization is looking to do more with less, so IT performance management will continue to be measured by business value. We are seeing that organizations will need to fully align performance management solutions with their IT strategy. Performance management solutions that enable IT agility, virtualization, end-user satisfaction and operations efficiency will be in demand during 2014 and beyond.
Srinivas Ramanathan
CEO, eG Innovations


While cloud and virtualization have added layers of complexity to IT architectures, the rise of software-defined data centers in particular is going to be a game changer over the next year as major players continue to push forward with these initiatives. Without insight into these emerging, hyper-dynamic architectures, adoption is likely to be slower than it might otherwise, or be at considerable risk. As organizations explore the benefits of software-defined data centers, cross-tier and architectural performance monitoring solutions that deliver the scale and visibility required to manage these environments will be in high demand.
Erik Giesa
SVP of Marketing & Business Development, ExtraHop


Why do you need APM in 2014? Mobile apps! Today’s APM strategy primarily focuses on monitoring web application transactions. With the rapid growth of mobile devices, businesses are starting to provide mobile versions (apps) of these applications. The proliferation of mobile business apps will force IT teams to monitor these mobile apps for availability, response time, and performance. APM for mobile apps will need to present an end-to-end transaction view of the application, starting from the mobile device to the transaction being executed on the application infrastructure. Such mobile APM tools will cater not only to IT operations and DevOps teams but also to the mobile apps development team by helping to ensure that performance and response are built into the product from design.
Sridhar Iyengar
VP Product Management, ManageEngine

By the end of 2014 the number of mobile Internet users will surpass the number of desktop Internet users, which means it is more important than ever to have an APM solution that can provide an end-to-end perspective on application performance. The challenge of delivering a quality mobile experience lies in the sheer variety of mobile devices (both form factors and operating systems) and bandwidth speeds. In 2014, APM tools will be required to have first class support for both iOS and Android, providing performance monitoring, crash reporting, correlation to server-side requests, and audience demographics.
Dustin Whittle
Developer Evangelist, AppDynamics

According to new data from comScore, one-third of all online and app traffic at the top 10 retailers is now mobile. We expect to see this trend accelerate in 2014 not only for commerce, but for sites and apps in all industries. This means that the network will have increased impact on application performance. To fully gauge its impact and release well-performing applications, you need a robust preproduction APM solution complete with virtualized load, network and services. Otherwise, you risk jeopardizing not just the performance of one application, but the performance of your entire IT infrastructure, as poor mobile performance disproportionately impacts all end users.
Bill Varga
COO, Shunra

With the growing adoption of mobility, enterprises will be moving business-critical functions to mobile apps. This means IT teams will have to ensure flawless execution of these apps, just like they do for web applications today. The complexities of mobile apps, however, are more pronounced, as there are many devices and operating systems, varying carriers and networks, and different service and cloud APIs, which makes mobile performance management challenging. Mobile APM addresses these issues and enables businesses to deliver superior user experience, which ultimately translates to increased user engagement, employee productivity and increased bottom line.
Jeannie Liou
Marketing Manager, Crittercism

13. BYOD

The BYOD tsunami in 2013 made great demands of our legacy infrastructure and this is set to continue in 2014. SDN might ultimately provide the answers, but for now networks need to cope with a tripling of the end user community in terms of connected devices. Ensuring application performance across multiple concurrent devices demands insight and network emulation used as a constituent part of APM means reality can be tested then improved. My advice: Know what can go wrong and what works best before going live – use APM and emulation NOW!
Jim Swepson
Pre-sales Technologist, iTrinegy


A quick read through the top stories from this year’s Consumer Electronics Show (CES) highlights the fact that workers today are more connected than ever before, and the number of networked devices they’ll be bringing on premise is only going to climb in coming years. While a laptop and a smartphone used to encapsulate the network drawing power of most employees, today we are finding that the average worker is carrying with them more than four devices that will be tapping into your corporate network. Tablets, wearables, smart watches and other forms of technology are forcing companies to pay closer attention to the amount of bandwidth being consumed, making a dedicated APM solution a must-have. Employees have become accustomed to using their personal devices and working the way they live. Because there is no turning back from this trend, how organizations react to and manage this new reality will dictate whether they are successful.
Ennio Carboni
EVP Customer Solutions, Ipswitch

A number of new enterprise technologies have emerged that can put a tremendous strain on the existing network in terms of both bandwidth and performance, including enterprise-class Voice over IP (VoIP) telephony solutions, Virtual Desktop Infrastructure (VDI) solutions, and enterprise collaboration tools. As technologies such as software-defined networks (SDN) and hybrid private-public enterprise clouds become more prevalent, microbursts, timeouts and protocol errors are likely to become more rather than less pronounced, while the source will also be harder to determine. To combat these disruptions, it is imperative for IT departments to get a handle on what’s going on in their networks through dedicated “network visibility fabrics” that provide instrumentation at key points in the network, exposing the full set of network packets that underlie these causative issues.
Mike Heumann
Sr. Director, Marketing (Endace), Emulex


You might be tempted to think that a significant increase in network speed, like a 10x improvement going from a 1G to a 10G distribution layer, would reduce the need for APM. After all, your network is now much faster, so your applications should be performing much better, right? But faster networks don’t always translate to faster application performance. Faster networks are like a bigger closet – you’ll always fill the available space. And very often faster networks are quickly filled by new data types and applications, like VoIP and telepresence, which take precedence over “traditional” application traffic due to the real-time nature of the traffic. So if a network upgrade is being driven by a demand for real-time services, it’s quite likely that you’ll need to pay even more attention to APM on your faster network, not less.
Jay Botelho
Director of Product Management, WildPackets

Thanks to APM Digest for the article.

Power Over Ethernet, Explained

A guide to PoE and why it’s a smart choice for clocks and other small devices

You may already know all about Power over Ethernet (PoE) technology. You may be well-versed in the low power consumption, cost-effectiveness, and ease of installation for the hundreds of modern devices that utilize the technology, such as VoIP phones, webcams and other devices including Inova’s OnTime digital clocks, analog clocks, and OnTrack X Series digital displays. But for those not familiar with PoE (read “P-O-E,” not Poe as in Edgar Allan), allow me to review the basics.

What is Power over Ethernet, Anyway?

Power over Ethernet technology is a network standard that allows various devices, such as Voice over IP (VoIP) telephones, wireless LAN access points, clocks, and digital signs to receive both power and data over existing LAN cabling. In 2003, PoE became an international standard, called IEEE 802.3af, as an extension to existing Ethernet standards.

There is no need to modify your existing Ethernet switch equipment or cabling to support PoE. Simply add a midspan power injector in a switch room or endpoint to inject power into the twisted pair LAN cables. PoE is fully compatible with both powered and non-powered 10/100BaseT Ethernet devices, featuring a “discovery process” specifically designed to prevent damage to existing Ethernet equipment.


Since no AC outlets are needed to power devices, PoE offers significant time and installation cost savings. In fact, when Purdue University powered more than 1,000 access points with PoE in 2003, they reportedly saved between $350 and $1,000 per access point by eliminating labor costs from contracting an electrician to run wiring for new AC outlets. For a large project such as Purdue’s, the savings amount to hundreds of thousands of dollars or even millions. That’s savings you can take to the bank (and maybe even earn you a promotion, or at least a congratulatory pat on the back).
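The arithmetic behind that estimate is simple to check using the figures above:

```python
# Figures reported for the Purdue deployment.
access_points = 1000
savings_per_ap_low = 350    # USD, low end of the reported range
savings_per_ap_high = 1000  # USD, high end

low = access_points * savings_per_ap_low
high = access_points * savings_per_ap_high
print(f"Estimated savings: ${low:,} to ${high:,}")
# → Estimated savings: $350,000 to $1,000,000
```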

PoE switches also allow for uninterruptible power supply (UPS) backup. This means that PoE devices may continue to operate even throughout a power failure. The same cannot be said for old-fashioned serial-powered devices. In that case, if the power goes out…so do the devices (hope those devices weren’t too important to your organization, and you don’t mind manually resetting them one-by-one).

The IEEE 802.3af standard imposes a power limit of 15.4 watts, enough to operate small devices such as clocks and digital signs without sacrificing LED brightness or device quality. This low power restriction ensures that PoE devices are “green” and energy-efficient in a world moving quickly in that direction.
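When sizing a deployment, each device's draw must fit the 802.3af per-port limit, and the total must fit the switch's overall PoE budget. A minimal sketch; the device wattages and the 370 W switch budget are hypothetical, and only the 15.4 W per-port figure comes from the standard (of which roughly 12.95 W is guaranteed at the powered device after cable losses):

```python
AF_PORT_LIMIT_W = 15.4  # IEEE 802.3af per-port maximum at the PSE

def check_poe_budget(devices, switch_budget_w):
    """Verify each device fits an 802.3af port and the total draw
    fits the switch's overall PoE budget. `devices` maps name -> watts."""
    for name, watts in devices.items():
        if watts > AF_PORT_LIMIT_W:
            raise ValueError(f"{name} exceeds the 802.3af port limit")
    total = sum(devices.values())
    return total, total <= switch_budget_w

# Illustrative loads in watts; check your devices' datasheets.
devices = {"clock-lobby": 4.0, "clock-hall": 4.0,
           "voip-phone": 6.5, "access-point": 12.0}
total, ok = check_poe_budget(devices, switch_budget_w=370)
print(f"Total draw: {total} W, within budget: {ok}")
```

Low-wattage devices such as clocks leave most of a typical switch budget untouched, which is part of why they are such a natural fit for PoE.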

Overcoming Sticker Shock

A common misconception among PoE clock purchasers is that the clocks found in department stores for home use are the same clocks used in a large organization or school system. Therefore, when the novice shopper sees the price tag of a clock powered by Ethernet, they usually suffer from momentary sticker shock. “You want me to pay a few hundred dollars for a clock?”

The difference is that battery-operated clocks use outdated technology and would be an extremely inefficient choice for a large organization. Imagine having to set every clock individually, monitor each one for battery replacements, manually adjust for Daylight Saving Time and continuously check for synchronization. I’m exhausted just thinking about it.

Serial (AC-powered) clocks are hardly a better alternative. You’d need to install an AC outlet at every clock location, which typically costs a few hundred dollars itself for installation and wiring. And at the end of the day, you still wouldn’t have a flexible infrastructure if the needs of the building change or if you want to add on other IP devices.

Now we arrive at the option that makes the most sense – Ethernet-powered clocks. PoE clocks receive both power and data from your organization’s network and are perfectly synchronized, even across disparate locations connected by the same network. Since PoE clocks require no AC outlet installation, you’ve saved a few hundred dollars right there. Suddenly the original price doesn’t seem so bad, does it?
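To see how quickly the math tips in PoE's favor, here is a back-of-the-envelope comparison. The $300 outlet-installation cost, $250 PoE clock price, and $100 serial clock price are illustrative assumptions; the article says only "a few hundred dollars" for each.

```python
# Back-of-the-envelope clock cost comparison. All dollar figures here
# are hypothetical examples, not quotes from any vendor.
def total_cost(unit_price: int, install_per_unit: int, n_clocks: int) -> int:
    """Total purchase + installation cost for a fleet of clocks."""
    return n_clocks * (unit_price + install_per_unit)

serial = total_cost(unit_price=100, install_per_unit=300, n_clocks=50)
poe = total_cost(unit_price=250, install_per_unit=0, n_clocks=50)
print(serial, poe)  # → 20000 12500
```

Even with a higher sticker price per clock, eliminating the per-location outlet installation makes the PoE fleet cheaper overall – before counting the ongoing maintenance savings.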

Next Steps

Now that you’re aware of the benefits of Power over Ethernet technology and why it is the best choice for a synchronized clock system, you’ll likely need to get sign-off from management. Arm yourself with the following little facts about PoE clocks, and they can’t say no:

  1. PoE clocks use minimal wattage, and are therefore “green” and energy-efficient.
  2. We are already set up to utilize PoE, with our existing Ethernet infrastructure. There is no need to modify our switches or cabling. PoE won’t damage our existing equipment.
  3. Clocks that get time updates from our own network will be completely synchronized. Synchronized time is crucial for security systems and efficient operations.
  4. PoE allows for centralized UPS backup, which will allow the clocks and other Ethernet-powered devices to operate even through a power failure.
  5. We can avoid the manual maintenance involved with battery-powered clocks, and the hefty installation costs involved with serial clocks. PoE clocks automatically update for Daylight Saving Time.
  6. PoE devices are protected by the same measures that keep our network secure.
  7. We can easily add additional clocks or other PoE devices to our network in the future. This would be a costly and cumbersome endeavor with serial-powered devices.

Thanks to Inova Solutions for the article.

Optimize Customer Service Experience

Many people believe they are best served by real people, not by voice robots. That’s the rationale behind GetHuman.com. But the economics and utility of self-service as an alternative to live agent interactions are so compelling that self-service solutions are here to stay.

Providing multiple touchpoints is a huge technology investment. Technology is great, but you can’t just diligently manage the implementation process and then assume all is well with the customer service experience. Because nothing’s static in this world, it’s extremely important to confirm from your customers’ perspective that your contact center technology really is capable of delivering the experience you intend, one that defends your brand promise.

In 17 years of supporting clients through all phases of the contact center lifecycle, we’ve learned many lessons about how best to evaluate and optimize the Customer Service Experience (CSE) that is the foundation of your brand promise. This article introduces a process that verifies your contact center technologies are in fact delivering the customer service experience you intend.


INTRODUCING VC101®

VC101® is a proven process that aligns the customer service experience you deliver with the intentions of your Customer Experience and Brand Management teams. Its first step is identifying key customers and defining how they will interact with the contact center technology you put in place. In doing so, VC101® goes beyond internal metrics that merely confirm everything is Working As Designed (WAD); it monitors and measures the actual customer service experience as it is delivered.

Once you have actual Customer Service Experience data, you can create a feedback loop by tweaking your systems and observing impact on the actual CSE delivered, not just on internal metrics such as CPU time or QoS.

And when you know the service experience delivered by your contact center technologies defends your brand standards, you can also be confident the experience delivered increases loyalty and creates advocates.


VC101® is a multistep process that first defines and then deploys Virtual Customers (VCs) to perform real end-to-end transactions, in order to evaluate how application and technology performance affects the Customer Service Experience.


Virtual Customers are automated processes that follow test case scripts to interact with the Contact Center just like real customers performing real transactions.
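As a rough illustration of the idea, the sketch below models a Virtual Customer as a scripted sequence of steps executed against the contact center. VC101® is IQ Services’ own process, so the class names, step actions, and stub executor here are hypothetical illustrations, not their actual tooling.

```python
# Hypothetical sketch of a Virtual Customer test-case script.
# The step vocabulary ("dial", "expect_prompt", "press") is invented
# for illustration; real VC tooling would drive live telephony.
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str            # e.g. "dial", "expect_prompt", "press"
    value: str
    timeout_s: float = 10.0

@dataclass
class VirtualCustomer:
    name: str
    steps: list
    results: list = field(default_factory=list)

    def run(self, executor) -> bool:
        """Execute each step via `executor`, recording pass/fail and latency."""
        for step in self.steps:
            ok, latency = executor(step)
            self.results.append((step.action, ok, latency))
        return all(ok for _, ok, _ in self.results)

# A stub executor that pretends every step succeeds in 0.5 seconds.
vc = VirtualCustomer("balance-inquiry", [
    Step("dial", "+1-555-0100"),
    Step("expect_prompt", "Welcome"),
    Step("press", "1"),
])
print(vc.run(lambda step: (True, 0.5)))  # → True
```

The per-step latencies recorded by such a script are exactly the kind of customer-side data the internal WAD metrics cannot see.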


Once the VCs are defined and the ramp-up and rollout plans are drawn up, the VCs are deployed. Key considerations in deploying VCs include:

  • Risk analysis and consequences
  • Selection of the right VC interactions
  • Clearly defined availability and performance objectives and metrics
  • Benchmark assessment
  • Reporting and notification criteria

WHAT IS CSE OPTIMIZATION?

  • A process for deploying VCs to collect data that can be used to evaluate and improve business solution performance relative to defined objectives and metrics
  • May involve identification and integration of tools and services not provided by IQ Services
  • An iterative process that tunes the CSE as it’s delivered.


Properly implemented, VC101® is a critical element of an integrated continuous improvement process that hones & perfects a customer service experience that defends brand promise, thereby positively impacting key customer service metrics such as customer effort, customer loyalty, and net promoter score. Experiences that defend brand promise ultimately have bottom line impact, resulting in reduced total cost of operation and achievement of intended ROI.

Thanks to IQ Services for the article.

Eastlink to complete 100% broadband coverage of Nova Scotia this year

Cableco and mobile provider Eastlink has until the end of this year to provide broadband internet services to 100% of residential and business premises in Nova Scotia, the provincial government’s rural development minister Michel Samson said on Thursday. As reported by Metronews.ca, Eastlink’s contract with the government to implement the rural rollout expires this December, and while around 99% of the Atlantic Canadian province’s inhabitants already have high speed access, there remain approximately 1,000 unserved homes and businesses, mostly in relatively remote locations. Of the total CAD75 million (USD67.5 million) Nova Scotia rural broadband programme budget set in 2006, CAD41 million came from service providers, CAD20 million from the provincial authority and CAD14 million from federal state funds.

Thanks to TeleGeography for the article.

CRTC launches wholesale mobile sector review

The Canadian Radio-television and Telecommunications Commission (CRTC) has launched a public consultation to review whether the wholesale mobile wireless services market is sufficiently competitive, and the prospects for competition in the future. The CRTC is inviting comments on:

  • the state of the market for wholesale mobile wireless services, including wholesale roaming and wholesale tower sharing in Canada;
  • the impact that the wholesale mobile wireless services market has on the retail market; and
  • whether greater regulatory oversight would be appropriate if it were to find that the wholesale mobile wireless services market is not sufficiently competitive.

Comments may be submitted until 1 May 2014. The CRTC will also hold a public hearing beginning on 29 September 2014.

Thanks to TeleGeography for the article.

EMA and IXIA Webinar: “Best Practices for Building Scalable Visibility Architectures”

Network performance, application performance, and security disciplines have reached mission-critical status for enterprises of all sizes and industries. While each certainly has its own unique technical aspects, all three disciplines share at least one common technology need – flexible, scalable access to network packet streams for monitoring and analysis purposes. A growing number of IT organizations are turning to visibility architectures to meet this need, deploying network visibility controllers (NVCs, a.k.a. network monitoring switches or network packet brokers) as a means of delivering effective, cost-efficient assurance of networks and applications.

Join EMA Vice President of Research, Jim Frey, and Ixia Senior Director, Product Management, Scott Register, for a Webinar presentation and discussion where you will learn:

  • Key goals and objectives of a visibility architecture
  • Ways in which NVCs are being used, both today and in the future
  • NVC features and capabilities having the broadest impact and delivering the most value
  • Architectural and administrative qualities that are making the most difference
  • Impact of server and network virtualization technologies on technology and product choices
  • Ways to gain deep and valuable insight into your network

Eight 700MHz licence winners spend CAD5.3bn

Canada’s Minister of Industry James Moore yesterday (19 February 2014) announced the conclusion of the country’s 700MHz 4G mobile broadband spectrum licence auction. A total of 97 regional licences were awarded to eight companies in the auction which finished on 13 February, meeting the government’s aim to license at least four wireless players in every Canadian region. Total revenue generated from the 700MHz auction was CAD5.27 billion (USD4.80 billion), the highest return ever for a wireless auction in Canada, beating the AWS 2100MHz spectrum auction in 2008 which raised CAD4.3 billion. The relative value of spectrum in Ottawa’s 700MHz sale was also higher compared to the 2008 700MHz licence auction across the border in the US, which gleaned USD19.1 billion – or quadruple the Canadian revenue to cover nine-times the population, TeleGeography notes. Moore said in a speech that Canadian operators obtaining licences will be able to start deploying 700MHz services in mid-April 2014.

The eight winners are listed below (number of paired/unpaired spectrum licences; price paid; licence population covered [of a total population of roughly 35 million]):

Rogers (22 paired; CAD3.292 billion; 33,368,699);

Telus (16 paired + 14 unpaired; CAD1.143 billion; 33,475,914);

Bell (17 paired + 14 unpaired; CAD565.7 million; 33,475,914);

Videotron (7 paired; CAD233.3 million; 28,020,943);

Bragg (Eastlink) (4 paired; CAD20.3 million; 3,101,204);

MTS (1 paired; CAD8.8 million; 1,206,968);

SaskTel (1 paired; CAD7.6 million; 1,039,584);

Feenix Wireless (100%-owned by Mobilicity chairman John Bitove, whose Obelysk investment firm owns a majority voting share and minority equity share in Mobilicity) (1 paired; CAD284,000; 107,215).

By province, 700MHz licences were awarded as follows:

Newfoundland & Labrador – Bell, Eastlink, Rogers, Telus;

Nova Scotia – Bell, Eastlink, Rogers, Telus;

Prince Edward Island – Bell, Eastlink, Rogers, Telus;

New Brunswick – Bell, Eastlink, Rogers, Telus;

Quebec – Bell, Rogers, Telus, Videotron;

Ontario – Videotron [south Ontario only], Eastlink [north Ontario only], Bell, Rogers, Telus;

Yukon, Northwest Territories and Nunavut – Bell, Feenix, Telus;

Manitoba – MTS, Bell, Rogers, Telus;

Saskatchewan – SaskTel, Bell, Rogers, Telus;

Alberta – Bell, Rogers, Telus, Videotron;

British Columbia – Bell, Rogers, Telus, Videotron.

Biggest spender Rogers announced that its new 700MHz spectrum covers 99.7% of the Canadian population, with two blocks of contiguous, paired spectrum located in key rural and urban locations across Canada. Specifically, Rogers acquired the A and B 12MHz blocks in Southern Ontario, Eastern Ontario, Southern Quebec, Eastern Quebec, British Columbia, Alberta, Newfoundland, Nova Scotia and New Brunswick. Rogers also acquired 12MHz of C block spectrum in Northern Quebec, Northern Ontario, Manitoba and Saskatchewan. Rogers added that its cash investment of CAD3.29 billion was ‘in line with recent spectrum transactions’ in the US, where the price for prime 700MHz spectrum in major states has exceeded USD4.00 per MHz/population. In the 2008 US 700MHz auction, the top 25 markets sold for USD4.50 per MHz/population; by comparison, Rogers paid CAD4.32 per MHz/population for the major markets across Canada, securing two blocks of paired spectrum with licence terms that are 33% longer than comparable 700MHz concessions in the US.
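The per-MHz/population metric cited here is simple arithmetic. The sketch below applies it to Rogers’ nationwide totals from the article (CAD3.292 billion for two paired 12MHz blocks, 33,368,699 pops covered) to get a blended nationwide figure; the CAD4.32 quoted in the article applies to the major markets only, so the blended number comes out lower.

```python
# Reproducing the spectrum-valuation metric (price per MHz per pop)
# from the auction figures in the article. The "blended" result uses
# Rogers' nationwide totals, not the major-markets subset behind the
# article's CAD4.32 figure.
def price_per_mhz_pop(price_cad: float, mhz: float, population: int) -> float:
    return price_cad / (mhz * population)

blended = price_per_mhz_pop(3.292e9, mhz=24, population=33_368_699)
print(round(blended, 2))  # → 4.11
```

The same function can be pointed at any winner's row in the table above to compare how much each operator paid per unit of spectrum and coverage.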

The CEO of second largest 700MHz investor Telus, Darren Entwistle, said the addition of the 700MHz spectrum will enable Telus to expand its LTE coverage into rural areas, extending its national 4G LTE footprint from the current 80% population coverage to 97% ‘well in advance of the auction’s build requirements’. ‘Moreover, the spectrum will enable us to further enhance our coverage in urban areas, adding much needed capacity,’ he added.

Wade Oosterman, president of Bell Mobility, announced that Bell already offers LTE services to 82% of the population, and the new 700MHz spectrum will allow Bell to expand LTE coverage to 98%, by launching services in smaller towns, rural locations and remote communities across the country including the Far North.

Perhaps the most notable winner in the 700MHz auction was Quebecor subsidiary Videotron, as the Quebec-based quadruple-play operator expanded its reach outside its home province to Ontario, Alberta and British Columbia, as well as buying additional spectrum in Quebec. Videotron announced its satisfaction in gaining access to potential coverage of 80% of Canadians for a significantly lower price than its larger cellular rivals.

Two qualified bidders did not win any 700MHz licences, namely TBayTel and Novus Wireless, while Wind Mobile withdrew from the 4G auction shortly before it commenced on 14 January 2014.

Thanks to TeleGeography for the article.

Canada budgets further CAD305m for rural broadband

Canada’s federal government has promised an additional CAD305 million (USD277.5 million) over five years to expand and upgrade broadband internet services in rural and Northern communities, reports IT World Canada. Finance minister James Flaherty’s new budget sets a goal of giving 280,000 more rural households access to download speeds of at least 5Mbps, as per a five-year target established in 2011 by the Canadian Radio-television and Telecommunications Commission (CRTC). Ottawa spent CAD225 million in 2010-13 to upgrade services to dozens of communities which previously had either minimal internet speeds or no internet access at all.

Thanks to TeleGeography for the article.

Telus overtakes Bell as second largest mobile operator by users

Canadian quadruple-play telco Telus has posted a 3.6% year-on-year increase in EBITDA to CAD951 million (USD865 million) in the fourth quarter of 2013, on consolidated quarterly revenues which climbed 3.4% to CAD2.948 billion in Q4. Notable in Telus’ end-of-year results was the fact that it narrowly surpassed Bell Mobility as Canada’s second largest mobile operator by users, with its total wireless subscribers standing at 7.807 million at 31 December 2013, compared to 7.670 million a year earlier (exceeding Bell’s figure by 29,000). Telus’ fixed broadband customer base grew by 5.2% in 2013 to 1.395 million, and its pay-TV customers, largely based on IPTV, rose by 20.2% to 815,000.

Thanks to TeleGeography for the article.