Be Confident Your Contact Center Technology Delivers The Brand You Promise

Today, most B2C interactions involve some form of contact center technology. With the exception of in-store purchases, contact center technology is responsible for providing the vast majority of brand impressions that customers have through experiences with toll-free numbers, self-service IVR, and CTI screen pop—and that’s just one channel. There are also self-service websites, apps, social media scrapers, blended queuing processes, and more.

SERVICE DELIVERED VS. SERVICE INTENDED

VOC programs ask customers for feedback on the experience as they remember it – which is extremely important when determining if the experience delivered was pleasing, useful, efficient, or memorable. But it’s also critical to monitor the experience delivered by the organization’s technology and compare it to what was intended. Customers don’t know whether or not the technology with which they’re interacting is actually doing what it’s intended to do; they just know whether or not it’s available when they want to use it and lets them get what they need quickly and efficiently.

CUSTOMER VS. TECHNOLOGY PERSPECTIVE

Decision making and tuning frequently rely on the data that’s easiest to collect with regularity—the data gathered inside the contact center about congestion, CPU consumed, call arrival rate, etc. But accurate, unemotional, precise data about the experience actually delivered has to come from the outside in, the way customers really access and interact with technology, and be collected in a controlled fashion on a regular basis.

IS THERE A BETTER, MORE STRATEGIC WAY TO DO THIS?

There are ways to gain a true assessment of the customer service experience as delivered. It starts with documented expectations for the experience as it’s supposed to be delivered. Defined expectations establish functionality, performance, and availability specs as benchmarks for testing.

It’s in every company’s best interest to know that the technology it put in place is, first of all, capable of making all those connections it has to make, and then actually delivering the customer service experience that it intended to deliver. To do that, you need reliable data gathered from automated, outside-in, scripted test interactions that can be used to assess the functionality that’s been put in place, as well as the technology’s ability to deliver both at peak load and then continuously once in production.

Think of automated testing as using an army of secret shoppers to access and exercise contact center technology exactly as it’s intended to be used: one at a time, then hundreds, thousands, tens of thousands of virtual customer secret shoppers. Think also about the technology lifecycle: development – cutover – production – evolution.

Start with automated feature/function testing of your self-service applications—voice and web—to ensure that what was specified actually got developed. Precisely verify every twist and turn to ensure you are delivering what you intended. Do that before you go live and before you do unit testing or load testing. Using an automated, scripted process to test your self-service applications gives you reliable discrepancy documentation—recordings and transcripts—that clearly captures functionality issues as they are identified.
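
To make this concrete, here is a minimal sketch of what one scripted feature/function test case might look like. The ivr_harness module and its API are hypothetical stand-ins for whatever automated testing tool is in use, not a real product interface:

```python
# Hypothetical scripted IVR test case: dial in, navigate the menu,
# and verify each prompt against the documented call flow.
# "ivr_harness" is an illustrative, made-up module, not a real library.
from ivr_harness import Call

def test_balance_path():
    call = Call(dial="+18005551234")           # toll-free entry point
    prompt = call.wait_for_prompt(timeout=10)  # transcript of what was heard
    assert "for account balance, press 1" in prompt.lower(), prompt

    call.send_dtmf("1")                        # choose "account balance"
    prompt = call.wait_for_prompt(timeout=10)
    assert "enter your account number" in prompt.lower(), prompt

    # Keep the recording and transcript as discrepancy documentation
    call.record_artifacts("balance_path")
    call.hangup()
```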

Next, conduct load testing prior to cutover. Use automated, scripted, virtual-customer test traffic from the outside in—through the PSTN and the web—to ensure your contact center technology really can perform at the absolute capacity you’ve designed, and at realistic call arrival and disconnect rates. Plan failure into the process—it’s never one and done. Leave enough time to start small and to identify and address issues along the way.
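
As a rough illustration of that start-small discipline, a ramped load profile might look like the sketch below; the rates and timings are invented for illustration, not a recommendation:

```python
# Illustrative ramp profile for outside-in load testing:
# (minutes into the test, scripted calls per second to offer).
RAMP_PROFILE = [
    (0, 1),     # smoke test: a single scripted caller
    (5, 10),    # confirm basic stability
    (15, 100),  # approach the expected busy-hour arrival rate
    (30, 250),  # designed peak capacity
    (45, 300),  # push past peak to find the breaking point
]

for minute, calls_per_second in RAMP_PROFILE:
    print(f"t+{minute:02d}min: offer {calls_per_second} calls/sec of scripted traffic")
```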

Once in production, continuously access and exercise contact center technology just like real customers, through the PSTN and through the web—to ensure it’s available, assess its functionality, and measure its performance so you can be confident that technology is delivering the intended experience, 24×7.

If you have self-service applications that undergo periodic tweaks or enhancements, automated regression tests using those initial test cases will ensure all functionality is still on task after tweaks to the software or underlying infrastructure.

WHAT’S THE NET RESULT OF TESTING THIS WAY?

VOC feedback tells you what customers think about your efforts and how they feel about dealing with your brand and technology. Automated testing and monitoring allow you to determine if the interfaces you’ve provided are, in fact, providing the easy-to-use, low-effort experience you intend.

It’s pretty straightforward—quality and efficiency of experience drive loyalty, and loyalty drives spend. You HAVE to have both perspectives as you tweak the technology, and that’s the message. Listen to your customers as you decide what you want your technology to do, and then make sure your technology is doing what you intend it to do.

Originally Published in Speech Technology Magazine.

Embracing Change in Contact Centres

Ottawa Regional Contact Centre Association Presents:

Embracing Change in Contact Centres

June 11, 2015: Kanata Recreation Complex
100 Walter Baker Place, Ottawa
1:30 to 4:00 pm

People naturally resist change. In this Change Management session hosted by ORCCA, you will discover that there are predictable reactions to change and that resistance can be reduced if it is identified and addressed before it sets in. How can your team embrace the changes required to improve? Managers and frontline staff must step out of their comfort zones to develop and accept coaching support, metrics and quality assessments that drive the customer experience. This interactive session will provide insights into change management methodology, explore today’s millennial employee, and provide an understanding of how change and culture are integral to evolving as a high-performance contact centre.

Our Speaker: Moosha Gulycz

Moosha is a founding Partner of AtFocus Inc. She developed and regularly delivers ‘Countdown2Change’, an AtFocus proprietary Organizational Change Management program. The program focuses on the personal journey of change and the natural resistance to change associated with organizational change projects. ‘Countdown2Change’ methodology has contributed to significant cultural change.

Moosha has over 15 years of consulting experience, advising many organizations on change management strategies, service delivery improvements, process assessments and quality performance. She has authored, co-authored and contributed to the following books:

  • A Journey to Personal and Professional Success
  • Performance Driven CRM: How to Make Your Customer Relationship Management Vision a Reality
  • Customer Relationship Management: A Strategic Imperative in the World of e-Business

Attend this thought-provoking Change Management session hosted by ORCCA

Register early, space is limited
RSVP by email to info@callcentres.org

Thursday, June 11, 2015
Kanata Recreation Complex, 100 Walter Baker Place
1:30 to 4:00 pm
Free onsite parking
Free to ORCCA members
Non-Members – $30.00 in advance payable by Visa or MasterCard

Visit our website at www.callcentres.org

How Wireless Clocks Assist with a Facility’s Safety

Supervising and managing a large facility is no walk in the park, especially when you begin to list the countless efforts that must all come together seamlessly to make sure the facility runs efficiently. Large facilities, like hospitals, schools and universities, or businesses, depend on numerous such tasks all coming together. Many little things that people take for granted, such as waste management and maintenance work, are vital to daily operations.

Perhaps one of the most important jobs of a facility manager, however, is the maintenance of a building’s security system. The safety of a building, and more importantly the people within it, is a task that is always at the top of the to-do list. A lack of preparation or awareness could cause catastrophes not only for the business but for the safety of the people that liven up the facility. In addition to security systems, another product installation that aids in safety is a Sapling wireless clock system. Not only will the building operate more efficiently through synchronized time, but the building’s security system will also become considerably simpler to manage.

A Sapling wireless clock system attains its accurate time from a master clock, which transmits the signal to each individual clock within the system. Furthermore, the master clock is able to automate security systems throughout the building. The master clock has scheduling capabilities which allow a facility manager to consolidate the building’s security system with the wireless clock system. This shifts duties such as setting alarms and locking doors away from people, who can be subject to error.

Adding a Sapling wireless synchronized clock system to your operations will greatly increase your facility’s and employees’ safety. No shortcuts can be taken when people’s safety is the subject; there is no value that can be placed on human safety. A Sapling wireless clock system not only improves operational efficiency, but can also increase the much-needed safety and peace of mind of all involved.

Thanks to Sapling for the article.

Tracking Web Activity by MAC Address

Tracking web activity is nothing new. For many years IT managers have tried to get some sort of visibility at the network edge so that they can see what is happening. One of the main drivers for this is the need to keep the network secure. As Internet usage is constantly growing, malicious, phishing, scamming and fraudulent sites are also evolving.

While some firewalls and proxy servers include reporting capabilities, most are not up to the job. These systems were designed to block or control access and reporting was just added on at a later date. Server log files do not always have the answer either. They are meant to provide server administrators with data about the behaviour of the server, not what users are doing on the Internet.

Some vendors are pitching flow-type (NetFlow, IPFIX, etc.) tools to address the problem. The idea is that you get flow records from the edge of your network so you can see what IP address is connecting to what. However, as with server logs, NetFlow isn’t a web usage tracker. The main reason for this is that it does not look at HTTP headers, where a lot of the important information is stored.

One of the best data sources for web tracking is packet capture. You can enable packet capturing with SPAN/mirror ports, packet brokers, TAPs or by using promiscuous mode on virtual platforms. The trick is to pull the relevant information and discard the rest so you don’t end up storing massive packet captures.

Relevant information includes things like MAC address, source IP, destination IP, time, website, URI and username. You only see the big picture when you have all of these variables in front of you.
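
As a rough sketch of this approach, the script below (assuming plain, unencrypted HTTP on port 80 and a capture interface fed by a SPAN/mirror port) pulls out just those fields and discards everything else. Commercial tools do far more, including TCP reassembly, proxy handling, and username mapping, so treat this only as an illustration of the principle:

```python
# Minimal outline of "keep the relevant fields, discard the rest".
# Requires scapy and capture privileges; assumes unencrypted HTTP.
from datetime import datetime
from scapy.all import sniff, Ether, IP, TCP, Raw

def handle(pkt):
    if not (pkt.haslayer(Ether) and pkt.haslayer(IP)
            and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        return
    payload = bytes(pkt[Raw].load)
    if not payload.startswith((b"GET ", b"POST ")):
        return                                   # not the start of a request
    lines = payload.split(b"\r\n")
    uri = lines[0].split(b" ")[1].decode(errors="replace")
    host = next((l.split(b":", 1)[1].strip().decode(errors="replace")
                 for l in lines[1:] if l.lower().startswith(b"host:")), "?")
    # MAC address, source/destination IP, time, website and URI in one line
    print(datetime.fromtimestamp(float(pkt.time)), pkt[Ether].src,
          pkt[IP].src, "->", pkt[IP].dst, host, uri)

sniff(filter="tcp port 80", prn=handle, store=False)  # store=False keeps captures small
```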

Why track Internet activity?

  • Root out the source of Ransomware and other security threats. Track it down to specific users, IP addresses or MAC addresses.
  • Maintain logs so that you can respond to third party requests. Finding the source of Bittorrent use would be a common requirement on open networks.
  • Find out why your Internet connection is slow. Employees watching HD movies is a frequent cause.
  • Out-of-band network forensics for troubleshooting or identifying odd network traffic.

Customer Use Case

The end user is a very large airport in EMEA. The basic requirement and use case is tracking web activity and keeping a historical record of it for a period of one year; because most of the users are just passing through (thousands of wireless users every hour!), the only way to uniquely identify each user or device is by MAC address.

Luckily for us, because the LANGuardian HTTP decoder captures and analyses wire data off a SPAN or mirror port, it can easily track proxy or non-proxy traffic by IP or MAC address. The customer can also drill down to URI level when they need to investigate an incident. For them, LANGuardian is an ideal solution for tracking BYOD activity, as there are no modifications to the network and no agents, clients or logs required.

The MAC address variable is an important one when it comes to tracking devices on your network. Most networks use DHCP servers, so you cannot rely on tracking activity based on IP addresses alone; the same device may receive a different IP address from one day to the next. MAC addresses are unique per device, so they give you a reliable audit trail of what is happening on your network.

Thanks to NetFort for the article.

What if Sony Used Ixia’s Application and Threat Intelligence Processor (ATIP)?

Trying to detect intrusions in your network and extracting data from your network is a tricky business. Deep insight requires a deep understanding of the context of your network traffic—where are connections coming from, where are they going, and what are the specific applications in use. Without this breadth of insight, you can’t take action to stop and remediate attacks, especially from Advanced Persistent Threats (APT).

To see how Ixia helps its customers gain this actionable insight into the applications and threats on their network, we invite you to watch this quick demo of Ixia’s Application and Threat Intelligence Processor (ATIP) in action. Chief Product Officer Dennis Cox uses Ixia’s ATIP to help you understand threats in real time, with the actual intrusion techniques employed in the Sony breach.

Additional Resources:

Ixia Application and Threat Intelligence Processor

Thanks to Ixia for the article.

Load Balancing Your Security Solution for Fun and Profit!

Maximizing the Value and Resiliency of Your Deployed Enterprise Security Solution with Intelligent Load Balancing

Correctly implementing your security solution in the presence of complex, high-volume user traffic has always been a difficult challenge for network architects. The data in transit on your network originates from many places and fluctuates with respect to data rates, complexity, and the occurrence of malicious events. Internal users create vastly different network traffic than external users using your publicly available resources. Synthetic network traffic from bots has very recently overtaken real users as the most prevalent source of network traffic on the internet (http://www.dmnews.com/privacy/bots-overtake-people-on-the-web/article/414822). How do you maximize your investment in a security solution while gaining the most value from the deployed solution? The answer is intelligent deployment through realistic preparation.

Let’s say that you have more than one point of ingress and egress into your network, and predicting traffic loads is very difficult (since your employees and customers are global). Do you simply throw money at the problem by purchasing multiple instances of expensive network security infrastructure that could sit idle at times and get saturated at others? A massive influx of user traffic could overwhelm your security solution in one rack, causing security policies not to be enforced, while the solution at the other point of ingress has resources to spare.

High speed inline security devices are not just expensive—the more features you enable on them, the less network traffic they can successfully parse. If you start turning on features like sandboxing (which spawns virtual machines to deeply analyze potential new security events), you can really feel the pain.

I solved this dilemma by load balancing multiple inline Next Generation Firewalls (NGFW) into a single logical solution with an Ixia xStream 40, and then attacked them with Ixia BreakingPoint to prove they work even during peak network traffic.

I took two high-end NGFWs, enabled nearly every feature (including scanning traffic for attacks, identifying user applications, and classifying network security risk based on the geolocation of the client), and load balanced the two devices with my Net Optics xStream 40 solution. The xStream 40 has 48x 10GbE ports and 4x 40GbE ports. With this solution I can load balance up to 24 inline security devices at speeds of over 500Gbps. Then I took an Ixia BreakingPoint traffic generator and created all of my real users and a deluge of different attack scenarios.
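
The xStream 40’s internal hashing algorithm isn’t public, but the load-balancing principle is easy to show: hash each session’s endpoints symmetrically so that both directions of a flow always reach the same inline device. A toy version in Python:

```python
# Toy flow-affinity hash: both directions of a session map to the
# same NGFW, which an inline security device requires to track state.
import hashlib

NUM_DEVICES = 2  # the two load-balanced NGFWs in this setup

def device_for_flow(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    # Sort the endpoints so A->B and B->A produce the identical key.
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}{b}{proto}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % NUM_DEVICES

# Both directions of the same session land on the same device:
assert device_for_flow("10.0.0.5", "93.184.216.34", 51514, 443) == \
       device_for_flow("93.184.216.34", "10.0.0.5", 443, 51514)
```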

At the end of a week of testing, I proved that my combined security devices maximized the dollars spent while maintaining the security posture they advertised. Below are a few of the more interesting tests that I ran and their results.

Scenario One: Traffic Spikes

Your 10GbE NGFW will experience inconsistent amounts of network traffic. It is crucial to be able to effectively enforce security policies during such events. In the first test I created a baseline of 8Gbps of real user traffic, then introduced a large influx of traffic that pushed the overall volume to 14Gbps. The xStream 40 load balancer ensured that the traffic was split evenly between the two NGFWs, and all of my security policies were enforced.

Figure 1: Network traffic spike

Scenario Two: Endurance Testing

Handling an isolated event is interesting, but maintaining security effectiveness over long periods of time is crucial for a deployed security solution. In the next scenario, I ran all of the applications I anticipated on my network at 11Gbps for 60 hours. The xStream 40 gave each of my NGFWs just over 5Gbps of traffic, allowing all of my policies to be enforced. Of the 625 million application transactions attempted throughout the duration of the test, users enjoyed a 99.979% success rate.

Figure 2: Applications executed during 60 hour endurance test

Scenario Three: Attack Traffic

Where the rubber meets the road for a security solution is during an attack. Security solutions are insurance policies against network failure, data exfiltration, misuse of your resources, and loss of reputation. I created a 10Gbps baseline of the user traffic (described in Figure 2) and added a curveball by launching 7261 remote exploits from one zone to another. Had these events not been load balanced with my xStream 40, a single NGFW might have experienced the entire brunt of this attack. The NGFW could have been overwhelmed and failed to enforce policies. Or it might have been under such duress mitigating the attacks that legitimate users became collateral damage of its attempts to enforce policies. The deployed solution performed excellently, mitigating all but 152 of my attacks.

Concerning the 152 missed attacks: the Ixia BreakingPoint attack library contains a comprehensive set of undisclosed exploits. That being said, as with the 99.979% application success rate experienced during the endurance test, nothing is infallible. If my test had worked with 100% success, I wouldn’t believe it and neither should you.

Figure 3: Attack success rate

Scenario Four: The Kitchen Sink

Life would indeed be rosy if the totality of a content-aware security solution was simply making decisions between legitimate users and known exploits. For my final test I added another wrinkle: the solution also had to deal with a large volume of fuzzing on top of my existing deluge of real users and attacks. Fuzzing is the concept of sending intentionally flawed network traffic through a device or at an endpoint in the hope of uncovering a bug that could lead to a successful exploitation. Fuzzed traffic can range from incorrectly advertised packet lengths to erroneously crafted application transactions. My test included those two scenarios and everything in between. The goal of this test was stability. I achieved this by mixing 400Mbps of pure chaos via Ixia BreakingPoint’s fuzzing engine with Scenario Three’s 10Gbps of real user traffic and exploits. I wanted to make certain that my load-balanced pair of NGFWs was not going to topple over when the unexpected took place.
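
For a feel of what fuzzed traffic looks like, here is a toy mutation fuzzer in the spirit of the description above: take a valid message and corrupt random bytes and lengths. BreakingPoint’s fuzzing engine is vastly more sophisticated; this is only a sketch:

```python
# Toy mutation fuzzer: corrupt random bytes of a valid message and
# occasionally truncate it, simulating bad lengths and malformed fields.
import random

def mutate(payload: bytes, n_flips: int = 3) -> bytes:
    data = bytearray(payload)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    if random.random() < 0.3:  # lie about the length: a classic parser-killer
        data = data[: random.randrange(1, len(data) + 1)]
    return bytes(data)

valid_request = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
for _ in range(5):
    print(mutate(valid_request))
```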

The results were also exceptionally good. Of the 804 million application transactions my users attempted, I only had 4.5 million go awry—leaving me with a 99.436% success rate. This extra measure of maliciousness only changed the user experience by increasing the failures by about ½ of a percent. Nothing crashed and burned.

Figure 4: Application Success rates during the “Kitchen Sink” test

Conclusion

All four of the above scenarios illustrate how you can enhance the effectiveness of a security solution while maximizing your budget. However, we are only scratching the surface. What if you needed your security solution to be deployed in a High Availability environment? What if the traffic your network services expands? Setting up the xStream 40 to operate in HA or adding additional inline security solutions to the load-balanced pool is probably the most effective and affordable way of addressing these issues.

Interested in seeing a live demonstration of the xStream 40 load balancing attacks from Ixia BreakingPoint over multiple inline security solutions? We would be happy to show you how it is done.

Additional Resources:

Ixia visibility solutions

Ixia xStream 40

Thanks to Ixia for the article.

Cellcos’ Appeal Against Wireless Code of Conduct Rejected

The Federal Court of Appeal has rejected a bid by Canada’s main mobile operators to delay the full implementation of the country’s ‘wireless code of conduct’, the Financial Post reports. The code was introduced by regulator CRTC in June 2013 to provide better consumer protection against high mobile roaming charges and wireless contract cancellation fees. A group of cellcos including the three nationwide network operators Rogers, Telus and Bell (BCE Inc) launched legal action last July after raising concerns that some provisions of the code would apply retroactively to all of their customers once fully implemented. However, Justice Denis Pelletier ruled the CRTC ‘has the right to make the wireless code applicable to contracts concluded before the code came into effect.’

Thanks to TeleGeography for the article.

IT Heroes: App Troubles in the Skies

Each and every day, the bold men and women of IT risk it all to deliver critical applications and services. Their stories are unique. Their triumphs inspire. Learn how the US Air Force applies the intelligence gained from the Observer Platform in field testing applications, before deployment, to ensure peak performance on the battlefield and how it could do the same for you.

Hiding in Plain Sight

Nestled in the Florida panhandle, Eglin Air Force Base is about 100 miles from the Mississippi border as the crow flies. Its proximity to the storied Gulf Coast, near sleepy retirement towns like Destin and Fort Walton Beach, belies the fact that the 725 square mile base is home to the 46th Test Squadron.

During a Joint Expeditionary Force Experiment (JEFX), the 46th Test Squadron’s Command and Control Performance Team Lead, Lee Paggeot, was on hand to make sure that hundreds of participants and myriad weapons, vehicles, and other devices stayed connected.

New Technology Revealed

As part of the JEFX experiment, Paggeot’s team focused on air-to-air communication, specifically what would be revealed as the Battlefield Airborne Communications Node. The result, they hoped, would be a flying gateway between multiple military communications networks, enabling increased coordination between forces.

“We had hundreds of systems – tons and tons of servers in a massive configuration,” says Paggeot who employed the Observer Performance Management Platform to closely monitor the sensitive airborne network.

A Costly Test

Paggeot’s secret weapon, the Observer Platform, was key to ensuring the success of the expensive experiments.

“Sometimes we’d lose the event,” says Paggeot, remembering the days when network problems were far more difficult and costly to solve. “We would have spent thousands of dollars, a hundred thousand dollars. We wouldn’t know until the last day that it was a multicast broadcast storm. Now if that happens at any of our events, we know in minutes.”

Find out how this IT Hero helped the U.S. military prepare for a historic wartime event, while detailing technical deficiencies to resolve IT issues faster – 60 times faster.

Get the full Eglin Air Force Base Study Now.

Thanks to Network Instruments for the article.

End User Experience Testing Made Easier with NMSaaS

End user experience and QoS are consistently ranked among the top priorities for network management teams today. According to research, over 60% of companies say that VoIP is present in a significant portion of their networks, and the same is true of streaming media within the organization.

As you can see, effective end user experience testing is vital to any business. If you have a service model, whether you’re an actual third-party service provider or a corporation whose IT department acts as a service provider, you have a certain goal: to provide assured applications and services to your customers at the highest standard possible.

The success of your business is based upon your ability to deliver an effective end user experience. How many times have you been working with a business and been told to wait because the business’s computer systems were “slow”? It is something we have all become frustrated with in the past.

To ensure that your organization can provide effective and successful end user experience you need to be able to proactively test your live environment and be alerted to issues in real time.

This comprises 5 key elements (a minimal probe sketch follows the list):

1) Must be able to test from end-to-end

2) Point to Point or Meshed testing

3) Real traffic and “live” tests, not just “ping” and traceroute

4) Must be able to simulate the live environments

  • Class of service
  • Number of simultaneous tests
  • Codecs
  • Synthetic login/query

5) Must be cost effective and easy to deploy.
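
As a minimal illustration of element 3, a synthetic probe should time a real transaction end to end rather than just pinging the host. The sketch below uses a plain HTTP request and an invented alert threshold; real tests of this kind (VoIP codecs, class of service, synthetic logins) go well beyond it:

```python
# Minimal synthetic end-user probe: time a full HTTP transaction.
import time
import urllib.request

def probe(url: str, timeout: float = 5.0) -> float:
    """Return the full request/response time in milliseconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()                    # pull the whole body, like a real user
    return (time.monotonic() - start) * 1000.0

if __name__ == "__main__":
    latency_ms = probe("http://example.com/")
    print(f"transaction completed in {latency_ms:.1f} ms")
    if latency_ms > 2000:              # alert threshold chosen for illustration
        print("ALERT: end user experience degraded")
```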

NMSaaS is able to provide all of these services at a cost-effective price.

If this is something you might be interested in, or if you would like to find out more about our services and solutions, why not start a free 30-day trial today?

Thanks to NMSaaS for the article.

Infosim® Global Webinar Day May 28th, 2015

Infosim® Global Webinar Day May 28th, 2015

The Future of Network Performance Management:

How to take advantage of Automation, Cloud and IoT!

Join Dr. Stefan Köhler, CEO of Infosim® and a leading authority in the network management field, for a Webinar on The Future of Network Performance Management.

This Webinar will provide insight into:

  • Next Generation Automated Fault Management
    • How to fix your network issues fast and efficiently using automation?
    • How to get to the next level of Root Cause Analysis (RCA) with Dynamic Rule Generation (DRG)?
  • Performance Visibility in the Cloud
    • How to measure KPIs of your cloud services?
    • How to monitor SLAs of 3rd party SaaS solutions?
  • Internet of Things (IoT)
    • How to orchestrate a huge number of devices?
    • How to provision a service to a huge number of devices?

Find out how to prove that “It’s NOT the network, Dummkopf!” and register today to reserve your seat in your desired time zone.
A recording of this Webinar will be available to all who register!

(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.