Quality Automation: 3 Ways to Make Self-Service Work for Customers

In the eyes of many customers, self-service is not a compound word but rather a four-letter one. It’s not that there’s anything inherently bad about IVR or web self-service applications – it’s that there’s something bad about most contact centers’ efforts to make such apps good.

Relatively few contact centers extend their quality assurance (QA) practices to self-service applications. Most centers tend to monitor and evaluate only those contacts that involve an interaction with a live agent – i.e., customer contacts in the form of live phone calls or email, chat or social media interactions. Meanwhile, no small percentage of customers try to complete transactions on their own via the IVR or online (or, more recently, via mobile apps) and end up tearing their hair out in the process. In fact, poorly designed and poorly looked-after self-service apps account for roughly 10% of all adult baldness, according to research I might one day conduct.

When contact center pros hear or read “QA”, they need to think not only “Quality Assurance” but also “Quality Automation.” The latter is very much part of the former.

To ensure that customers who go the self-service route have a positive experience and maintain their hair, the best contact centers frequently conduct comprehensive internal testing of IVR systems and online applications, regularly monitor customers’ actual self-service interactions, and gather customer feedback on their experiences. Let’s take a closer look at each of these critical practices.

Testing Self-Service Performance

Testing the IVR involves calling the contact center and interacting with the IVR system just as a customer would, only with much less groaning and swearing. Evaluate such things as menu logic, awkward silences, speech recognition performance and – to gauge the experience of callers that choose to opt out of the IVR – hold times and call-routing precision.

Testing of web self-service apps is similar, but takes place online rather than via calls. Carefully check site and account security, the accuracy and relevance of FAQ responses, and the performance of search engines, knowledge bases and automated agent bots. Resist the urge to try to see if you can get the automated bot to say dirty words. There’s no time for such shenanigans. Testing should also include evaluating how easy it is for customers to access personal accounts online and complete transactions.

Some of the richest and laziest contact centers have invested in products that automate the testing process. Today’s powerful end-to-end IVR monitoring and diagnostic tools are able to dial in and navigate through an interactive voice transaction just as a real caller would, and can track and report on key quality and efficiency issues. Other centers achieve testing success by contracting with a third-party vendor that specializes in testing voice and web self-service systems and taking your money.
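Whether you build it yourself or buy it, automated IVR testing boils down to the same loop those tools perform: place a call, walk a known menu path, and time each step. A minimal sketch of that idea follows; the `dial_and_navigate` stub, menu names, and the two-second "awkward silence" threshold are illustrative assumptions, not any vendor's API.

```python
# Hypothetical stub: a real harness would place the call through a telephony API
# and measure actual prompt-response delays.
def dial_and_navigate(menu_path):
    """Simulate walking an IVR menu; returns per-step response times in seconds."""
    return [0.8 for _ in menu_path]  # placeholder timings

def evaluate_ivr(menu_path, max_step_delay=2.0):
    """Flag any step whose response time exceeds the threshold (an 'awkward silence')."""
    timings = dial_and_navigate(menu_path)
    slow = [(step, t) for step, t in zip(menu_path, timings) if t > max_step_delay]
    return {"steps": len(menu_path), "slow_steps": slow, "passed": not slow}

report = evaluate_ivr(["main_menu", "billing", "account_balance"])
print(report["passed"])  # True with the placeholder timings
```

A real harness would run such paths on a schedule and alert when a previously passing path starts failing.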

Monitoring Customers’ Self-Service Interactions

Advancements in quality monitoring technologies are making things easier for contact centers looking to spy on actual customers who attempt self-service transactions. All the major quality monitoring vendors provide customer interaction recording applications that capture how easy it is for callers to navigate the IVR and complete transactions without agent assistance, as well as how effectively such front-end systems route each call after the caller opts out to speak to an actual human being.

As for monitoring the online customer experience, top contact centers have taken advantage of multichannel customer interaction-recording solutions. Such solutions enable contact centers to find out first-hand such things as: how well customers navigate the website; what information they are looking for and how easy it is to find; what actions or issues lead most online customers to abandon their shopping carts; and what causes customers to call, email or request a chat session with an agent rather than continue to cry while attempting to serve themselves.

As with internal testing of self-service apps, some centers – rather than deploying advanced monitoring systems in-house – have contracted with a third-party specialist to conduct comprehensive monitoring of the customers’ IVR and/or web self-service experiences.

Capturing the Customer Experience

In the end, the customer is the real judge of quality. As important as self-service testing and monitoring is, even more vital is asking customers directly just how bad their recent self-service experience was.

The best centers have a post-contact C-Sat survey process in place for self-service, just as they do for traditional phone, email and chat contacts. Typically, these centers conduct such surveys via the same channel the customer used to interact with the company. That is, customers who complete (or at least attempt to complete) a transaction via the center’s IVR system are invited to complete a concise automated survey via the IVR (immediately following their interaction). Those who served themselves via the company’s website are soon sent a web-based survey form via email. Customers, you see, like it when you pay attention to their channel preferences, and thus are more likely to complete surveys that show you’ve done just that. Calling a web self-service customer and asking them to complete a survey over the phone is akin to finding out somebody is vegetarian and then offering them a steak.
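The channel-matching rule described above is simple enough to express as a small dispatch table; a sketch, with survey channel names invented for illustration:

```python
# Toy dispatch table: invite customers to a survey on the channel they used.
SURVEY_CHANNEL = {
    "ivr": "automated post-call IVR survey",
    "web": "web survey form sent via email",
    "chat": "in-chat survey prompt",
}

def pick_survey(contact_channel):
    # Fall back to email rather than forcing the customer onto a new medium.
    return SURVEY_CHANNEL.get(contact_channel, "email survey")

print(pick_survey("ivr"))  # automated post-call IVR survey
```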

It’s your call

Whether you decide to do self-service QA manually, invest in special technology, or contract with third-party specialists is entirely up to you and your organization. But if you don’t do any of these things and continue to ignore quality and the customer experience on the self-service side, don’t act surprised if your customers eventually start ignoring you – and start imploring others to do the same.

Thanks to Call Centre IQ for the article. 

Avoid the VM Blackout: A Guide to Effective Server Monitoring

When it comes to IT services, business value and user satisfaction are both dependent upon the server, network, and applications all working together seamlessly.

Failure to adequately monitor each of these and their interactions means that you could be flying blind – susceptible to degraded service levels.

While application and network monitoring receive a lot of the attention, it is important to also understand what’s going on with the server.

Virtualization changes the face of service delivery

The environment in which modern services run is complex. Superficially, it appears as though we’ve traveled back to the 1960s, with data centers again appearing like big monolithic constructs (whether cloud or internally hosted) with highly-virtualized server farms connecting through large core networks.

The emergence of virtualized clients (with most computing done remotely) takes the analogy a step further and makes it feel as if we are on the set of “Mad Men” with the old dumb terminals connected to the mainframe.

But that may be where the analogy ends. Today’s IT service delivery is almost never performed in a homogeneous vendor setting—from a hardware or software perspective. Likewise, the diversity of complex multi-tier applications and methods by which they are accessed continues to proliferate.

To learn more, download the white paper.

Avoid the VM Blackout: A Guide to Effective Server Monitoring

 

Thanks to Network Instruments for the article.

The Balancing Act: Education

The Balancing Act Series will delve into how Sapling’s synchronized clock systems can help professionals in several different industries better balance their daily tasks.

Educational facilities can become chaotic at times with all of the different activities transpiring throughout the day. Maintaining a certain level of organization can be challenging for the faculty and staff of any school. Daily operations become one big balancing act for the staff, and even for the students. Teachers in a high school, for example, must balance all of their daily tasks in order to remain on schedule.

Being efficient in time management is a huge asset that can contribute to a much smoother work week for an educator. Teachers must construct lesson plans, maintain regular office hours, set deadlines, and grant extensions, among many other tasks. Balancing these requirements can be a wearying task, and in order to complete them, an effective time management strategy must be deployed. This begins with the installation of a synchronized clock system.

A synchronized time system is a system of clocks within a facility receiving accurate time updates from a master clock. The master clock receives the updated time from an NTP server or GPS and distributes it among the secondary clocks. Sapling manufactures different types of innovative synchronized clock systems, including Wired, Wireless, IP and TalkBack. Sapling’s systems are seen in schools across the country, and assist in helping educators balance their workload every day.
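The master/secondary relationship described above can be modeled in a few lines. This is a toy illustration of the distribution pattern only, not Sapling's actual protocol:

```python
import datetime

class MasterClock:
    """Toy model: a master clock obtains time from a reference (e.g. NTP or GPS)
    and pushes it to every registered secondary clock."""
    def __init__(self):
        self.secondaries = []

    def register(self, clock):
        self.secondaries.append(clock)

    def sync(self, reference_time):
        for clock in self.secondaries:
            clock.set_time(reference_time)

class SecondaryClock:
    def __init__(self, name):
        self.name = name
        self.current = None

    def set_time(self, t):
        self.current = t

master = MasterClock()
rooms = [SecondaryClock(f"room-{i}") for i in range(3)]
for c in rooms:
    master.register(c)

master.sync(datetime.datetime(2015, 4, 30, 8, 0, 0))
print(all(c.current == rooms[0].current for c in rooms))  # True
```

The point of the pattern: every secondary shows the same time because none of them keeps its own authoritative time.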

A synchronized clock system from Sapling will ensure all of the bells in a facility are activated at the same time. This will help minimize late students, which can lead to easier transitions for teachers. With the entire school on the same, synchronized time, teachers can meet with students punctually during office hours. Synchronized clocks can lead to better time management for teaching professionals, which can help them to delegate their tasks efficiently and balance their hefty workload with more simplicity.

Sapling’s synchronized clock systems have been helping teachers balance their daily operations since the company’s inception over two decades ago. Sapling’s innovative clock systems have made the company a pioneer in the clock industry.

Thanks to Sapling for the article. 

Ixia Study Finds That Hidden Dangers Remain within Enterprise Network Virtualization Implementations

Ixia (Nasdaq: XXIA), a leading provider of application performance and security resilience solutions, announced global survey results demonstrating that while most companies believe virtualization technology is a strategic priority, there are clear risks that need to be addressed. Ixia surveyed more than 430 targeted respondents in South and North America (50 percent), APAC (26 percent) and EMEA (24 percent).

The accompanying report, titled The State of Virtualization for Visibility Architecture™ 2015, highlights key findings from the survey, including:

  • Virtualization technology could create an environment for hidden dangers within enterprise networks. When asked about top virtualization concerns, over one third of respondents said they were concerned with their ability (or lack thereof) to monitor the virtual environment. In addition, only 37 percent of the respondents noted they are monitoring their virtualized environment in the same manner as their physical environment. This demonstrates that there is insufficient monitoring of virtual environments. At the same time, over two-thirds of respondents are using virtualization technology for their business-critical applications. Without proper visibility, IT is blind to any business-critical east-west traffic that is being passed between the virtual machines.
  • There are knowledge gaps regarding the use of visibility technology in virtual environments. Approximately half of the respondents were unfamiliar with common virtualization monitoring technology – such as virtual taps and network packet brokers. This finding indicates an awareness gap about the technology itself and its ability to alleviate concerns around security, performance and compliance issues. Additionally, less than 25 percent have a central group responsible for collecting and monitoring data, which raises the likelihood of inconsistent and improper monitoring.
  • Virtualization technology adoption is likely to continue at its current pace for the next two years. Almost 75 percent of businesses are using virtualization technology in their production environment, and 65 percent intend to increase their use of virtualization technology in the next two years.
  • Visibility and monitoring adoption is likely to continue growing at a consistent pace. The survey found that a large majority (82 percent) agree that monitoring is important. While 31 percent of respondents indicated they plan on maintaining current levels of monitoring capabilities, nearly 38 percent of businesses plan to increase their monitoring capabilities over the next two years.

“Virtualization can bring companies incredible benefits – whether in the form of cost or time saved,” said Fred Kost, Vice President of Security Solutions Marketing, Ixia. “At Ixia, we recognize the importance of this technology transformation, but also understand the risks that are involved. With our solutions, we are able to give organizations the necessary visibility so they are able to deploy virtualization technology with confidence.”

Download the full research report here.

Ixia's The State of Virtualization for Visibility Architectures 2015

Thanks to Ixia for the article.

Be Ready for SDN/NFV with StableNet®

Software Defined Networking (SDN) and Network Functions Virtualization (NFV) are two terms that have garnered a great deal of attention in the Telco market over the last couple of years. However, before actually adopting SDN/NFV, several challenges have to be mastered. This article discusses those challenges and explains why StableNet® is the right solution to address them.

SDN and NFV are both very promising approaches. The main objectives are to increase the flexibility of the control and to reduce costs by moving from expensive special purpose hardware to common off-the-shelf devices. SDN enables the separation of the control plane and data plane, which results in better control plane programmability, flexibility, and much lower costs in the data plane. NFV is similar but differs in detail. This concept aims at removing network functions from inside the network and placing them in central locations, such as datacenters.
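The control/data-plane split at the heart of SDN can be illustrated with a toy model: a centralized controller computes match→action rules and pushes them down, while the switch does nothing but table lookups. The class names and rule format below are invented for illustration; real deployments would use a protocol such as OpenFlow.

```python
class Switch:
    """Data plane: a dumb flow table, no local decision logic."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match field (destination) -> action

    def install_rule(self, match, action):
        self.flow_table[match] = action

    def forward(self, packet):
        # Unmatched packets are punted to the controller for a decision.
        return self.flow_table.get(packet["dst"], "send_to_controller")

class Controller:
    """Control plane: centralized policy, pushed down to switches."""
    def push_policy(self, switch, policy):
        for dst, port in policy.items():
            switch.install_rule(dst, f"out_port:{port}")

sw = Switch("edge-1")
Controller().push_policy(sw, {"10.0.0.2": 1, "10.0.0.3": 2})
print(sw.forward({"dst": "10.0.0.2"}))  # out_port:1
print(sw.forward({"dst": "10.9.9.9"}))  # send_to_controller
```

Note how all the programmability lives in the controller; the switch stays cheap and generic, which is precisely the cost argument made above.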

Six things to think about before implementing SDN/NFV

The benefits of SDN and NFV seem evident. Both promise to increase flexibility and reduce cost. The idea of common standards seems to further ease the configuration and handling of an SDN or NFV infrastructure. However, our experience shows that with the decision to “go SDN or NFV” a lot of new challenges arise. Six of the major ones – far from a complete list – are addressed in the following:

1. Bootstrapping/“underlay” configuration:

How to get the SDN/NFV infrastructure set up before any SDN/NFV protocols can actually be used?

2. Smooth migration:

How to smoothly migrate from an existing environment while continuously assuring the management of the entire system consisting of SDN/NFV and legacy elements?

3. Configuration transparency and visualization:

How to assure that configurations via an abstracted northbound API are actually realized on the southbound API in the proper and desired way?

4. SLA monitoring and compliance:

How to guarantee that the improved application awareness expected from SDN and NFV really brings the expected benefits? How to objectively monitor the combined SDN/NFV and legacy network, its flows, services and corresponding KPIs? How to show that the expected benefits have been realized and quantify that to justify the SDN/NFV migration expenses?

5. Competing standards and proprietary solutions:

How to orchestrate different standardized northbound APIs as well as vendor-specific flavors of SDN and NFV?

6. Localization of failures:

How to locate failures and their root cause in a centralized infrastructure without any distributed intelligence?

Almost certainly, some or even all of these challenges are relevant to any SDN/NFV use case. Solutions need to be found before adopting SDN/NFV.
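Challenge 4 (SLA monitoring) is a good example of what such a solution has to automate: collect KPI samples and check them objectively against a target. A minimal sketch, assuming a 95th-percentile latency target as the KPI (the threshold and data are illustrative):

```python
def p95(samples):
    """Nearest-rank style 95th percentile of a list of samples."""
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

def sla_compliant(latencies_ms, target_ms=50.0):
    """An SLA stated as '95% of requests under target_ms' reduces to one comparison."""
    return p95(latencies_ms) <= target_ms

latencies = [12, 15, 18, 22, 25, 30, 33, 41, 47, 120]
print(p95(latencies), sla_compliant(latencies))  # 47 True
```

The one outlier (120 ms) does not break the SLA here, which is exactly why percentile KPIs are preferred over maxima for compliance reporting.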

StableNet® SDN/NFV Portfolio – Get Ready to Go SDN/NFV

StableNet® is a fully integrated 4-in-1 solution in a single product and data structure, which includes Configuration, Performance, and Fault Management, as well as Network and Service Discovery. By integrating SDN and NFV, StableNet® offers the following benefits:
  • Orchestration of large multi-vendor SDN/NFV and legacy environments – StableNet® is equipped with a powerful and highly automated discovery engine. Besides its own enhanced network CMDB, it offers inventory integration with third-party CMDBs. Furthermore, StableNet® supports over 125 different standardized and vendor-specific interfaces and protocols. Altogether, this leads to an ultra-scalable unified inventory for legacy and SDN/NFV environments.
  • KPI measurements and SLA assurance – The StableNet® Performance Module offers holistic service monitoring on both server and network levels. It thereby combines traditional monitoring approaches, such as NetFlow, with new SDN and NFV monitoring approaches. A powerful script engine allows you to configure sophisticated end-to-end monitoring scripts. The availability of cheap Plug & Play StableNet® Embedded Agents further simplifies the distributed measurement of a service. Altogether, this makes it possible to measure all the necessary KPIs of a service and to assure its SLA compliance.
  • Increased availability and mitigation of failures in mixed environments – The StableNet® Fault and Impact Modules with the SDN extension combine a device-based automated root cause analysis with a service-based impact analysis to provide service assurance and fulfillment.
  • Automated service provisioning, including SDN/NFV – StableNet® offers an ultra-scalable, automated change management system. An integration with SDN northbound interfaces adds the ability to configure SDN devices. Support for various standardized and vendor-specific virtualization solutions paves the way for NFV deployments. StableNet® also offers options to help keep track of changes made to the configuration or to check for policy violations and vulnerabilities.

StableNet® Service Workflow – Predestined for SDN/NFV

The increasing IT complexity in today’s enterprises increasingly demands a holistic, aggregated view of the services in a network, including all involved entities, e.g. network components, servers and user devices.

The availability of this service view, including SDN/NFV components, facilitates different NMS tasks, such as SLA monitoring or NCCM.

The definition, rollout, monitoring, and analysis of services are an integral part of the Service Workflow offered by StableNet®. This workflow (see Figure 1) is also predestined to ease the management of SDN and NFV infrastructures.

Figure 1: StableNet® Service WorkFlow – predestined for SDN/NFV management


Trend towards “Virtualize Everything”

Besides SDN and NFV adoption trending upwards, there is also an emerging trend that stipulates “virtualize everything”. Virtualizing servers, software installations, network functions, and even the network management system itself leads to the largest economies of scale and maximum cost reductions.

StableNet® is fully ready to be deployed in virtualized environments or as a cloud service. An excerpt of the StableNet® Management Portfolio is shown in Figure 2.

Figure 2: StableNet® Management Portfolio – Excerpt


Thanks to InterComms for the article. 

Solving 3 Key Network Security Challenges


With high profile attacks from 2014 still fresh on the minds of IT professionals and almost half of companies being victims of an attack during the last year, it’s not surprising that security teams are seeking additional resources to augment defenses and investigate attacks.

As IT resources shift to security, network teams are finding new roles in the battle to protect network data. To be an effective asset in the battle, it’s critical to understand the involvement and roles of network professionals in security as well as the 3 greatest challenges they face.

Assisting the Security Team

The recently released State of the Network Global Study asked 322 network professionals about their emerging roles in network security. Eighty-five percent of respondents indicated that their organization’s network team was involved in handling security. Not only have network teams spent considerable time managing security issues but the amount of time has also increased over the past year:

  • One in four spends more than 10 hours per week on security
  • Almost 70 percent indicated time spent on security has increased


Roles in Defending the Network

With several tasks drawing responses from more than 50 percent of respondents, the majority of network teams are involved with many security-related tasks. The top two roles for respondents – implementing preventative measures (65 percent) and investigating security breaches (58 percent) – mean they are working closely with security teams on handling threats both proactively and after the fact.


3 Key Security Challenges

Half of respondents indicated the greatest security challenge was an inability to correlate security and network performance. This was followed closely by an inability to replay anomalous security issues (44 percent) and a lack of understanding to diagnose security issues (41 percent).


The Packet Capture Solution

These three challenges point to an inability of the network team to gain context to quickly and accurately diagnose security issues. The solution lies in the packets.

  • Correlating Network and Security Issues

Within performance management solutions like Observer Platform, utilize baselining and behavior analysis to identify anomalous client, server, or network activities. Additionally, viewing top talkers and bandwidth utilization reports can identify whether clients or servers are generating unexpectedly high amounts of traffic indicative of a compromised resource.
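The baselining idea is straightforward: learn each host's normal traffic level, then flag anything far outside it. A toy sketch using a mean-plus-k-standard-deviations threshold (the sample data and threshold are illustrative assumptions, not Observer Platform's actual method):

```python
import statistics

def flag_anomalies(byte_counts, history, k=3.0):
    """Flag hosts whose current traffic exceeds baseline mean + k * stdev."""
    flagged = []
    for host, current in byte_counts.items():
        past = history[host]
        baseline = statistics.mean(past)
        spread = statistics.pstdev(past)
        if current > baseline + k * spread:
            flagged.append(host)
    return flagged

history = {"10.0.0.5": [100, 110, 95, 105], "10.0.0.9": [200, 190, 210, 205]}
current = {"10.0.0.5": 104, "10.0.0.9": 2000}
print(flag_anomalies(current, history))  # ['10.0.0.9']
```

A host suddenly sending ten times its baseline traffic is exactly the kind of "unexpectedly high amount" the paragraph above describes as indicative of a compromised resource.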

  • Replaying Issues for Context

The inability to replay and diagnose security issues points to long-term packet capture being an under-utilized resource in security investigations. Replaying captured events via retrospective analysis appliances like GigaStor provides full context to identify compromised resources, exploits utilized, and occurrences of data theft.

As network teams are called upon to assist in security investigations, effective use of packet analysis is critical for quick and accurate investigation and remediation. Learn from cyber forensics investigators how to effectively work with security teams on threat prevention, investigations, and cleanup efforts at the How to Catch a Hacker Webinar. Our experts will uncover exploits and share top security strategies for network teams.

Thanks to Network Instruments for the article.

The Many Faces of Sapling Analog Clocks


When equipping a Sapling synchronized time system, a customer has many choices to make. Sapling offers four systems, each with a multitude of options to choose from. After choosing a system, the next step is to decide whether you want analog clocks, digital clocks, or a combination of the two. Analog clocks are an excellent choice for many different types of facilities. Sapling offers analog clocks in different models for each of our reliable systems. The variety of systems equates to a need for different power sources; our analog clocks can be powered locally, over Ethernet, or by battery.

Depending on the owner’s preferences, Sapling analog clocks can vary quite a bit in terms of appearance. Possibly the most visibly obvious choice a customer must make when ordering an analog clock is shape. Sapling offers all of their analog clocks in both round and square options. Round clocks are available in two sizes: 12″ and 16″. Square clocks are available in two sizes: 9″ and 12″. Aside from size and shape, Sapling also offers design options for analog clocks. Customers can choose from two standard dials or six specialty dials. Sapling also offers the option of a custom dial, which allows the customer to have a name, logo, or image of their choice printed on the dial. In terms of color, Sapling offers black and white analog clocks, with the option of color customization available. The last customization option for Sapling analog clocks is the hands. Sapling offers three specialty hand options in addition to the standard hands.

The availability of custom and specialty options gives you the power to make your system all your own. Our quality analog clocks will look great in any facility, while providing the accuracy Sapling has become synonymous with. Keeping appearance and accuracy ingeniously in sync, that’s just part of The Sapling Advantage!

Thanks to Sapling for the article.

Rogers’ Q1 Mobile, Internet Gains Make up for TV, Landline Declines

Canadian quadruple-play operator Rogers Communications reports that its consolidated revenue increased 5% year-on-year in the first quarter of 2015 to CAD3.175 billion (USD2.599 billion), reflecting revenue growth of 4% in Wireless, 1% in Cable, and 26% in Media, with stable revenue in Business Solutions. Wireless turnover increased as a result of higher network revenue from the continued movement of customers to LTE, and the adoption of higher average revenue per user (ARPU) generating ‘Share Everything’ plans, as well as greater smartphone sales. Cable revenue was relatively stable as continued internet revenue growth was offset by decreased revenue from pay-TV and residential fixed line telephony.

Thanks to TeleGeography for the article.

Application Intelligence Supercharges Network Security

I was recently at a conference where the topic of network security came up again, like it always does. It seems like there might be a little more attention on it now, not really due to the number of breaches—although that plays into it a little—but more because companies are being held accountable for allowing the breaches. Examples include Target (where both the CIO and CEO were fired over the 2013 breach) and the fact that the FCC and FTC are fining companies (like YourTel America, TerraCom, Presbyterian Hospital, and Columbia University) that allow a breach to compromise customer data.

This is an area where application intelligence could be used to help IT engineers. Just to be clear, application intelligence won’t fix ALL of your security problems, but it can give you additional and useful information that was very difficult to ascertain before now. For those who haven’t heard of application intelligence, this technology is available through certain network packet brokers (NPBs). It’s an extended functionality that allows you to go beyond Layer 2 through 4 (of the OSI model) packet filtering to reach all the way into Layer 7 (the application layer) of the packet data.

The benefit here is that rich data on the behavior and location of users and applications can be created and exported in any format needed—raw packets, filtered packets, or NetFlow information. IT teams can identify hidden network applications, mitigate network security threats from rogue applications and user types, and reduce network outages and/or improve network performance due to application data information.

In short, application intelligence is basically the real-time visualization of application level data. This includes the dynamic identification of known and unknown applications on the network, application traffic and bandwidth use, detailed breakdowns of applications in use by application type, and geo-locations of users and devices while accessing applications.

Distinct signatures for known and unknown applications can be identified, captured, and passed on to specialized monitoring tools to provide network managers a complete view of their network. The filtered application information is typically sent on to third-party monitoring tools (e.g., Plixer or Splunk) as NetFlow information but could also be consumed through a direct user interface in the NPB. The benefit of sending the information to third-party monitoring tools is that it often gives them more granular, detailed application data than they would have otherwise, improving their efficiency.

With the number of applications on service provider and enterprise networks rapidly increasing, application intelligence provides unprecedented visibility to enable IT organizations to identify unknown network applications. This level of insight helps mitigate network security threats from suspicious applications and locations. It also allows IT engineers to spot trends in application usage which can be used to predict, and then prevent, congestion.

Application intelligence effectively allows you to create an early warning system for real-time vigilance. In the context of improving network security, application intelligence can provide the following benefits:

  • Identify suspicious/unknown applications on the network
  • Identify suspicious behavior by correlating connections with geography and known bad sites
  • Identify prohibited applications that may be running on your network
  • Proactively identify new user applications consuming network resources


A core feature of application intelligence is the ability to quickly identify ALL applications on a network. This allows you to know exactly what is or is not running on your network. The feature is often an eye opener for IT teams, who are surprised to find out that there are actually applications on their network they knew nothing about. Another key feature is that all applications are identified by a signature. If the application is unknown, a signature can be developed to record its existence. Investigating these unknown application signatures should be the first step in IT threat detection procedures, so that you can identify any hidden/unknown network applications and user types. The ATI Processor correlates applications with geography, and can identify compromised devices and malicious activities such as Command and Control (CNC) communications from malicious botnet activities.

A second feature of application intelligence is the ability to visualize the application traffic on a world map for a quick view of traffic sources and destinations. This allows you to isolate specific application activity by granular geography (country, region, and even neighborhood). User information can then be correlated with this information to further identify and locate rogue traffic. For instance, maybe there is a user in North Korea that is hitting an FTP server in Dallas, TX and transferring files off network. If you have no authorized users in North Korea, this should be treated as highly suspicious. At this point, you can then implement your standard security protocols—e.g., kill the application session immediately, capture origin and destination information, capture file transfer information, etc.
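The North Korea example above reduces to a simple filter once application sessions carry a source-country attribute; a sketch, with an assumed authorized-country policy and session format:

```python
# Toy geo-correlation: flag sessions originating from countries where the
# organization has no authorized users. The policy set is an assumption.
AUTHORIZED_COUNTRIES = {"US", "CA"}

def suspicious_sessions(sessions):
    """Return sessions whose source country falls outside the authorized set."""
    return [s for s in sessions if s["src_country"] not in AUTHORIZED_COUNTRIES]

sessions = [
    {"src_country": "US", "app": "HTTPS", "dst": "web-1"},
    {"src_country": "KP", "app": "FTP",  "dst": "ftp-dallas"},
]
flagged = suspicious_sessions(sessions)
print([s["app"] for s in flagged])  # ['FTP']
```

Once flagged, such a session would feed whatever response your standard security protocols dictate—kill the session, capture origin/destination details, and so on.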

Another way of using application intelligence is to audit your network policies and usage of those policies. For instance, maybe your official policy is for employees to use Outlook for email. All inbound email traffic is then passed through an anti-viral/malware scanner before any attachments are allowed entry into the network. With application intelligence, you would be able to tell if users are following this policy or whether some are using Google mail and downloading attachments directly through that service, which is bypassing your malware scanner. Not only would this be a violation of your policies, it presents a very real threat vector for malware to enter your network and commence its dirty work.
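The same session data supports this kind of policy audit: compare each flow's identified application against the approved list for its category. A toy sketch of the email-policy check described above (the application labels and flow fields are illustrative assumptions):

```python
# Mail traffic should go through the scanned Outlook/Exchange path;
# any other mail application bypasses the malware scanner.
APPROVED_MAIL_APPS = {"outlook", "exchange"}

def policy_violations(flows):
    """Return mail flows that use an unapproved application."""
    return [f for f in flows
            if f["category"] == "mail" and f["app"] not in APPROVED_MAIL_APPS]

flows = [
    {"app": "outlook", "category": "mail",  "user": "alice"},
    {"app": "gmail",   "category": "mail",  "user": "bob"},
    {"app": "ssh",     "category": "admin", "user": "carol"},
]
print([f["user"] for f in policy_violations(flows)])  # ['bob']
```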

Ixia’s Application and Threat Intelligence (ATI) Processor brings intelligent functionality to the network packet broker landscape with its patent-pending technology that dynamically identifies all applications running on a network. The Ixia ATI Processor is a 48 x 10GE interface card that can be used standalone in a compact 1 rack unit high chassis or within an Ixia Net Tool Optimizer (NTO) 7300 network packet broker (NPB) for a high port density option.

As new network security threats emerge, the ATI Processor helps IT improve their overall security with better intelligence for their existing security tools. To learn more, please visit the ATI Processor product page or contact us to see a demo!


Thanks to Ixia for the article.

Infosim® Global Webinar Day April 30th, 2015- StableNet® Embedded Agent (SNEA) Cloud Solution


Are footprint size, price, scalability and ease of deployment causing disruptions in your monitoring strategy?

Then you need to join Marius Heuler, CTO of Infosim®, for a Webinar and Live Demo on the “StableNet® Embedded Agent (SNEA) Cloud Solution”.

We will introduce the StableNet® Embedded Agent, which has the smallest footprint in the market. We are talking about a device that will literally fit in your shirt pocket! We offer it at the best cost-performance ratio in the market! One of our customers is running more than 50,000 of these in a pan-European installation, so this solution is most definitely the most scalable in the market! The SNEA is delivered preconfigured, which leads to a zero touch installation and deployment on the end customer’s side.

This Webinar will provide insight into:

  • SNEA technical foundation
  • SNEA hosted cloud solution
  • SNEA use cases
  • StableNet® Embedded Agent Cloud Solution (Live Demo)

Register today and reserve your seat in the desired timezone:

AMERICAS, Thursday, April 30th, 3:00 pm – 3:30 pm EDT (GMT-4)

A recording of this Webinar will be available to all who register!

(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.