Shomi The Money? Rogers, Shaw Form Stream Team

Rival Canadian cablecos Rogers Communications and Shaw Communications have announced that they are jointly launching an online streaming TV and film subscription service named ‘shomi’. The service will arrive in a beta version in November, available at first exclusively to Rogers and Shaw customers for CAD8.99 (USD8.24) per month. shomi’s catalogue will initially include more than 14,000 episodes across 340 TV shows, plus 1,200 movies, the operators announced; the price is slightly above the current monthly subscription fee for the popular streaming service Netflix in Canada. Rogers already offers its subscribers ‘Rogers on Demand’ content, while Shaw similarly offers the on-demand ‘Global Go’ service, but the companies stress that shomi will carry significantly different content.

Thanks to TeleGeography for the article.

Observer Infrastructure: Adding The Device Performance Perspective

In April 2010, Network Instruments® announced the availability of Observer Infrastructure (OI), an integral element of its Observer® performance management solution, which complements the company’s primary packet-oriented network and application performance monitoring products with an infrastructure viewpoint. An evolutionary replacement for Network Instruments Link Analyst™, OI has been significantly expanded to include network infrastructure device health monitoring, IP SLA active testing, NBAR and WAAS data collection, and server monitoring. These new performance data sources are tightly integrated into a common console and reporting engine, Observer Reporting Server, providing quick and efficient workflows for navigating between the packet and polling realms.

Issues

As network technologies have matured and architectural redundancies have improved availability, the focus of networking professionals has turned toward performance and optimization. Along with that shift comes a change in the types of issues demanding attention (away from failures and towards degradations) plus a change in scope (away from network device specifics and towards application and service awareness). Network performance management is the discipline of planning, monitoring, and managing networks to assure that they are delivering the applications and services which customers and end users consume and which underpin business processes. A high-performing network, managed in a way which is business-aware, can become a strategic asset to businesses of all sizes and purposes, and hence operations must also move from reactive firefighting of performance issues towards proactive prevention via methodical and vigilant monitoring and analysis.

Network performance management has been an active area of practice for decades. Initial efforts focused primarily on the health and activity of each individual network device, mostly using SNMP MIBs, both proprietary and standardized, collected by periodic polling. This approach was supplemented by now-obsolete standards such as RMON, which provided views into traffic statistics on an application-by-application basis. Today, additional techniques for measuring various aspects of network performance have been established and are in broad use:

  • Synthetic agents provide samples and tests of network throughput and efficiency
  • Direct-attached probes inspect packets to track and illuminate performance across the entire stack
  • Flow records issued by network infrastructure devices record traffic details

So which one is the best? And which ones are important for achieving best practices for business-aware, proactive performance management? In the end, no single method meets all the needs. The best approach is to integrate multiple techniques and viewpoints into a common operational monitoring and management platform.

Building the Complete Performance Management Picture

Making the transition from reactive to proactive and from tactical to strategic in the long term requires the assembly of a complete performance management picture. And as with any journey, there are options for where to start and how to get there. Most practitioners start with a focus on the network infrastructure devices by adding basic health monitoring to their fault/event/alert regime, but find it insufficient for troubleshooting. Others will deploy packet monitoring, which provides application awareness together with network-layer details and definitive troubleshooting, but find that collecting packets everywhere is difficult to achieve. Still others will look to NetFlow to give them insights into network activity, or perhaps deploy synthetic agents to give them the 24×7 coverage for assuring critical applications or services, but these approaches have their shortcomings as well.

Where you start may not be as important as where you end up. Each measurement technique has something important to add to the operations picture:

  • SNMP and/or WMI polling gives you details about the specific health and activity within an individual network device or node – important for device-specific troubleshooting and for capacity planning (a minimal polling sketch follows this list).
  • SNMP can also be used to gather specific flow-oriented performance metrics from devices that offer application recognition for optimization and security, such as Cisco’s WAAS (Wide Area Application Services) solution and NBAR (Network-Based Application Recognition) features.
  • Agent-based active or synthetic testing, such as the IP SLA features resident in Cisco network devices, enables regular/systematic assessment of network responsiveness as well as application performance and VoIP quality.
  • Packet inspection, either real-time or historical/forensic, is the ultimate source of detail and truth, revealing traffic volumes and quality of delivery across all layers of the delivery stack, and is indispensable for sorting out the really tough, subtle, or intermittent degradation issues.
  • NetFlow (or other similar flow record formats) provides application activity data where direct packet instrumentation is not available/practical.
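
To make the first of these techniques concrete, here is a minimal sketch of an SNMP health poller. It assumes the open-source pysnmp library (4.x “hlapi” API) and a hypothetical device at 192.0.2.1 with a “public” read community; a production poller would cover many devices and MIBs, handle counter wrap, and feed thresholds and trend reports.

```python
# Minimal SNMP polling sketch - assumes pysnmp and a hypothetical device.
import time
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

DEVICE = ("192.0.2.1", 161)   # hypothetical managed device
IF_INDEX = 1                  # interface to watch
POLL_SECONDS = 60             # typical polling interval

def poll_in_octets() -> int:
    """Fetch IF-MIB ifInOctets for one interface via an SNMP GET."""
    error_indication, error_status, _, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData("public"),
               UdpTransportTarget(DEVICE),
               ContextData(),
               ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", IF_INDEX)))
    )
    if error_indication or error_status:
        raise RuntimeError(f"SNMP poll failed: {error_indication or error_status}")
    return int(var_binds[0][1])

last = poll_in_octets()
while True:
    time.sleep(POLL_SECONDS)
    current = poll_in_octets()
    # The counter delta over the interval gives average input throughput.
    rate_bps = (current - last) * 8 / POLL_SECONDS
    print(f"inbound rate: {rate_bps:.0f} bit/s")
    last = current
```

It is exactly this kind of counter delta, taken at regular intervals, that turns raw MIB values into the utilization trends used for capacity planning.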

Ultimately, the better integrated these data sources and types, the more powerful the solution. Integration must take place at multiple levels as well. At the presentation/analysis layer, bringing together multiple types of performance data improves visibility in terms of both breadth and depth. At the data/model layer, integration allows efficient workflows, and more intelligent (and even automated) analysis by revealing trends, dependencies, and patterns of indicators that must otherwise be reconciled manually.

Network Instruments Observer Infrastructure

Network Instruments has introduced Observer Infrastructure (OI) to extend its network and application performance management solution by tightly integrating device-based performance data at both the data and presentation layers. OI adds the infrastructure performance perspective to the packet-based and NetFlow-based capabilities of the existing Observer solution. It also contributes IP SLA support, as well as support for other complementary SNMP-gathered data sets such as Cisco’s NBAR and WAAS features. OI goes further, delivering visibility into virtualized servers via WMI, at both the virtual machine and hypervisor levels. Another new capability is OI’s support for distributed polling and collection, allowing the infrastructure perspective to be applied across large, distributed managed environments.

Key implications of the newly enhanced Observer Infrastructure solution include:

  • Faster, more effective troubleshooting via complementary viewpoints of performance within enabling or connected infrastructure elements.
  • Better planning capabilities, allowing engineers to match capacity trends with specific details of underlying traffic contributors and drivers.
  • More proactive stance, by leveraging synthetic IP SLA tests to assess delivery quality and integrity on a sustained basis.
  • Improved scalability of the total Observer solution, via the newly distributed OI architecture.

EMA Perspective

ENTERPRISE MANAGEMENT ASSOCIATES® (EMA™) analysts strongly advocate the use of a broad and balanced approach to network and application performance management, drawing from the unique and valuable contributions of several types of performance monitoring instrumentation strategies. In parallel, data from such hybrid monitoring architectures must be integrated into presentation and workflows in a way that facilitates operational efficiency and effectiveness and paves the way to proactive practices.

Network Instruments has a well-established footprint in packet-based solutions and substantial experience with infrastructure monitoring. The newly released Observer Infrastructure is an evolutionary product, based on stable, long-fielded technologies, which has been expanded in functionality and more tightly integrated with the rest of the Network Instruments Observer solution, including presentation through Observer Reporting Server and workflow links to GigaStor™. The result is a powerful expansion of the Observer solution, both for existing customers and for any network engineering and operations team looking for a comprehensive, holistic approach to network and application performance management.

Thanks to Network Instruments for the article.

IQ Services Delivers Successful Results to Financial Services Company

A pre-eminent financial services firm provides products for customers around the world. Dedicated to serving clients’ financial needs, the firm is committed to delivering the best customer experience possible. Its commitment doesn’t stop with clients, but extends to improving the lives of individuals in communities around the world.

A Business Challenge

The customer was interested in implementing a web services-based self-service platform to bring together web services and IP telephony in all of its contact centers worldwide. However, the company’s telecommunications systems and contact centers used a Linux server configuration that differed from the Linux platform certified for the proposed platform. With a minimum requirement of 10,000 busy hour call completions (BHCC) – roughly 2.8 completed calls per second, sustained – the company needed confirmation that the proposed platform could handle the volume generated in its speech recognition-enabled, web services environment.

Key Capabilities of the Solution

The customer and its solution provider turned to IQ Services to demonstrate the solution’s performance capabilities with real calls.

The team worked with IQ Services to sketch out a plan and performance testing objectives to simulate real-time calling patterns in a lab environment. The plan also provided opportunities to tune the overall implementation during testing, if required, to reach the production load conditions of 10,000 BHCC.

Seamless Transition to a New System

Working with the customer, IQ Services developed a remote testing plan to simulate production-level traffic in the test environment. The test would use key components of a typical production system implementation – PSTN access, call control, interactive voice response (IVR), speech recognition and web services – all controlled by the company’s proposed platform application. By gradually increasing traffic, the plan allowed the customer to observe the integrated solution’s performance as load grew, tuning component configurations as required to meet the test objectives.
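
For illustration, the arithmetic behind such a ramp is straightforward. The sketch below assumes a five-step ramp and a 90-second average call – values chosen for the example, not details of the engagement – and uses Little’s law to estimate concurrent calls at each stage.

```python
# Load-ramp arithmetic sketch - step count and call duration are assumptions.
TARGET_BHCC = 10_000          # busy hour call completions to prove
STEPS = 5                     # ramp stages before full load
AVG_CALL_SECONDS = 90         # assumed average IVR call duration

target_cps = TARGET_BHCC / 3600.0   # ~2.78 completed calls per second
for step in range(1, STEPS + 1):
    cps = target_cps * step / STEPS
    # Little's law: concurrent calls = arrival rate * average duration
    concurrent = cps * AVG_CALL_SECONDS
    print(f"step {step}: {cps:.2f} calls/s, ~{concurrent:.0f} concurrent calls")
```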

Once testing began, IQ Services provided controlled call traffic into the solution and digitally recorded each telephone call end-to-end. The recordings allowed quick identification of issues, which were researched by a member of the technical support team and either resolved or logged for later follow-up. In addition, IQ Services provided real-time test results to the financial services firm online and via a test conference bridge, keeping the customer apprised of issues and allowing swift resolution of unsuccessful test events.

Ultimately, the solution required three rounds of performance testing. Insight gained during the first two rounds was used to iteratively tune the integrated solution and prepare for the third and final test. The last test verified that the uncertified Linux platform supported the proposed platform for production traffic of 10,000 BHCC, allowing the customer to move ahead with its worldwide deployment.

“By testing the end-to-end solution with IQ Services, we gained the confidence we needed to take our preferred solution to market,” said the customer’s technical leader. “The testing confirmed that the integration was a success.”

Benefits for the Customer

The flexible, responsive test implementation and scheduling allowed the customer and its solution partner to establish and meet their unique test objectives, performing tests on short notice whenever needed. In addition, the company received:

  • Assurance to implement the proposed platform solution worldwide
  • Documented solution performance test results of a minimum 10,000 BHCC
  • Actionable data gathered from initial and secondary testing efforts, leading to faster issue resolution and reduced effort and cost
  • Verification that the proposed platform performed as desired on its preferred Linux platform

Thanks to IQ Services for the article.

Why Network Monitoring Is Changing

IT needs end-to-end visibility, meaning tool access to any point in the physical and virtual network, and it has to be scalable and easy to manage, says Jason Echols, IXIA

Networking is a rapidly evolving and changing landscape. We’ve quickly moved from parsing bits and bytes from workstation to workstation to delivering powerful applications and services to millions of consumers. Speed and bandwidth requirements have grown exponentially, new traffic types appear daily, and a functioning network is now a crucial and necessary part of running a successful business, large or small, in any market. The portion of traffic residing within the data center will remain the majority throughout the forecast period, accounting for 76 per cent of data center traffic, and data center traffic on a global scale will grow at a 25 per cent CAGR.

At the same time that technologies like cloud and LTE are proliferating, the number of applications, both beneficial and malicious, is rapidly increasing, placing further demands upon service providers and large enterprises. These organizations must maintain constant vigilance so they can respond in real time to eliminate network blind spots, identify hidden network applications, and mitigate the security threats posed by rogue applications and users.

Rapid evolution of the data center creates both urgent challenges and outstanding revenue opportunities. Of course, these vary according to the organisation and its unique goals and concerns. However, all IT teams have to deal with the proliferation of devices from multiple vendors, threats in every form, and scalability needs.

As business networks continue to respond to user demands for access to more data, BYOD and the Internet of Things, a new chapter has opened for IT personnel. Some 89 per cent of global IT professionals report personal devices on corporate networks, just over 10 per cent of organizations are “fully aware” of all the devices accessing their network, and 1 in 4 employed adults has been a victim of malware or hacking on a personal device. These numbers are staggering and alarming. While much of the traffic that runs through service provider and enterprise networks is stateful and application-based, access to application and user data has been costly and often lacking. Simply looking at layers 2-4 of the OSI model no longer provides deep insight into the character of the traffic. While layer 2-4 data continues to have value, to really understand your network infrastructure and how to respond to customer demands, you need to see what applications are running and look at performance artifacts at the application layer, i.e. layer 7.
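
As a simple illustration of the difference, the sketch below classifies a hypothetical packet first by its layer 4 port and then by a layer 7 payload signature; the signatures and traffic are invented for the example, not a product’s actual detection logic.

```python
# Layer 4 vs. layer 7 classification sketch - signatures are illustrative only.
import re

SIGNATURES = {
    "HTTP": re.compile(rb"^(GET|POST|PUT|HEAD) "),
    "TLS":  re.compile(rb"^\x16\x03"),            # TLS handshake record
    "SIP":  re.compile(rb"^(INVITE|REGISTER) sip:"),
}

def classify_l4(dst_port: int) -> str:
    # Port-based labels: HTTP, gRPC, streaming, and tunnelled apps all
    # share 80/443, so layer 4 alone says little about the application.
    return {80: "web", 443: "web"}.get(dst_port, "unknown")

def classify_l7(payload: bytes) -> str:
    for app, sig in SIGNATURES.items():
        if sig.search(payload):
            return app
    return "unknown"

payload = b"INVITE sip:support@example.com SIP/2.0\r\n"
print(classify_l4(443))      # -> "web"  (tells you almost nothing)
print(classify_l7(payload))  # -> "SIP"  (the application-layer truth)
```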

IT organisations are tasked with providing their customers with connectivity for communication and for their business-critical applications. Customer expectations are now higher and more service-focused; infrastructure and a merely functioning network are “table stakes” in the network management game.

IT is expected to provide the highest possible customer experience in a secure and always-up network environment. To meet these new demands for impeccable service, IT organizations must deal with a myriad of dynamic forces that challenge their ability to meet expectations:

  • Growth – Growth is the new constant. It encompasses all aspects of networking, from new users, applications, services and use cases to faster processing and network migrations from 1GE to 10GE, 40GE and 100GE.
  • Workforce Mobilisation – Users are on the move. People no longer expect to access the network from only one location. They expect to interact with data wherever they are.
  • Infrastructure and technology changes – Change is the only permanent thing. Advancements such as virtualization, cloud services and software-defined networking (SDN) must be seamlessly integrated into the existing network. At the same time, service level agreements (SLAs) must be maintained.
  • Security – Change creates threat. Bring your own device (BYOD), social networking, a mobile workforce, and new services open up weaknesses in network defenses. With our ever-increasing use of networking, intrusions and exploits threaten to compromise network security.

Another needed component of visibility is application intelligence – the ability to monitor packets based on application type and usage. This technology is the next evolution in network visibility.

Ixia Visibility Architecture

Application intelligence can be used to dynamically identify all applications running on a network. Distinct signatures for known and unknown applications can be identified and captured to give network managers a complete view of their network. In addition, well-designed visibility solutions will generate additional (contextual) information, such as geolocation of application usage, network user types, and the operating systems and browser types in use on the network.
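
As a rough sketch of that kind of contextual enrichment, the example below derives OS and browser context from a captured HTTP User-Agent header; the patterns are invented for the example and are not any vendor’s actual signature set.

```python
# User-Agent enrichment sketch - patterns are illustrative only.
import re

OS_PATTERNS = {
    "Windows": re.compile(r"Windows NT"),
    "macOS":   re.compile(r"Mac OS X"),
    "Android": re.compile(r"Android"),
    "iOS":     re.compile(r"iPhone|iPad"),
}
BROWSER_PATTERNS = {
    # Chrome UAs also contain "Safari/", so Chrome is checked first.
    "Chrome":  re.compile(r"Chrome/"),
    "Firefox": re.compile(r"Firefox/"),
    "Safari":  re.compile(r"Safari/"),
}

def enrich(user_agent: str) -> dict:
    """Derive OS and browser context from a User-Agent header."""
    os_name = next((n for n, p in OS_PATTERNS.items() if p.search(user_agent)), "unknown")
    browser = next((n for n, p in BROWSER_PATTERNS.items() if p.search(user_agent)), "unknown")
    return {"os": os_name, "browser": browser}

ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/124.0 Safari/537.36"
print(enrich(ua))   # -> {'os': 'Windows', 'browser': 'Chrome'}
```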

Many companies are using an adaptive, intelligent network visibility architecture. A visibility architecture is a holistic approach to network visibility that controls costs and administrative burdens while optimizing the investment value of monitoring and security tools. It helps speed application delivery, enables effective troubleshooting and monitoring for network security, application performance, and service level agreement (SLA) fulfillment – and allows IT to meet compliance mandates. Such an architecture:

  • Helps eliminate blind spots in the network by getting the right information to the right tools at the right time, greatly improving performance, QoE, and security.
  • Includes physical and virtual taps, bypass switches, and full-featured network packet brokers that solve network visibility needs – from single-point solutions to large-scale enterprise and service provider network deployments.
  • Extends the life of existing IT tools investments and maximizes the usefulness of the current tool capacity.
  • Easily integrates into automated and software defined data center environments.
  • Provides network visibility that is scalable in every sense of the word – from product, to portfolio, to design, to management and support.

Network and IT organisations are caught in a constant cycle of deploying new services, supporting new use cases, and managing growth – which results in networks that are always trying to get back to a reliable state before the next round of change hits.

One of the results of these changes over the last 15-20 years is there are more monitoring, visibility, and security tools in use today than ever before. In fact, these tools are typically required today for all enterprise data center and campus networks, as well as service provider IT, data, and LTE production networks.

But all of these tools need access to data on the production network. In fact, most of the tools function better when they get data from across the entire network, including the data center, security DMZs, the network core, and the different campus and remote office locations. However, the problem is many of these tools aren’t getting the data access they need.

IT needs end-to-end visibility, meaning tool access to any point in the physical and virtual network, and it has to be scalable and easy to manage. But more than that, IT needs control. These tools often can’t handle all the traffic from across the network, so IT needs the ability to control what information is directed to each tool – and they need to do all this within existing budget constraints.

Thanks to Business World for the article. 

Surge In Mobile Workforce And Proliferation Of Smartphones Fueling Growth In Global UC Market

Connected devices such as smartphones and tablets have changed the way work is done today – in the office and beyond. An ever-expanding mobile workforce has prompted enterprises to come up with BYOD-friendly policies, boosting enterprise-wide collaboration and ultimately leading to better efficiency and an improved bottom line. This has also contributed to the growth of the unified communications market worldwide.

Unified Communications solutions are designed to unify voice, video, data, and mobile applications for collaboration. One recent report, looking at market research on the sector by Grand View Research, notes that it is poised for enormous growth in the next few years.

According to the study, Unified Communications will be a $75.81 billion market by 2020. The study analyses the global UC market in two segments: products and applications. Products are divided into two categories, On-Premise and Cloud-Based/Hosted, while applications are divided into education, enterprises, government and healthcare.

The application segment will account for the largest share of the global UC market, the report predicted. Early adopters that have implemented UC solutions are now reaping the benefits of their investments. UC solutions not only enable enterprises to improve operational efficiency, they also enable companies to create better customer engagement. These benefits are expected to encourage more and more organizations in healthcare, education and government to integrate their data, voice, video and other communication applications.

Organizations in the government sector in particular will increase their investments in UC implementation to support operational continuity, emergency response, and situational awareness. This in turn will necessitate the deployment of supporting IP infrastructure for unified communications.

Bring-your-own-device (BYOD) initiatives by large enterprises as well as SMBs are going to be one of the major deciding factors in UC growth. However, implementing BYOD involves costly investments, and interoperability across the various unified communication platforms must be established for a successful implementation. These two factors may impede market growth, the study pointed out.

Interestingly, hosted unified communications solutions are likely to overtake their on-premise siblings in popularity. The reasons are obvious: the installation and maintenance costs of hosted solutions are a lot lower than those of premise-based UC solutions.

The global UC market faces a few major challenges, relating to investment, interoperability and exposure to security risk. The study predicts that these will not stop key vendors from taking on the challenges. With the UC space becoming increasingly competitive, vendors are likely to come up with new, innovative solutions to gain a competitive edge. For the companies that survive the obstacles and challenges, a strong payoff awaits, the study concluded.

Thanks to Unified Communications for the article.

Application Monitoring Is Not Application Performance Monitoring (APM)

There is a common issue I deal with when speaking to end users trying to monitor applications. The confusion is partially created by vendors who would like to position themselves in the hot APM market, yet clearly don’t enable performance monitoring. These vendors are slowly starting to correct their messaging, but many have a poor understanding of the market and continue to confuse buyers.

There are two types of monitoring technologies: availability monitoring and performance monitoring. Before embarking on a performance monitoring journey (this applies to both application performance monitoring and network performance monitoring), there should be a good foundation of availability monitoring. Availability monitoring is the easier of the two; it’s inexpensive, effective in catching major issues, and should be the staple of any monitoring initiative. We recommend unified monitoring tools (see post: http://blogs.gartner.com/jonah-kowall/2013/11/12/unified-monitoring-note-presentation-and-client-interest/) to handle availability monitoring across technologies with a single offering.
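
As a minimal illustration of that foundation, the sketch below performs the simplest possible availability check – a periodic TCP connect against a hypothetical service list. Real unified monitoring tools layer ICMP, SNMP traps, event correlation and escalation on top of this basic idea.

```python
# Bare-bones availability check sketch - hosts and ports are hypothetical.
import socket
import time

SERVICES = [("app01.example.com", 443), ("db01.example.com", 5432)]

def is_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    for host, port in SERVICES:
        state = "UP" if is_up(host, port) else "DOWN"
        print(f"{time.strftime('%H:%M:%S')} {host}:{port} {state}")
    time.sleep(60)   # poll once a minute
```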

Server monitoring tools do more than monitor the server and OS components; they also handle the collection of data from the application instances running on those OS instances. The data collected includes metrics and oftentimes log data that reveals major issues in application availability or health. This is often what people are looking for, and many vendors call these requirements “APM”, but that’s incorrect. We call this server monitoring and/or application instance monitoring. These are availability tools, not performance monitoring tools.

APM tools differ from server monitoring tools in multiple ways. APM tools live within the application and provide end-user experience data from the user through the distributed application components. They are able to monitor and trace transactions across the tiers of the application. Other tools that monitor application performance can reside on the network; while these lack the same level of granularity for tracing transactions and exposing application internals, they can certainly detect performance deviations of application components and can often handle additional application technologies.
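
To illustrate the tracing idea in the abstract, the sketch below threads a correlation ID through three hypothetical tiers and reports per-tier timings against it. This is a sketch of the concept only, not any APM vendor’s implementation.

```python
# Cross-tier transaction tracing sketch - tiers and timings are illustrative.
import time
import uuid

def traced(tier):
    """Decorator that reports elapsed time per tier against a trace ID."""
    def wrap(fn):
        def inner(trace_id, *args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(trace_id, *args, **kwargs)
            finally:
                ms = (time.perf_counter() - start) * 1000
                print(f"trace={trace_id} tier={tier} {fn.__name__} {ms:.1f} ms")
        return inner
    return wrap

@traced("database")
def query_orders(trace_id, user):
    time.sleep(0.02)               # stand-in for a SQL round trip
    return ["order-1", "order-2"]

@traced("app-server")
def get_orders(trace_id, user):
    return query_orders(trace_id, user)   # the trace ID crosses the tier boundary

@traced("web")
def handle_request(trace_id, user):
    return get_orders(trace_id, user)

handle_request(uuid.uuid4().hex, "alice")  # one end-user transaction, three tiers
```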

Hopefully this helps clear things up. Please reply here or contact me on Twitter @jkowall.

Thanks to Gartner for the article.

IQ Services Helps Industry-Leading Technology Company When Their Internal Monitoring Isn’t Enough

The Company

A worldwide technology company operating in more than 170 countries is focused on helping people apply technology in meaningful ways to their businesses, personal lives and communities. With R&D investments of almost a billion US dollars, the company is able to offer a complete technology product portfolio to the market. The ultimate goal of this market leader is to offer products, services and solutions that are high tech, low cost and deliver the best customer experience.

The company needed a robust solution to handle automated telephone transactions to a product support line. The solution had to deliver enhanced service at lower costs, so the company turned to industry leaders in enterprise communication systems. The answer turned out to be a hosted IVR environment blending best-of-breed technologies from throughout the industry.

The Problem

Less than a month after cutover, the company experienced outages that went undetected by its traditional, rigorous internal monitoring processes. Careful investigation indicated that one of the servers was locking up: callers were periodically encountering delayed answer, and sometimes dead air, after dialing into the product support line. The symptoms were easy to correct, but only after the problems were discovered and reported.

The root cause was more elusive. Needless to say, the company was extremely concerned about the impact of its customers not having immediate access to its support line. Not only was the issue causing problems for the folks in the live contact center, it was also creating less-than-acceptable end-user customer experiences. Because internal monitoring was not detecting the problem, no one had information to go on to resolve the fundamental issues.

That is when the company looked to IQ Services and HeartBeat™ outside-in remote availability and performance monitoring services to identify customer-affecting issues.

The Solution

IQ Services’ HeartBeat™ outside-in monitoring services offer a remote method for monitoring and measuring the performance of contact center and communications technologies 24 hours a day, 7 days a week. By generating automated transactions (telephone calls, browser sessions, etc.) that access and exercise communications solutions just like customers do – from the outside-in, customer perspective – IQ Services gathers insightful data about the customer experience and identifies issues that can easily be missed or under-reported by traditional internal monitoring methods, no matter how rigorous or extensive. In addition, the HeartBeat™ traffic serves as a control sample, so boiled-down metrics, internal monitoring and anecdotal reports from end users can be put into context.

When IQ Services detects an issue, the right people are immediately notified so appropriate action can be taken. Complete audio recordings of suspicious or erroneous calls speed up issue identification and resolution. Online results give everyone who needs to know access to timely, actionable information as well as performance trend data to help optimize performance of the solution over time. Remote availability and performance monitoring conducted around the clock is the simplest, most cost-effective way to make sure problems are identified quickly and everyone knows what customers are really experiencing.
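
HeartBeat™ itself places real telephone calls, but the outside-in pattern is easy to see in miniature. The sketch below applies it to a hypothetical web endpoint using only the Python standard library; the URL and thresholds are assumptions for the example.

```python
# Outside-in synthetic transaction sketch - endpoint and thresholds assumed.
import time
import urllib.request

URL = "https://support.example.com/ivr-status"   # hypothetical endpoint
ANSWER_THRESHOLD_S = 5.0                         # analogue of "delayed answer"

def synthetic_transaction():
    """Exercise the service like a customer would; return (ok, elapsed)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=30) as resp:
            ok = resp.status == 200 and len(resp.read()) > 0   # "dead air" check
    except OSError:
        ok = False
    return ok, time.perf_counter() - start

ok, elapsed = synthetic_transaction()
if not ok or elapsed > ANSWER_THRESHOLD_S:
    print(f"ALERT: outside-in check failed (ok={ok}, {elapsed:.1f}s)")  # notify on-call
else:
    print(f"OK in {elapsed:.1f}s")
```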

Successful Outcome

Within 24 hours of contacting IQ Services, a proof-of-concept HeartBeat™ service was up and running to demonstrate that outside-in monitoring could help solve the problem. Within two days of receiving permission to activate the full service, HeartBeat™ remote availability and performance monitoring had detected and reported over 35 unexpected conditions.

Upon receiving immediate notification of these conditions, the company utilized online HeartBeat™ results – including complete audio recordings – to identify issues which had escaped internal monitoring and were negatively impacting customers. By identifying and resolving the issues quickly, end-user customers were able to successfully access and use the new interactive solution to obtain valuable information and support.

In Summary

When a world-leading technology company implemented a new hosted product support line application based on best-in-class technologies, its primary goal was to deliver better customer service. When elements of the integrated solution locked up and internal monitoring did not provide any insight, the company turned to IQ Services for assistance. With HeartBeat™ monitoring, IQ Services was able to quickly ramp up outside-in, end-to-end transaction activity. The culprit issues were quickly caught, documented, reported and fixed. The technology company was again able to deliver the desired customer experience at a lower cost – the reason the investment in the new product support solution was made in the first place.

Every company should be concerned about the gap between internal monitoring and actual end-to-end performance and customer experience. HeartBeat™ outside-in monitoring is the right way to bridge that gap.

Thanks to IQ Services for the article.