Tracking the Evolution of UC Technology

Defining unified communications is more complicated than it seems, but a thorough understanding of UC technology is required before informed buying decisions can be made. Not only is the UC value proposition difficult to articulate, but it involves multiple decisions that impact both the IT group and end users.

In brief, UC is a platform that seamlessly integrates communications applications across multiple modes — such as voice, data and video — and delivers a consistent end-user experience across various networks and endpoints. While this describes UC’s technical capabilities, its business value is enabling collaboration, improving personal productivity and streamlining business processes.

At face value, this is a compelling value proposition, but UC offerings are not standardized and are constantly evolving. All vendors have similar core features involving telephony and conferencing, but their overall UC offerings vary widely with new capabilities added regularly.

No true precedent exists to mirror UC technology, which is still a fledgling service. The phone system, however, may be the closest comparison — a point reinforced by the fact that the leading UC vendors are telecom vendors.

But while telephony is a static technology, UC is fluid and may never become a finished product like an IP PBX. As such, to properly understand UC, businesses must abandon telecom-centric thinking and view UC as a new model for supporting all modes of communication.

UC technology blends telephony, collaboration, cloud

UC emerged from the features and limitations of legacy technology. Prior to VoIP, phone systems operated independently, running over a dedicated voice network. Using packet-switched technology, VoIP allowed voice to run on the LAN, sharing a common connection with other communications applications.

For the first time, telephony could be integrated with other modes, and this gave rise to unified messaging. This evolution was viewed as a major step forward because it created a common inbox where employees could monitor all modes of communication.

UC took this development further by allowing employees to work with all available modes of communication in real time. Rather than just retrieve messages in one place, employees can use UC technology to conference with others on the fly, share information and manage workflows, all from one screen. Regardless of how many applications a UC service supports, a key value driver is that employees can work across different modes from various locations with many types of devices.

Today’s UC offerings cover a wide spectrum, so businesses need a clear set of objectives. In most cases, VoIP is already being used, and UC presents an opportunity to get more value from voice technology.

To derive that value, the spectrum of UC needs to be understood in two ways. First, think of UC as a communications service rather than a telephony service. VoIP will have more value as part of UC by embedding voice into other business applications and processes and not just serving as a telephony system. In this context, UC’s value is enabling new opportunities for richer communication rather than just being another platform for telephony.
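To make "embedding voice into business applications" concrete, here is a minimal sketch of a click-to-call integration, where a CRM button triggers a call through a UC platform's call-control API. The endpoint, token, and payload are hypothetical; real vendors expose similar but product-specific APIs:

```python
import requests

UC_API = "https://uc.example.com/api/v1"  # hypothetical UC platform endpoint
TOKEN = "REPLACE_WITH_API_TOKEN"          # hypothetical auth token

def click_to_call(agent_extension: str, customer_number: str) -> str:
    """Ask the UC platform to bridge an agent's phone to a customer.

    Illustrative only: actual UC vendors expose comparable but
    vendor-specific call-control APIs.
    """
    resp = requests.post(
        f"{UC_API}/calls",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"from": agent_extension, "to": customer_number},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["call_id"]

# Example: handler behind a CRM "call customer" button
# call_id = click_to_call("2104", "+15555550123")
```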

Secondly, the UC spectrum enables both communication and collaboration. Most forms of everyday communication are one on one, and UC makes this easier by providing a common interface so users don’t have to switch applications to use multiple modes of communication. Collaboration takes this communication to another level when teams are involved.

A major inhibitor of group productivity has long been the difficulty of organizing and managing a meeting. UC removes these barriers and makes the collaboration process easier and more effective.

Finally, the spectrum of UC is defined by the deployment model. Initially, UC technology was premises-based because it was largely an extension of an enterprise’s on-location phone system. But as the cloud has gained prominence, UC vendors have developed hosted UC services — and this is quickly becoming their model of choice.

Most businesses, however, aren’t ready for a full-scale cloud deployment and are favoring a hybrid model where some elements remain on-premises while others are hosted. As such, UC vendors are trying to support the market with a range of deployment models — premises-based, hosted and hybrid.

How vendors sell UC technology

Since UC is not standardized, vendors sell it in different ways. Depending on the need, UC can be sold as a complete service that includes telephony. In other cases, the phone system is already in place, and UC is deployed as an overlay service with telephony attached. Most UC vendors are also providers of phone systems, so for them, integrating these elements is part of the value proposition.

These vendors, however, are not the only option for businesses. As cloud-based UC platforms mature, the telephony pedigree of a vendor becomes less critical.

Increasingly, service providers are offering hosted UC services under their own brand. Most providers cannot develop their own UC platforms, so they partner with others. Some providers partner with telecom vendors to use their UC platforms, but there is also a well-established cadre of third-party vendors with UC platforms developed specifically for carriers.

Regardless of who provides the platform, deploying UC is complex and usually beyond the capabilities of an in-house IT team working alone.

Most UC services are sold through channels rather than directly to the business. In this case, value-added resellers, systems integrators and telecom consultants play a key role, as they have expertise on both sides of the sale. They know the UC landscape, and this knowledge helps determine which vendor or service is right for the business and its IT environment. UC providers tend to have more success when selling through these channels.

Why businesses deploy UC services

On a basic level, businesses deploy UC because their phone systems aren’t delivering the value they used to. Telephony can be inefficient, as many calls end up in voicemail, and users waste a lot of time managing messages. For this reason, text-based modes such as chat and messaging are gaining favor, as is the general shift from fixed line to mobile options for voice.

Today, telephony is just one of many communication modes, and businesses are starting to see the value of UC technology as a way to integrate these modes into a singular environment.

The main modes of communication now are Web-based and mobile, and UC provides a platform to incorporate these with the more conventional modes of telephony. Intuitively, this is a better approach than leaving everyone to fend for themselves to make use of these tools. But the UC value proposition is still difficult to express.

UC is a productivity enabler — and that’s the strongest way to build a business case. However, productivity is difficult to measure, and this is a major challenge facing UC vendors. When deployed effectively, UC technology makes for shorter meetings, more efficient decisions, fewer errors and lower communication costs, among other benefits.

All businesses want these outcomes, but very few have metrics in place to gauge UC’s return on investment. Throughout the rest of this series, we will examine the most common use cases for UC adoption and explore the major criteria to consider when purchasing a UC product.

Thanks to Unified Communications for the article. 

 

Virtualization Visibility

See Virtual with the Clarity of Physical

The cost-saving shift to virtualization has made it harder for network teams to maintain an accurate view of the infrastructure. While application performance is often the first casualty when visibility is reduced, the right solution can match, and in some cases even exceed, the capabilities of traditional monitoring strategies.

Virtual Eyes

Network teams are the de facto “first responders” when application performance degrades. For this reason, it’s critical to maintain visibility into and around all virtual constructs for effective troubleshooting and optimal service delivery. Otherwise, much of the value of server virtualization and consolidation efforts may be offset by sub-par application performance.

Fundamentally, achieving comprehensive visibility of a virtualized server environment requires an understanding of the health of the underlying resources, including the host, hypervisor, and virtual switch (vSwitch), along with perimeter, client, and application traffic.

In addition, unique communication technologies like VXLAN and Cisco FabricPath must be supported for full visibility into the traffic in these environments. Without this support, network analyzers cannot gain comprehensive views into virtual data center (VDC) traffic.
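To see why explicit support matters: VXLAN wraps each tenant frame in an outer UDP datagram (port 4789), so an analyzer must strip the 8-byte VXLAN header before it can inspect the inner traffic. A minimal sketch of that decapsulation, following RFC 7348:

```python
VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN port

def parse_vxlan(udp_payload: bytes):
    """Split a VXLAN UDP payload into (VNI, inner Ethernet frame).

    VXLAN header (RFC 7348): 1 flags byte, 3 reserved bytes,
    3-byte VNI, 1 reserved byte -- 8 bytes total.
    """
    if len(udp_payload) < 8:
        raise ValueError("too short for a VXLAN header")
    if not udp_payload[0] & 0x08:   # I-flag: VNI field is valid
        raise ValueError("VNI flag not set")
    vni = int.from_bytes(udp_payload[4:7], "big")
    inner_frame = udp_payload[8:]   # the tenant's original Ethernet frame
    return vni, inner_frame

# vni, frame = parse_vxlan(payload)  # payload = bytes after the outer UDP header
```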

Step One: Get Status of Host and Virtualization Components

The host, hypervisor, and vSwitch are the foundation of the entire virtualization effort, so their health is crucial. Polling technologies such as SNMP, WSD, and WMI can provide performance insight by interrogating the host and various virtualized elements. A fully integrated performance management platform can not only provide these views, but also display the relevant operating metrics in a single, user-friendly dashboard.

Metrics like CPU utilization, memory usage, and virtualized variables like individual VM instance status are examples of accessible data. Often, these parameters can point to the root cause of service issues that may otherwise manifest themselves indirectly.
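As a sketch of how such polling works in practice, the standard Host Resources MIB exposes per-CPU load over SNMP. With the open-source pysnmp library (v4-style high-level API; the hostname and community string are placeholders), a single sample can be read like this:

```python
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

HR_PROCESSOR_LOAD = "1.3.6.1.2.1.25.3.3.1.2.1"  # hrProcessorLoad, first CPU

def poll_cpu_load(host: str, community: str = "public") -> int:
    """Read one CPU-load sample (percent) from a host via SNMP."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community),
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(HR_PROCESSOR_LOAD)),
        )
    )
    if error_indication or error_status:
        raise RuntimeError(error_indication or error_status.prettyPrint())
    return int(var_binds[0][1])

# print(poll_cpu_load("esx-host-01.example.com"))  # placeholder hostname
```

WMI or WSD queries against the same host would follow an analogous pattern with different transports.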


For example, poor response time of an application hosted on a virtualized server may have nothing to do with the service or the network, but may instead be tied to excessively high CPU utilization. Without this monitoring perspective, troubleshooting will be more difficult and time consuming.
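That correlation step can be automated. A simplified illustration (the sample data and thresholds are invented) that flags intervals where slow application responses coincide with a saturated host CPU:

```python
def correlate_latency_cpu(samples, latency_ms_limit=500, cpu_pct_limit=90):
    """Return timestamps where slow app responses coincide with high CPU.

    `samples` is a list of (timestamp, latency_ms, cpu_pct) tuples from
    whatever monitoring platform is in use -- illustrative, not a real API.
    """
    return [
        ts for ts, latency, cpu in samples
        if latency > latency_ms_limit and cpu > cpu_pct_limit
    ]

samples = [
    ("09:00", 120, 45),
    ("09:05", 830, 97),   # slow app AND saturated CPU -> suspect the host
    ("09:10", 780, 38),   # slow app, idle CPU -> look at network/app instead
]
print(correlate_latency_cpu(samples))  # ['09:05']
```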

Next Steps

Virtualization and consolidation offer significant upside for today's dynamic data center model and for achieving optimal IT business service delivery. However, monitoring visibility must be maintained so potential application degradation issues can be detected and resolved before they impact the end user.

To learn more about how your team can achieve the same visibility in virtualized environments as you do in physical environments, download the complete 3 Steps to Server Virtualization Visibility White Paper now.

Thanks to Viavi Solutions for the article.

Is Network Function Virtualization (NFV) Ready to Deliver?

There is no doubt that virtualization is one of the hottest technology topics with communication service providers (CSPs) today. Nearly all the forecasts suggest that widespread NFV adoption will happen over the next few years, with CSPs benefitting from significantly reduced operational costs and much higher revenues resulting from increased service flexibility and velocity. So much for the hype. But where do NFV standards, guidelines and technology implementations stand today, and when will the promised benefits be fully realized?

“Nearly all the forecasts suggest that widespread NFV adoption will happen over the next few years, with content service providers benefitting from significantly reduced operational costs and much higher revenues resulting from increased service flexibility and velocity.” – Ronnie Neil, JDSU

All analysts and CSPs agree that the introduction of virtualization will happen in phases. Exactly what the phases will be varies from forecast to forecast, but a relatively common and simple model details the following three phases:

  • Phase 1: islands of specific network functions with little-to-no service chaining and manual configuration.
  • Phase 2: either islands of specific network functions with dynamic self-configuration, or the introduction of service chaining, but again employing manual configuration.
  • Phase 3: service chaining coupled with dynamic self-configuration.

The financial benefits of virtualization will grow incrementally as each phase is reached, with the full benefits not realized until phase 3. So where are we today in this NFV evolution?

Phase 1 is already happening, with some early commercial deployments of stand-alone virtualized network functions. These deployments include virtualized customer premises equipment (CPE) functions, for example gateways and firewalls, and evolved packet core (EPC) components, such as HLRs and MMEs; these functions lend themselves to virtualization because of their software-only architectures. But generally speaking, this is as far as commercial NFV deployments have reached in their evolution, with phases 2 and 3 still some way off. One of the main reasons is that these latter phases introduce major new requirements for the management tools associated with network virtualization.
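Service chaining, the capability that separates phases 2 and 3 from today's deployments, simply means steering traffic through an ordered sequence of virtualized functions. A toy sketch of the idea, with invented placeholder functions rather than any real VNF implementation:

```python
# Toy model of an NFV service chain: each virtual network function (VNF)
# is a packet-transforming step; chaining is applying them in order.
def vfirewall(packet):
    """Invented placeholder VNF: drop packets marked as not allowed."""
    return packet if packet.get("allowed", True) else None

def vnat(packet):
    """Invented placeholder VNF: rewrite the source to a public address."""
    packet["src"] = "203.0.113.10"
    return packet

SERVICE_CHAIN = [vfirewall, vnat]

def apply_chain(packet, chain=SERVICE_CHAIN):
    """Steer a packet through each VNF; drop it if any VNF returns None."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

print(apply_chain({"src": "10.0.0.5", "dst": "198.51.100.7"}))
```

Phase 3's "dynamic self-configuration" amounts to an orchestrator rebuilding that chain on demand instead of an engineer editing it by hand.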

And it is only recently that industry efforts to define standards, guidelines and best practices for the management and orchestration of NFV (or MANO, as it is referred to) have begun. Until now, the emphasis within research forums has been on the basics of delivering network function virtualization itself.

The TM Forum Zero-touch Orchestration, Operations and Management (ZOOM) program is one of the foremost industry forums focused on the MANO aspects of virtualization. At this year's TM Forum Live! event (Nice, France, June 1-4), the following two ZOOM-related catalyst projects will demonstrate aspects of MANO associated with NFV dynamic self-configuration:

  • Maximizing Profitability with Network Functions Virtualization
  • Operations Transformation and Simplifications Enabled by Virtual CPE

Thanks to Viavi Solutions for the article.

Cloud, Virtualization Solution – Example of Innovation

Our team is excited to represent Viavi Solutions during an IT- and cloud-focused industry event, VMworld, in San Francisco at booth #2235. We'll be showcasing our latest innovation, the GigaStor Software Edition, designed for managing performance in virtual, cloud, and remote environments.

Here are some topline thoughts about why this product matters for our customers and the core technologies trending today, and what a great time it is for the industry and to be Viavi!

For starters, the solution is able to deliver quick and accurate troubleshooting and assurance in next generation network architecture. As networks become virtualized and automated through SDN initiatives, performance monitoring tools need to evolve or network teams risk losing complete visibility into user experience and missing performance problems. With GigaStor Software, engineers have real-time insight to assess user experience in these environments, and proactively identify application problems before they impact the user.


“GigaStor Software Edition helps engineers troubleshoot with confidence in virtual and cloud environments by having all the traffic retained for resolving any challenge, and expert analytics leading to quick resolution.”

With the explosion of online applications and mobile devices, the role of cloud and virtualization will increase in importance, along with the need for enterprises and service providers to guarantee around-the-clock availability or risk losing customers. With downtime costing companies an estimated $5,600 per minute, roughly $300K per hour, the solution that solves the problem fastest will get the business. Walking the show floor at VMworld, IT engineers will be looking for solutions like GigaStor Software that help ensure network and service quality, as well as speed and accuracy when enabling advanced networks for their customers.

And, what a great time to be Viavi Solutions! Our focus on achieving visibility regardless of the environment and delivering real-time actionable insights in a cost-effective solution means our customers are going to be able to guarantee high levels of service and meet customer expectations without breaking the bank. GigaStor Software Edition helps engineers troubleshoot with confidence in virtual and cloud environments by having all the traffic retained for resolving any challenge and expert analytics that lead to quick resolution.

Thanks to Viavi Solutions for the article.

Do You Have a Network Operations Center Strategy?

The working definition of a Network Operations Center (NOC) varies with each customer we talk with; however, the one point that remains consistent is that the NOC should be the main point of visibility for the key functions that combine to provide business services.

The level at which a NOC ‘product’ is interactive depends on individual customer goals and requirements. Major equipment vendors seeking to increase revenue are delving into management and visibility solutions through acquisitions and mergers, and while their products may provide many good features, those features are focused on their own product lines. In mixed-vendor environments this becomes challenging and expensive if it increases the number of visibility islands.

One trend we have seen emerging is the desire for consolidation and simplification within the operations center. In many cases our customers may have the information required to understand the root cause, but getting to that information quickly across multiple standalone tools is a major challenge. Let's face it: no single solution will ever fulfill absolutely all monitoring and business requirements, and specialized tools are likely necessary.

The balance lies in finding a powerful yet flexible solution, one that not only offers solid core functionality and a strong feature set, but also encourages the orchestration of niche tools. A NOC tool should provide a common point of visibility so you can quickly identify which business service is affected, easily determine the root cause of the problem, and take measures to correct it. Promoting integration with existing business systems, such as the CMDB and helpdesk, both northbound and southbound, will ultimately expand the breadth of what you can accomplish within your overall business delivery strategy. Automated intelligent problem resolution, equipment provisioning, and change and configuration management at the NOC level should also be considered as part of this strategy.
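As an illustration of the kind of northbound integration described above, a NOC alarm can be forwarded to a helpdesk system to open a ticket automatically. The endpoint and field names below are hypothetical, not any specific product's API:

```python
import requests

HELPDESK_API = "https://helpdesk.example.com/api/tickets"  # hypothetical endpoint

def alarm_to_ticket(alarm: dict) -> str:
    """Open a helpdesk ticket from a NOC alarm (illustrative integration)."""
    resp = requests.post(
        HELPDESK_API,
        json={
            "summary": f"[{alarm['severity']}] {alarm['service']} degraded",
            "description": alarm["detail"],
            "ci": alarm["device"],  # ties the ticket back to the CMDB item
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["ticket_id"]

# alarm_to_ticket({"severity": "MAJOR", "service": "Email",
#                  "device": "core-sw-02", "detail": "Uplink utilization > 95%"})
```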

Many proven efficiencies are exposed when you fully explore tool consolidation with the goal of eliminating overlapping technologies, process-related bottlenecks, and duplication. While an internal tool review often meets resistance, it is necessary, and the end result can be enlightening from both a financial and a process perspective. Significant cost savings are easily achieved with fewer maintenance contracts, and automation can absorb a large percentage of network engineers' non-value-adding activities, freeing them to work on proactive new innovations and concepts.

The ‘Dark Side’

Forward-thinking companies are deploying innovative products that allow them to move toward an unmanned Network Operations Center, or ‘Dark NOC’. Factors such as energy consumption, bricks-and-mortar costs, and other rising operational expenditures reinforce the point that a NOC may be located anywhere with a network connection and still provide full monitoring and visibility. Next-generation tools are no longer a nice-to-have, but a reality in today's dynamic environment! What is your strategy?

Viavi Solutions Launches GigaStor Software Edition for Virtual and Cloud Environments

Solution Delivers Fast and Accurate Troubleshooting and Assurance in Next Generation Network Architecture

(NASDAQ: VIAV) Viavi Solutions Inc. (“Viavi”) today announced it is expanding its portfolio of software-defined network test and monitoring solutions with the new GigaStor Software Edition to manage performance and user experience in virtual and cloud environments. The new software configurations, which Viavi is demonstrating at VMworld, allow network and server teams to capture and save 250 GB or 1 TB of continuous traffic to disk for in-depth performance and forensic analysis.
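As a rough way to interpret those capture sizes: retention time is storage divided by sustained capture rate. A back-of-the-envelope sketch (decimal units assumed; real retention also depends on packet slicing and indexing overhead):

```python
def retention_hours(storage_tb: float, rate_gbps: float) -> float:
    """Hours of continuous traffic that fit in the given storage."""
    bits = storage_tb * 1e12 * 8        # decimal TB -> bits
    seconds = bits / (rate_gbps * 1e9)  # bits / (bits per second)
    return seconds / 3600

for rate in (0.1, 1, 10):
    print(f"1 TB at {rate} Gb/s sustained ~= {retention_hours(1, rate):.1f} h")
# 1 TB holds ~22 h at 0.1 Gb/s, ~2.2 h at 1 Gb/s, ~0.2 h at a full 10 Gb/s
```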

“IT teams are wasting a lot of time by only tracking virtual server and resource health,” said Charles Thompson, senior director of product management, Viavi Solutions. “These teams can often miss problems associated with applications within the hypervisor with such narrow vision. With GigaStor Software engineers now have the ability to see in real time and historically how users are experiencing applications and services within the virtual environment, saving time and end-user heartache.”

Without GigaStor’s insight, engineers could spend hours replicating a network error before they can diagnose its cause. GigaStor Software captures packet-data from within the virtual switching infrastructure without needing to push data into the physical environment. It can be deployed in any virtual host for the long-term collection and saving of packet-level data, which it can decode, analyze, and display. Additionally, it provides IT teams with greater accuracy and speed in troubleshooting by having all packets available for immediate analysis.
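GigaStor's decode engine is proprietary, but the general workflow it enables (open a saved capture, filter to the affected conversation, inspect packets) can be sketched in generic terms with the open-source scapy library against an exported pcap file; the filename and port below are placeholders:

```python
from scapy.all import rdpcap, IP, TCP  # pip install scapy

def conversations_on_port(pcap_path: str, port: int = 443):
    """List IP conversations on one TCP port from a saved capture."""
    conversations = set()
    for pkt in rdpcap(pcap_path):
        if IP in pkt and TCP in pkt and port in (pkt[TCP].sport, pkt[TCP].dport):
            conversations.add((pkt[IP].src, pkt[IP].dst))
    return sorted(conversations)

# for src, dst in conversations_on_port("vdc_capture.pcap"):  # placeholder file
#     print(src, "->", dst)
```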

Utilizing the GigaStor Software and appliances, network teams can monitor and analyze all virtual data center traffic, whether within a VMware ESX host or on 10 and 40 Gigabit Ethernet links. GigaStor Software is available for purchase today and is being demonstrated during VMworld in San Francisco at Viavi Solutions booth #2235.

Thanks to Viavi for the article.

New GigaStor Portable 5x Faster

Set up a Mobile Forensics Unit Anywhere

On June 22, Network Instruments announced the launch of its new GigaStor Portable 10 Gb Wire Speed retrospective network analysis (RNA) appliance. The new portable configuration utilizes solid state drive (SSD) technology to stream traffic to disk at full line rate on full-duplex 10 Gb links without dropping packets.

“For network engineers, remotely troubleshooting high-speed networks used to mean leaving powerful RNA tools behind, and relying on a software sniffer and laptop to capture and diagnose problems,” said Charles Thompson, chief technology officer for Network Instruments. “The new GigaStor Portable enables enterprises and service providers with faster links to accurately and quickly resolve issues by having all the packets available for immediate analysis. Additionally, teams can save time and money by minimizing repeat offsite visits and remotely accessing the appliance.”

Quickly Troubleshoot Remote Problems

Without GigaStor Portable’s insight, engineers and security teams may spend hours replicating a network error or researching a potential attack before they can diagnose its cause. GigaStor Portable can be deployed to any remote location to collect and save weeks of packet-level data, which it can decode, analyze, and display. The appliance quickly sifts through data, isolates incidents, and provides extensive expert analysis to resolve issues.

Part of the powerful Observer Performance Management Platform, the GigaStor Portable 10 Gb Wire Speed with SSD provides 6 TB of raw storage capacity, and includes the cabling and nTAP needed to install the appliance on any 10 Gb network and start recording traffic right away.

Forensic capabilities are an important part of any network management solution. Learn more about GigaStor Portable and how RNA can help protect the integrity of your data.

Thanks to Network Instruments for the article.