Impact of UC as a Service in the Business World

Unified Communications (UC) is the integration of real-time and non-real-time communication to provide instant and on-demand services to customers. During the last few years, UC has seen enormous growth due to its increasing use by businesses around the world. A report by Frost & Sullivan shows that UC is likely to grow at an annual rate of 32.7 percent through 2017 in North America alone. It is being used for applications such as hosted services and voice calls.

In light of this growth, some companies are evaluating whether they need UC for their business at all. The answer is a resounding yes, because UC brings many benefits to companies. First, UC streamlines all communication channels within an organization. Without UC, businesses have to employ people around the clock to interact with customers over the phone or face to face. This is time-consuming and labor-intensive, and in most companies it is impossible to implement. Differences in time zones further complicate the communication process. This is why it is important to have a unified communication model in which all external communications with clients are streamlined.

Statistics from business2community.com show that 73 percent of business-related calls go to voice mail. If the mailbox is full, it is the business that suffers from the inefficiency, because customers will tend to move on to competitors. To avoid such shifts in customer loyalty and to strengthen customer relationships, it is important for businesses to have a sound UC model.

It is equally important to choose the right service from a provider who is willing to offer UC as a service. Today, many companies in the UC industry offer only the hardware or the infrastructure, not UC as a service. Going forward, every business is likely to use a combination of UC models because of the flexibility and convenience that come with them. For this reason, businesses should enter into agreements with service providers who offer UC as a service.

Thanks to Unified Communications for the article.

How Unified Communications Can Perk Up Business Efficiency

While most businesses are focused on the bottom line — perhaps the purest of all measures when it comes to a business’ success — somewhat fewer take a look at efficiency, which has the potential to yield bottom-line results on a scale that can be downright impressive. For those looking for gains in efficiency, though, one great place to look is in the field of unified communications, which can take communications systems to the next level, and make the whole affair much more efficient in the process.

It may be hard to imagine how communications systems can be made more efficient; such systems do a job, and one job in particular. The telephone conveys voice traffic in two directions. Email does something similar with text, as does the fax, but in different formats. Video conferencing systems convey voice and video traffic in two or more directions, depending on the circumstances of the meeting. This seems to be at direct odds with efficiency measures, which can require things like improving customer feedback or making certain processes or equipment items are standard across an organization.

But when unified communications are brought into play, efficiency can get a boost. For instance, businesses with improved communications systems can take better advantage of the freelance market, which benefits from different time zones, different market specialties and the like, allowing more work to be done in the same time through greater expertise or by using periods that would normally be downtime. Plus, managing the freelancers in question can be simpler thanks to better methods of group contact, like group email or voice and video conferencing. This doesn’t need to be limited to freelancers, either: regular employees can be managed in the same fashion, opening up new options for mobility and improving morale as employees do their jobs in a more personally amenable fashion.

But what does this have to do with unified communications? As the name denotes, unified communications brings a business’s communications tools together into one centralized whole, meaning users need only look in one place for communications. Removing separate versions, reducing redundancy, managing waste: these all come into play when a unified communications system replaces a largely disparate, dis-unified collection. When everything is together in the same place, it becomes more efficient. There’s only one system to learn for each function, not one person using Skype while another uses something completely different. That makes administration simpler, it makes IT’s job a little simpler, and more jobs can be done in the same amount of time. Purchasing becomes simpler as well, particularly when it comes to any applicable software licenses.

Improving the simplicity of systems tends to improve efficiency, if for no other reason than allowing more things to be dealt with in the same amount of time. By way of comparison, in the time it takes to replace an engine piston in a car, a dozen cars or more can have an oil change. Simpler jobs go faster, and more can be done in the same time. That’s one of the purest definitions of efficiency there is, and unified communications can drive significant improvement on exactly that front.

Thanks to Unified Communications for the article.

VoIP Performance Testing Fundamentals

VoIP network performance testing can mean the difference between a VoIP system delivering a high level of QoS and a weak system that runs so poorly customers take their business elsewhere. This guide discusses why it is important to run performance testing regularly and some of the ways it can be done.

How can virtual network test beds ensure VoIP performance?

Voice over IP (VoIP) technology offers a wide range of benefits — including reduction of telecom costs, management of one network instead of two, simplified provisioning of services to remote locations, and the ability to deploy a new generation of converged applications. But no business can afford to have its voice services compromised. Revenue, relationships and reputation all depend on people being able to speak to each other on the phone with five-nines reliability.

Thus, every company pursuing the benefits of VoIP must take steps to ensure that their converged network delivers acceptable call quality and non-stop availability.

A virtual network test bed is particularly useful for taking risk out of both initial VoIP deployment and long-term VoIP ownership. Essentially, such a test bed enables application developers, QA specialists, network managers and other IT staff to observe and analyze the behavior of network applications in a lab environment that accurately emulates conditions on the current and/or planned production network. This emulation should encompass all relevant attributes of the network, including:

  • all network links and their impairments, such as physical distance and its associated latency, bandwidth, jitter, packet loss, CIR, QoS, and MPLS classification schemes;
  • the number and distribution of end users at each remote location; and
  • application traffic loads.

This kind of test bed is indispensable for modeling the performance of VoIP in the production environment, validating vendor claims, comparing alternative solutions, experimenting with proposed network enhancements, and actually experiencing the call quality that the planned VoIP implementation will deliver.
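In practice the impairments are injected by a dedicated network emulator (on Linux, for example, `tc netem` can add delay, jitter and loss to an interface). As a rough illustration of what such an emulator does, the toy model below applies latency, jitter and loss to a simulated 20 ms RTP packet stream and reports what survives a jitter buffer. The function and parameter names are illustrative, not taken from any particular product:

```python
import random

def emulate_link(n_packets, base_delay_ms, jitter_ms, loss_prob,
                 jitter_buffer_ms=60, seed=42):
    """Toy link emulator for a 20 ms RTP packet stream.

    Each packet is dropped with probability loss_prob; survivors get
    base_delay_ms plus uniform random jitter. Packets whose jitter
    exceeds jitter_buffer_ms count as late (a receive-side jitter
    buffer would discard them). Returns (delivered, lost, late,
    average one-way delay in ms over delivered packets).
    """
    rng = random.Random(seed)
    delivered = lost = late = 0
    total_delay = 0.0
    for _ in range(n_packets):
        if rng.random() < loss_prob:
            lost += 1                      # dropped on the link
            continue
        extra = rng.uniform(0, jitter_ms)
        if extra > jitter_buffer_ms:
            late += 1                      # too late for playback
        else:
            delivered += 1
            total_delay += base_delay_ms + extra
    avg = total_delay / delivered if delivered else 0.0
    return delivered, lost, late, round(avg, 1)

# A clean link versus a badly impaired one
print(emulate_link(1000, base_delay_ms=20, jitter_ms=5, loss_prob=0.0))
print(emulate_link(1000, base_delay_ms=80, jitter_ms=120, loss_prob=0.02))
```

Even this toy model shows the key point: jitter beyond what the jitter buffer absorbs turns into effective packet loss, which is why jitter is listed alongside loss as an impairment to emulate.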

Following are seven best practices for applying virtual network test bed technology to both initial VoIP deployment and ongoing VoIP management challenges:

1. Capture conditions on the network to define best-case, average-case and worst-case scenarios
Conditions in a test lab won’t reflect conditions in the real-world environment if they are not based on empirical input. That’s why successful VoIP adopters record conditions on the production network over an extended period of time and then play back those conditions in the lab to define best-, average-, and worst-case scenarios. By assessing VoIP performance under these various scenarios, project teams can readily discover any problems that threaten call quality.
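One simple way to turn a recorded series of measurements into the three scenarios is to take percentiles of the capture, so that a single outlier does not define a scenario. The sketch below (a minimal illustration with made-up latency samples; the function name is ours) uses the 5th, 50th and 95th percentiles as the best-, average- and worst-case values:

```python
import statistics

def define_scenarios(latency_samples_ms):
    """Derive best/average/worst-case latency from a capture.

    statistics.quantiles with n=20 returns 19 cut points in 5%
    steps, so indexes 0, 9 and 18 are the 5th, 50th and 95th
    percentiles respectively.
    """
    q = statistics.quantiles(latency_samples_ms, n=20)
    return {
        "best_case_ms": q[0],     # 5th percentile
        "average_case_ms": q[9],  # 50th percentile (median)
        "worst_case_ms": q[18],   # 95th percentile
    }

# Hypothetical week of latency probes (ms), with two congestion spikes
samples = [22, 25, 24, 23, 30, 28, 95, 26, 27, 24, 25, 110, 23, 29, 26]
print(define_scenarios(samples))
```

The same percentile treatment can be applied to jitter, loss and throughput captures to parameterize each scenario fully.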

2. Use the virtual network to run VoIP services in the testing lab under those real-world scenarios
Once the network’s best-, average-, and worst-case scenarios have been replicated in the test environment, the project team can begin the process of VoIP testing by running voice traffic between every set of endpoints. This can be done by actually connecting phones to the test bed. Call generation tools can also be used to emulate projected call volumes.

3. Analyze call quality with technical metrics
Once VoIP traffic is running in an accurately emulated virtual environment, the team can apply metrics such as mean opinion score (MOS) to pinpoint any specific places or times where voice quality is unacceptable. Typically, these trouble spots will be associated with observable network impairments — such as delay, jitter and packet loss — which can then be addressed with appropriate remedies.
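The relationship between those impairments and call quality can be approximated with the ITU-T G.107 E-model, which computes a rating factor R and maps it to MOS. The sketch below implements a simplified form using the standard G.711 defaults (equipment impairment Ie = 0, packet-loss robustness Bpl = 4.3 for random loss); it is an approximation for illustration, not a substitute for a calibrated measurement tool:

```python
def mos_estimate(one_way_delay_ms, packet_loss_pct):
    """Simplified E-model (ITU-T G.107) MOS estimate for G.711.

    R starts from the default 93.2, is reduced by the delay
    impairment Id and the effective loss impairment Ie_eff, then
    mapped to a MOS on the 1.0-4.5 scale.
    """
    d = one_way_delay_ms
    # Delay impairment: grows slowly, then sharply beyond ~177 ms
    delay_id = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    # Loss impairment for G.711: Ie = 0, Bpl = 4.3 (random loss)
    ie_eff = 95.0 * packet_loss_pct / (packet_loss_pct + 4.3)
    r = 93.2 - delay_id - ie_eff
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

print(round(mos_estimate(20, 0.0), 2))   # near-ideal network
print(round(mos_estimate(150, 1.0), 2))  # noticeable delay and loss
print(round(mos_estimate(300, 5.0), 2))  # badly impaired link
```

Running the emulated network's measured delay and loss through an estimate like this identifies which scenarios push MOS below an acceptable threshold (commonly around 3.6 for toll quality) before any user ever hears a call.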

4. Validate call quality by listening to live calls
Technical metrics alone can be misleading, since the perception of call quality by actual end users is the ultimate test of VoIP success. So the virtual environment should be used to enable the team to validate firsthand the audio quality on calls between any two points on the network under all projected network conditions. Again, a call generator can be used so that testers can act as the “nth” caller at any location.

5. Repeat as necessary to validate quality remedies
A major advantage of a virtual environment is that various fixes can be tried and tested without disrupting the production network. Testing in the virtual environment should therefore be an iterative process, so that all bugs can be fully addressed and the rollout of VoIP in the production environment can be performed with a very high degree of certainty.

6. Bring in end users for pre-deployment acceptance testing
Since voice quality is ultimately a highly subjective attribute, many VoIP implementation teams have found that it is worthwhile to bring in end users for acceptance testing prior to production rollout. This greatly reduces the chance of the dreaded VoIP mutiny syndrome, where end users balk at call quality despite the best efforts of IT and the fact that call quality meets common industry standards.

7. Continue applying the above best practices over time as part of an established change management process
To maintain VoIP quality over time, IT organizations must incorporate the above best practices into their change management practices. This is essential for ensuring that changes in the enterprise environment — the addition of new locations, the introduction of a new application onto the network, a planned relocation of staff — will not adversely impact end-to-end VoIP service levels.

It’s important to note that while a virtual network test bed will pay for itself by virtue of its support for VoIP and convergence alone, this technology has many other uses that deliver substantial ROI. These uses include the development of more network-friendly applications, better planning of server moves and data center consolidations, and improved support for merger and acquisition (M&A) activity. These significant additional benefits make emulation technology an extremely lucrative investment for IT organizations seeking both to ensure the success of a VoIP project in the near term and to optimize their overall operational excellence in the long term.

What can your manageable electronics tell you before you implement VoIP?

In a recent webcast, we discussed performance management and what to look for when you examine your statistics. One of the worst statistics you can consider as a means to determine your network health is utilization.

There are other statistics that are much more valuable. Utilization is worth looking at, but it is only a small piece of overall network health.

The problem with utilization is twofold. First, it is nearly impossible to determine when a workstation is actually in use. Even if someone is sitting at his desk, he may be on the phone and not using the network. Also, many users work locally and then save their work to the network when they finish. So to interpret utilization, you have to know when the network is really in use before you can determine how much of the bandwidth is being consumed. Look at the following two diagrams, for instance.

Figure 1. Utilization averages over one week


Figure 2. Utilization averages over one month


In Figure 1, above, utilization was measured on the inbound side for a week. Figure 2 shows the same circuit measured over one month. As you can see, the differences in utilization are rather large. When planning for VoIP, you should assume that the peak happens all the time. Otherwise, when traffic becomes heavy, voice quality will degrade because you have not planned for it.
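The gap between the two figures is easy to reproduce: averaging over a longer window flattens the peaks. The sketch below (with made-up five-minute utilization samples) compares the overall average to the peak, which is the number VoIP capacity planning should be based on:

```python
def utilization_summary(samples_pct):
    """Compare average vs. peak link utilization for capacity planning."""
    avg = sum(samples_pct) / len(samples_pct)
    peak = max(samples_pct)
    return {"average_pct": round(avg, 1), "peak_pct": peak}

# Hypothetical five-minute samples: mostly idle, with one busy hour
samples = [5, 4, 6, 5, 3, 70, 85, 90, 80, 8, 6, 5]
print(utilization_summary(samples))
```

Here the long-window average badly understates the busy-hour load, so headroom for voice traffic has to be planned against the peak, not the average.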

It is also important to examine buffer space and discards on your active electronics. Switches discard packets by design: when their buffers get too full, they drop packets and leave it to higher-layer protocols such as TCP to retransmit. Voice traffic, carried over UDP, gets no such retransmission, so discarded voice packets are simply lost. While you can set up VLANs and priority, overloaded gear will not help. In particular, you want to check the discards on any uplink port and on any heavily used port (for instance, where the IP switch may be attached).

Some errors that you will find in your SNMP data also bear investigation. The most important are bit errors, which may be reported as InErrors and OutErrors. Not all manageable systems will allow you to drill down further into the error state; some will, which speeds up troubleshooting. Any time you see these errors, the first thing you should do is test the cabling channel connected to that port. A brief word about cable testing: make sure the tester has the latest revision of software and firmware and has been calibrated recently. You also want to be sure that your interfaces and/or patch cords are relatively new. Each has a limited number of mating cycles, and a channel may look bad when in fact it is not.
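What matters is whether those counters are climbing, since they are cumulative from the last reset. A common approach is to poll them twice and compute the rate, handling the wrap of a 32-bit SNMP counter. The sketch below does only that arithmetic; the actual polling (for example via an SNMP library) is left out:

```python
COUNTER32_MAX = 2**32  # SNMP Counter32 values wrap at 2^32

def error_rate(prev_count, curr_count, interval_s):
    """Errors per second between two SNMP counter polls.

    A current value lower than the previous one is interpreted as a
    single counter wrap, which is the usual assumption when the poll
    interval is short relative to the error rate.
    """
    delta = curr_count - prev_count
    if delta < 0:
        delta += COUNTER32_MAX  # counter wrapped between polls
    return delta / interval_s

print(error_rate(1000, 1600, 60))       # 600 new errors in 60 s: 10.0/s
print(error_rate(2**32 - 50, 150, 60))  # counter wrapped between polls
```

A sustained non-zero rate on a port is the trigger to go test the cabling channel behind it, as described above.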

Next, check your duplex configurations. Duplex mismatches and channels that have auto-negotiated to half duplex will further limit your throughput. It is important to have full-duplex links. A hard setting in either the switch or the workstation, or faulty cabling, including channels that exceed the maximum distance, can cause half-duplex links.

After you fix your errors, you will want to take another network pulse for 30 days. The reason that I recommend a 30-day window is to allow for such things as month-end processing and other functions that do not happen on a daily basis. A Certified Infrastructure Auditor can assist with all of these steps. For more information on specific errors, see the article Common network errors and causes.

How can one test VoIP functionality with their existing PBX or Key system?

There are multiple possibilities for testing VoIP functionality with an existing PBX or Key system. How you test depends upon your goal.

If you have two sites linked together with PBX tie lines and you want to try using VoIP so that calls will be routed over your internal network rather than costly tie lines, you can test using a SIP to PSTN gateway (such as the MX25).

This configuration could look like this:

Existing PBX ← T1 PRI → MX25 ← SIP over WAN network → MX25 ← T1 PRI → Existing PBX

Perhaps you have a single site and you want to keep your existing PBX and connect long distance calls through an Internet telephony service provider that provides superior rates. In this case, you could use a SIP to PSTN gateway and connect in this fashion:

Existing PBX ← T1 PRI → MX25 ← SIP over Internet → ITSP

Perhaps you are planning on replacing your legacy PBX and putting in an IP PBX (such as the MX250) to test the functionality before cutting over service. In this case, the configuration could look like this:

Existing PBX ← T1 PRI → MX250 ← T1 PRI → PSTN

Using this approach, the existing PBX continues to function as it always has, and only dial plan entries are required to route calls between systems. This allows certain employees to learn the new VoIP system and understand its features before service is migrated over.

When should a VoIP system be analyzed and with what tools?

We have recently implemented a VoIP network with separate VLANs and QoS. It all seemed to be working fine when it first went in, but recently, certain people have been complaining about sound breakup whilst talking to customers on the phone. I have also had similar problems, but thought it was due to the amount of diagnostics software that I was running on my PC.

To check, I moved my phone into its own port and the breakup is still there. Any ideas how we can check to make sure that the network is doing alright? Also are there any software utilities that would help us with day to day analyzing?

First and foremost, I would suggest that you have someone come and test your cabling channels. That is the least expensive check, and cabling could well be the most troublesome component. Even if the channels tested fine when first installed, they can degrade over time with moves, adds and changes.

The other thing you didn’t mention is whether this occurs only on intra-office calls or only on outside calls. If it is only on outside calls, you may want to get your carrier to check your circuits.

If these things test out okay, then you will want an RMON tool that can track performance. Check your switch SNMP data for errors; these will also give you a good idea of what the culprit may be. If this is happening to everyone in the building, start looking for common denominators such as the network interface cards in the switch.

Thanks to TechTarget for the article.

Social Media and Security – Are They Mutually Exclusive?

Social media has become a major talking point in many organizations, and for good reason. There are plenty of horror stories around the phenomenon and the risks have been widely discussed. They include the possibility of introducing malware via third-party applications, security issues resulting from information leaks, legal concerns over issues such as bullying, discrimination and stalking, and damage to corporate reputations as a result of employees’ postings.

There are less obvious risks too. For example, it’s likely that, even in companies where a ban is in force, managers are in a position both to flout it and to reveal company secrets.

What can you do? Banning access is the obvious, knee-jerk response, but it’s not as simple as that – nor is it in most cases even possible. The number of devices to which people now have access means that it’s simply not possible to ban Facebook et al, even if it were desirable. And fears that people will waste company time on Facebook rather than working are likely to be more of a management issue: if some people aren’t motivated to do their jobs, then banning Facebook is likely to drive them into finding something else with which to occupy their time.

Instead, the answer is to embrace it – cautiously. There are departments, such as marketing, which absolutely need access to social media. This is the first opportunity to be grasped. Your customers are likely to be using social media too, so this is an ideal opportunity to make connections, promote the company’s name and products, and learn more quickly what customers are thinking, which in turn can provide a competitive edge.

What’s needed to back this up is a social media policy. This should state clearly what the purpose of the policy is – for example, to promote the company and its products and services – and to explain under what circumstances using company time and equipment to access social media sites is acceptable.

Trickier is what employees can and can’t say about the company when they’re not at work. Here it’s a good idea to be explicit about the things people clearly shouldn’t be saying about the company and other employees – anything defamatory, discriminatory, obscene and so on – that they shouldn’t disclose confidential or proprietary information, and that, when they mention the company online, they must disclose their relationship with the company.

Ultimately, you need to rely on the common sense of your employees, and to remind them that internet postings endure, and that they must bear that in mind when posting.

Thanks to NetIq for the article.

Secure cloud-to-cloud migration essential to poaching cloud customers

The cloud services market is steadily maturing and becoming increasingly competitive, but customer lock-in is still common. To take customers from competitors, cloud providers must offer secure cloud-to-cloud migration.

Providers that lead with security and transparency features may not only differentiate themselves in the crowded market, but could also poach customers that are unhappy with their current provider.

Cloud-to-cloud migration: Security fears lock in customers

Enterprises often develop a comprehensive cloud migration strategy when they first do business with cloud providers, but they typically lack a plan for leaving that provider if they are unhappy with the service, said Ed Moyle, founding partner of the analyst firm Security Curve.

“In practice, this means there’s a challenge associated with moving services out of a given cloud environment … or [moving to] another provider,” he said.

Providers who engage with unhappy customers of the competition should lead with security — a concern for every cloud customer — and then move into more differentiated offerings that will target a specific segment of customers, said Geoff Webb, director of solution strategy for NetIQ, a Houston-based provider of Disaster Recovery as a Service, security and workload management software for enterprises and cloud service providers.

Securing data at rest is different from securing data as it is transmitted to a provider’s environment or between cloud providers. Data loss is also a concern while data is on the move.

When switching to a new provider or adopting a multi-cloud strategy, customers are worried about where their data might end up, Webb said. “While a service-level agreement [SLA] is great, more visibility can unlock the potential for adoption of a [provider’s] services if the provider can address those concerns early.”

NetIQ offers software-based security add-on services to help providers differentiate their Infrastructure as a Service offerings. “NetIQ software gives providers the ability to be very specific about where customer data is being held … and makes it very easy for providers to onboard and ramp-up their customers,” said Mike Robinson, senior solution marketing manager for NetIQ.

“The way providers can really compete in the cloud market now is by letting customers move into the cloud in the way they want to,” NetIQ’s Robinson said.

Cloud-to-cloud migration: Give customers the freedom to roam

Secure cloud-to-cloud migration is a good appetizer, but customers want a comprehensive security strategy, too, said Mike Chapple, senior director of enterprise support services at the University of Notre Dame in South Bend, Ind.

“Securing data moving between providers is important, but it’s only a short period of time,” he said. “Users want to know that the cloud provider they are switching to is secure and compliant throughout the entire amount of time they will be maintaining their data or running the [provider’s service].”

Even though moving data to a new cloud provider may not be any riskier than a customer’s initial move to the cloud, secure cloud-to-cloud migration is still a chance for a provider to differentiate.

The provider should also emphasize open standards so customers can limit the amount of re-engineering they have to do to their applications after a cloud-to-cloud migration. These open standards will make it easier for customers to shop around, so providers will have to stay vigilant, Security Curve’s Moyle said.

Thanks to TechTarget for the article.