VoIP Performance Testing Fundamentals

VoIP network performance testing can mean the difference between a VoIP system that delivers a high level of QoS and a weak system that runs so poorly customers take their business elsewhere. This guide discusses why it is important to run regular performance tests and some of the ways they can be done.

How can virtual network test beds ensure VoIP performance?

Voice over IP (VoIP) technology offers a wide range of benefits — including reduction of telecom costs, management of one network instead of two, simplified provisioning of services to remote locations, and the ability to deploy a new generation of converged applications. But no business can afford to have its voice services compromised. Revenue, relationships and reputation all depend on people being able to speak to each other on the phone with five 9’s reliability.

Thus, every company pursuing the benefits of VoIP must take steps to ensure that its converged network delivers acceptable call quality and non-stop availability.

A virtual network test bed is particularly useful for taking risk out of both initial VoIP deployment and long-term VoIP ownership. Essentially, such a test bed enables application developers, QA specialists, network managers and other IT staff to observe and analyze the behavior of network applications in a lab environment that accurately emulates conditions on the current and/or planned production network. This emulation should encompass all relevant attributes of the network, including:

  • all network links and their impairments, such as physical distance and associated latency, bandwidth, jitter, packet loss, CIR, QoS, and MPLS classification schemes;
  • the number and distribution of end users at each remote location; and
  • application traffic loads.
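On Linux-based test beds, the tc/netem facility is one common way to impose the link impairments listed above. The sketch below simply generates the tc commands for a given link profile; the interface name and impairment values are illustrative assumptions, not a prescription.

```python
# Sketch: translate link-impairment parameters into Linux tc/netem commands.
# Interface names and values are illustrative; adapt to your own test bed.

def netem_commands(iface, delay_ms, jitter_ms, loss_pct, rate_kbit):
    """Return the tc commands that impose the given impairments on iface."""
    return [
        # Token-bucket filter caps bandwidth to the emulated link's rate/CIR.
        f"tc qdisc add dev {iface} root handle 1: tbf "
        f"rate {rate_kbit}kbit burst 32kbit latency 400ms",
        # netem adds propagation delay, jitter, and random packet loss.
        f"tc qdisc add dev {iface} parent 1:1 handle 10: netem "
        f"delay {delay_ms}ms {jitter_ms}ms loss {loss_pct}%",
    ]

# Example: emulate a 1.5 Mbps WAN link with 80 ms delay, 10 ms jitter, 0.5% loss.
for cmd in netem_commands("eth1", 80, 10, 0.5, 1500):
    print(cmd)
```

Running the emitted commands (as root) on the test-bed link lets the team dial in best-, average-, and worst-case profiles one at a time.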

This kind of test bed is indispensable for modeling the performance of VoIP in the production environment, validating vendor claims, comparing alternative solutions, experimenting with proposed network enhancements, and actually experiencing the call quality that the planned VoIP implementation will deliver.

Following are seven best practices for applying virtual network test bed technology to both initial VoIP deployment and ongoing VoIP management challenges:

1. Capture conditions on the network to define best-case, average-case and worst-case scenarios
Conditions in a test lab won’t reflect conditions in the real-world environment if they are not based on empirical input. That’s why successful VoIP adopters record conditions on the production network over an extended period of time and then play back those conditions in the lab to define best-, average-, and worst-case scenarios. By assessing VoIP performance under these various scenarios, project teams can readily discover any problems that threaten call quality.

2. Use the virtual network to run VoIP services in the testing lab under those real-world scenarios
Once the network’s best-, average-, and worst-case scenarios have been replicated in the test environment, the project team can begin the process of VoIP testing by running voice traffic between every set of endpoints. This can be done by actually connecting phones to the test bed. Call generation tools can also be used to emulate projected call volumes.

3. Analyze call quality with technical metrics
Once VoIP traffic is running in an accurately emulated virtual environment, the team can apply metrics such as mean opinion score (MOS) to pinpoint any specific places or times where voice quality is unacceptable. Typically, these trouble spots will be associated with observable network impairments — such as delay, jitter and packet loss — which can then be addressed with appropriate remedies.
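One common way to turn measured delay and loss into a MOS estimate is the ITU-T G.107 E-model. The sketch below uses a simplified (Cole-Rosenbluth style) form for G.711; the constants are planning values for illustration, not a substitute for a real measurement tool.

```python
import math

def r_factor(one_way_delay_ms, loss_fraction):
    """Simplified E-model R-factor for G.711 (planning estimate only)."""
    d = one_way_delay_ms
    # Delay impairment: grows linearly, with an extra penalty past ~177 ms.
    delay_impair = 0.024 * d + 0.11 * (d - 177.3) * (1 if d > 177.3 else 0)
    # Loss impairment for G.711 without packet loss concealment.
    loss_impair = 30.0 * math.log(1.0 + 15.0 * loss_fraction)
    return 93.2 - delay_impair - loss_impair

def mos_from_r(r):
    """Map an R-factor to an estimated Mean Opinion Score (1.0-4.5)."""
    r = max(0.0, min(100.0, r))
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

# A clean LAN path vs. a congested WAN path:
print(round(mos_from_r(r_factor(20, 0.0)), 2))    # low delay, no loss
print(round(mos_from_r(r_factor(250, 0.02)), 2))  # high delay plus 2% loss
```

The second call lands well below the roughly 4.0 threshold usually considered "toll quality," which is exactly the kind of trouble spot this step is meant to surface.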

4. Validate call quality by listening to live calls
Technical metrics alone can be misleading, since the perception of call quality by actual end users is the ultimate test of VoIP success. So the virtual environment should be used to enable the team to validate firsthand the audio quality on calls between any two points on the network under all projected network conditions. Again, a call generator can be used so that testers can act as the “nth” caller at any location.

5. Repeat as necessary to validate quality remedies
A major advantage of a virtual environment is that various fixes can be tried and tested without disrupting the production network. Testing in the virtual environment should therefore be an iterative process, so that all bugs can be fully addressed and the rollout of VoIP in the production environment can be performed with a very high degree of certainty.

6. Bring in end users for pre-deployment acceptance testing
Since voice quality is ultimately a highly subjective attribute, many VoIP implementation teams have found that it is worthwhile to bring in end users for acceptance testing prior to production rollout. This greatly reduces the chance of the dreaded VoIP mutiny syndrome, where end users balk at call quality despite the best efforts of IT and the fact that call quality meets common industry standards.

7. Continue applying the above best practices over time as part of an established change management process
To maintain VoIP quality over time, IT organizations must incorporate the above best practices into their change management practices. This is essential for ensuring that changes in the enterprise environment — the addition of new locations, the introduction of a new application onto the network, a planned relocation of staff — will not adversely impact end-to-end VoIP service levels.

It’s important to note that while a virtual network test bed will pay for itself by virtue of its support for VoIP and convergence alone, this technology has many other uses that deliver substantial ROI. These uses include the development of more network-friendly applications, better planning of server moves and data center consolidations, and improved support for merger and acquisition (M&A) activity. These significant additional benefits make emulation technology an extremely lucrative investment for IT organizations seeking both to ensure the success of a VoIP project in the near term and to optimize their overall operational excellence in the long term.

What can your manageable electronics tell you before you implement VoIP?

In a recent webcast, we discussed performance management and what to look for when you examine your statistics. One of the least reliable statistics for judging network health is utilization.

Utilization is still worth looking at, but it is only a small piece of the health picture; other statistics are much more valuable.

The problem with utilization is twofold. First, it is nearly impossible to determine when a workstation is actually in use. Even if someone is sitting at a desk, they may be on the phone and not using the network. Second, many users work locally and then save their work to the network when complete. So with utilization you have to know when the network is really in use to determine how much of the bandwidth is being consumed. Look at the following two diagrams, for instance.

Figure 1. Utilization averages over one week

Figure 2. Utilization averages over one month

In Figure 1, above, utilization was measured on the inbound side for a week. Figure 2 shows the same circuit measured over one month. As you can see, the differences in utilization are rather large. When planning for VoIP, you should assume that the peak happens all the time. Otherwise, when processing becomes heavy, your voice quality will degrade because you have not planned for the load.
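The effect the two figures illustrate can be sketched numerically: a short burst that dominates a week's peak all but vanishes in a long average. The 5-minute readings below are invented for illustration.

```python
# Sketch: why long averaging intervals hide the peaks that matter for VoIP.
# The readings are invented: a quiet baseline plus a heavy processing burst.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    idx = int(round((pct / 100.0) * (len(ordered) - 1)))
    return ordered[idx]

# 100 utilization readings (percent): 90 idle intervals, 10 busy ones.
readings = [5] * 90 + [85] * 10

average = sum(readings) / len(readings)
peak = max(readings)
p95 = percentile(readings, 95)

print(f"average={average:.0f}% p95={p95}% peak={peak}%")
```

The average suggests a nearly idle circuit, while the peak and 95th percentile show the load voice traffic will actually collide with; provisioning should be based on the latter.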

It is also important to examine buffer space and discards on your active electronics. Discarding packets is a normal switch function: when the buffers get too full, the switch drops packets and leaves retransmission to the sender's transport protocol. This is not a desirable "feature" for voice. While you can set up VLANs and priority, they will not help overloaded gear. In particular, you want to check your discards on any uplink port, and any port that is commonly attached (for instance, where the IP switch may be).

Some errors that you will find in your SNMP data also bear investigation. The most important are bit errors, which may be reported as InErrors and OutErrors. Not all manageable systems will allow you to drill down further into the error state; those that do will speed up the troubleshooting process. Anytime you see these errors, the first thing you should do is test the cabling channel connected to that port. A brief word about cable testing: make sure the tester has the latest revision of software and firmware and has been calibrated recently. You also want to be sure that your interfaces and/or patch cords are relatively new. Each has a limited number of mating cycles, and a channel may look bad when in fact it is not.
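As a sketch of how raw InErrors/OutErrors counters become something actionable, the code below turns two polls into per-interface error rates, allowing for 32-bit counter wrap. The counter values are invented; a real poll would fetch IF-MIB objects such as ifInErrors via an SNMP library.

```python
# Sketch: per-interface error rates from two SNMP counter polls.
# Counter values are illustrative, not from a real device.

COUNTER32_MAX = 2**32

def counter_delta(old, new):
    """Difference between two Counter32 samples, allowing one wrap."""
    return new - old if new >= old else new + COUNTER32_MAX - old

def error_rates(poll_a, poll_b, interval_s):
    """Errors per second per interface between two polls."""
    return {
        ifname: counter_delta(poll_a[ifname], poll_b[ifname]) / interval_s
        for ifname in poll_a
    }

first = {"Gi0/1": 120, "Gi0/2": 4294967290}   # Gi0/2 is near a 32-bit wrap
second = {"Gi0/1": 420, "Gi0/2": 14}

rates = error_rates(first, second, interval_s=300)
print(rates)
```

Any interface with a persistently non-zero rate is a candidate for the cable test described above.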

Next, check your duplex configurations. Duplex mismatches and channels that have auto-negotiated to half duplex will further limit your operations. It is important to have full duplex links. Half duplex links can be caused by a hard setting in either the switch or the workstation, or by faulty cabling, including channels that exceed the maximum distance.
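A quick way to sweep for such links is to poll each port's duplex status and flag anything that is not full duplex. The port names and values below are illustrative; on many switches EtherLike-MIB's dot3StatsDuplexStatus reports 2 for half duplex and 3 for full duplex.

```python
# Sketch: flag ports that are not running full duplex.
# Port data is invented; a real sweep would poll dot3StatsDuplexStatus.

HALF, FULL = 2, 3  # common dot3StatsDuplexStatus values

def half_duplex_ports(port_status):
    """Return the ports needing attention, sorted by name."""
    return sorted(p for p, duplex in port_status.items() if duplex != FULL)

ports = {"Gi0/1": FULL, "Gi0/2": HALF, "Gi0/3": FULL, "Fa0/24": HALF}
print(half_duplex_ports(ports))  # → ['Fa0/24', 'Gi0/2']
```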

After you fix your errors, you will want to take another network pulse for 30 days. The reason that I recommend a 30-day window is to allow for such things as month-end processing and other functions that do not happen on a daily basis. A Certified Infrastructure Auditor can assist with all of these steps. For more information on specific errors, see the article Common network errors and causes.

How can one test VoIP functionality with an existing PBX or Key system?

There are multiple possibilities for testing VoIP functionality with an existing PBX or Key system. How you test depends upon your goal.

If you have two sites linked together with PBX tie lines and you want to try using VoIP so that calls will be routed over your internal network rather than costly tie lines, you can test using a SIP to PSTN gateway (such as the MX25).

This configuration could look like this:

Existing PBX ← T1 PRI → MX25 ← SIP over WAN network → MX25 ← T1 PRI → Existing PBX

Perhaps you have a single site and you want to keep your existing PBX and connect long distance calls through an Internet telephony service provider that provides superior rates. In this case, you could use a SIP to PSTN gateway and connect in this fashion:

Existing PBX ← T1 PRI → MX25 ← SIP over Internet → ITSP →

Perhaps you are planning on replacing your legacy PBX and putting in an IP PBX (such as the MX250) to test the functionality before cutting over service. In this case, the configuration could look like this:

Existing PBX ← T1 PRI → MX250 ← T1 PRI → PSTN

Using this approach, the existing PBX continues to function as it always has, and only dial plan entries are required to route calls between systems. This allows certain employees to learn the new VoIP system and understand its features before service is migrated over.

When should a VoIP system be analyzed and with what tools?

We have recently implemented a VoIP network with separate VLANs and QoS. It all seemed to be working fine when it first went in, but recently, certain people have been complaining about sound breakup whilst talking to customers on the phone. I have also had similar problems, but thought it was due to the amount of diagnostics software that I was running on my PC.

To check, I moved my phone onto its own port and the breakup is still there. Any ideas how we can check to make sure that the network is doing all right? Also, are there any software utilities that would help us with day-to-day analysis?

First and foremost, I would suggest that you have someone come in and test your cabling channels. That is the least expensive check, and cabling can be the most troublesome component. Even if the channels tested fine when first installed, they can degrade over time with moves, adds and changes.

The other thing you didn't mention was whether this occurs only on intra-office calls or only on outside calls. If it is only on outside calls, you may want to have your carrier check your circuits.

If these things test out okay, then you will want an RMON tool that can track performance. Check your switch SNMP data for errors; these will also give you a good idea of what the culprit may be. If this is happening to everyone in the building, start looking for common denominators, such as the network interface cards in the switch.

Thanks to TechTarget for the article.

5 strategies for post-holiday BYOD problems

Employees’ new mobile devices could cause the age-old security versus productivity debate to resurface

Christmas is fast approaching. When the office gets back to normal after the first of the year, employees are going to return with several shiny new gadgets, along with the expectation that they'll “just work” in the corporate environment. Security will be a distant afterthought, because it's still viewed as a process that hinders productivity.

The back-and-forth over whether security helps or hurts productivity is a battle that existed before the mobile device boom, and it will exist long after the next big technological thing arrives. But the fact remains: security is an essential aspect of operations.

Analysts from Frost & Sullivan have estimated that the mobile endpoint protection market will reach one billion dollars in revenue by 2017, a rather large number given that last year the market was worth about $430 million. The reason for the large projection is simple: mobile is the new endpoint, and everyone has one.

Laptops, tablets, and smartphones enable employees to work anywhere, at any time, so organizations have had to adapt in order to protect them and the sensitive data they access. However, Frost & Sullivan believes that businesses severely underestimate the risk presented by mobile devices.

CSO recently spoke to Jonathan Dale, the Director of Marketing at Fiberlink, a mobile management and security firm recently acquired by IBM. He offered some suggestions for IT and Security teams that are gearing up to deal with the influx of new devices that’ll soon appear on the network.


Remind employees of the rules:

It’s going to happen. According to the Consumer Electronics Association, tablets, laptops, and smartphones are the top gifts this holiday season for adults. Those gifts will show up on the network the moment that employees return from holiday break. So it makes sense to remind employees of the corporate policies and rules that govern mobile devices and their usage as it relates to work.

If the company has a mobile management product in place, Dale said, make sure to send employees enrollment instructions before they leave for the holidays and after they return.

“It doesn’t matter if it’s a new Kindle or one of the latest tablets from Samsung or Apple. The business side of getting a new device starts with enrollment. Make sure it’s clear that the link is for all new devices employees plan to use to access corporate resources,” he said.

Do a policy check:

Now would be a good time to ensure that personal device usage policies, as well as policies governing devices issued by the business, are not only current, but also meet the organization’s security needs.

“Are you protecting the important stuff properly? Are your passcode policies applied properly? Are you forcing encryption on Android devices that support it and blocking the ones that don’t? Ensure the policies are where they should be,” Dale said.

FAQ the basics:

Again, you can’t stop it: personal devices will arrive after the holidays. Make things easier on the helpdesk, and when the policy reminders are sent, include the steps needed to enable Wi-Fi on iOS and Android, along with basics like the SSID information or help on connecting automatically.

This, in theory, will cut down on the number of helpdesk requests related to making things “just work.”

Prep supported apps:

“What better way to welcome the arrival of a new device than with a supported list of apps. Once an employee enrolls a device, IT can automatically push them all the corporate apps they need. To wow employees even further, place a set of games and public apps in their supported app store,” Dale said.


Be clear about privacy:

Finally, make sure that when the policy reminders go out, employees are clear on what parts of the device the company will have access to and what can be done with that access.

“Privacy is a major part of a successful BYOD program. There are several options, so know what abilities you as IT have and figure out what works best for your company culture or CEO,” Dale added.

Thanks to NetworkWorld for the article.

Command Your Data Center

How to Thrive In the Changing Landscape

The demands to virtualize, scale, and implement new applications while conducting security, forensics, compliance and performance monitoring activities are adding to the list of hurdles facing IT teams. These network visibility best practices provide insights into the solutions needed to manage and optimize network monitoring to solve many of these challenges.


As the network becomes critical to the success of an organization, network security and performance groups are challenged to gain greater insight into that network. Network administrators must enable access to network traffic for the monitoring tools used by these teams. IT trends such as increased reliance on SaaS applications, BYOD and the transition to 10/40/100G are also increasing complexity and vendor diversity within the data center. Meeting these challenges calls for an increasingly broad set of monitoring tools, which frequently require visibility into specific network segments or types of traffic. For these tools, 100% visibility of network traffic is vital to effectively securing and monitoring the network.


Net Optics access products, including Network Taps and Bypass Switches, provide passive and fail-safe access for tools deployed in either inline (IPS) or out-of-band (IDS) configurations. Utilizing Network Taps, Aggregation and Regeneration Taps, Bypass Switches, and Virtualization Taps, network admins are able to evolve beyond zero or limited SPAN visibility. 100% network visibility allows teams to analyze the specific traffic of interest they require in order to monitor and secure the network.


When network monitoring solutions are deployed as isolated point solutions or configured to receive non-optimized traffic, they are susceptible to degradation in their efficiency and effectiveness. Increasing network speeds and application diversity also create new hurdles. Network administrators are faced with the challenge of ensuring that their network monitoring infrastructure is manageable, comprehensive and optimized to perform under these diverse loads without affecting network performance.


Net Optics Total Visibility Solutions provide a layer of control as to which tool receives specific traffic. Capabilities such as flow-mapping, deduplication, aggregation, filtering and load balancing optimize network traffic before it reaches a monitoring tool. The benefits of adding this Visibility Layer to your deployment include: manageability, reduced overhead, increased utilization and better performance from your entire set of network monitoring tools. High Availability (HA) configurations are also possible for your monitoring deployment, a major benefit for networks under pressure to deliver always-on performance.


Data Centers are on the path to either converged or full virtualization. However, many monitoring tools designed for traffic flowing over the physical network don’t have the ability to inspect traffic between two Virtual Machines. Not only does this situation leave security administrators blind to possible malicious activity within this growing segment of the network, but achieving an integrated approach to total network visibility becomes next to impossible. Achieving visibility into your virtualized traffic that is comparable to that of your physical network requires extensive redeployment, or the purchase and implementation of an entirely new set of virtualization-specific tools.


Net Optics Phantom Virtualization Tap™ bridges the physical and virtual, so that you can monitor the virtualized network with your existing set of tools. Phantom is capable of capturing and then sending inter-VM traffic of interest to the tools that are already monitoring your physical network.

The landmark Phantom Virtualization Tap supports all best-of-breed hypervisors. It works not only in ESX environments (“VMsafe Certified”) and with internal VMware vSwitches, but also with the Cisco Systems Nexus 1000V virtual switch, and with MS Hyper-V 2012, Xen, Oracle VM and KVM hypervisors. Simple to deploy and engineered for the virtual environment, the Phantom Tap extends the visibility of your monitoring tools into the blind spots created by virtualization.


Today’s network administrators face the challenge of meeting increasingly stringent SLAs that call for increased reliability and uptime. To quickly identify existing or potential issues that might affect uptime, the network team requires monitoring tools that provide a comprehensive view of data center performance—including every packet traversing a host and all inter-VM traffic. Monitoring to ensure peak network performance is key to consistent application delivery and a quality end-user experience.


Quick and easy to install and configure, this sophisticated yet simple solution offers your data center the ability to discover, diagnose and resolve problems before they can damage your core business. With practically no learning curve, the Spyke™ Application-Aware Network Performance Monitoring (AA-NPM) solution reduces operations costs even as it cuts time spent on problem identification and resolution.

Spyke uses DPI technology and root cause analysis to let users drill down instantly from high-level metrics to granular detail of every application and function, plus track bandwidth usage. You can identify actual user names and individual VoIP calls, and gain deep transparency into email traffic—all at a glance. This vital information can lower your MTTR substantially. Spyke does it all through a “single pane of glass” interface for ultimate convenience and control.


As they add virtualized infrastructures, organizations must also build in management layers to protect the data traversing those networks. For many, the effort to unify and centralize the management of monitored traffic becomes a nightmare.


Net Optics Indigo Pro™ is a unified management platform that enables centralized monitoring and configuration of few or many Net Optics devices, including network controller switches, Network Packet Brokers (NPBs), physical and virtual network taps and third party devices. From a single management console, Indigo Pro provides device configuration and element management, event and fault management, bulk upgrades of device software, an integrated device view, and rich graphical visualization of network statistics.

Using Indigo Pro together with Net Optics taps, controller switches and NPBs simplifies administration complexity associated with configuring and upgrading each device separately. This capability helps organizations attain a higher ROI gained from overall time and cost savings.

Automatic Discovery

Indigo Pro automatically identifies supported Net Optics and third-party devices throughout the network and quickly adapts to any device added, removed or taken offline. A dynamic topology map displays the devices and provides detailed device status and configuration information. This allows for easy deployment and immediate access to managed devices.

Device Configuration Management

Indigo Pro provides many configuration options, including filter settings, port management, user authentication, software updates, event management and graphical display of network activity. These enable complete visibility and control over the data flowing in and out of supported Net Optics devices and optimize administration and maintenance.


Network security demands Defense in Depth, an approach that keeps the network ahead of proliferating threats. Defense in Depth calls for multiple security systems working together and delivering instantaneous feedback for conducting forensics. Defense in Depth strategies combine, cascade and join multiple security solutions to work in concert transparently. Each component of this solution addresses specific risk factors and attack vectors. The next evolutionary step in Defense in Depth strategy will address the need for various security layers to respond dynamically to a detected threat. They can then reorganize or re-deploy in the ideal configuration for eliminating or minimizing that threat.


Security-Centric SDN: A Scalable, Cost-Effective Security Architecture

Net Optics Security-Centric SDN enables the scaling of existing security and other monitoring tools without a costly overhaul. An organization can now achieve total network visibility and protection across the entire breadth and depth of physical, virtual, and private cloud environments.

This new approach separates network elements from security and monitoring devices; it also enables automation and provisioning of monitoring applications and tools based on real-time traffic behavior. Security-Centric SDN provides end-to-end network monitoring and improves security, along with simplifying operation.

Security-Centric SDN marries an SDN controller with NPBs and a customer’s chosen security tools. NPBs, with their ability to “chain” solutions, integrate multiple systems, and distribute traffic, provide the ideal means for provisioning a dynamic response. Such chaining of security solutions supports and enables Defense in Depth. It embodies dynamic attack monitoring; the use of NPBs for traffic distribution; and use of the network controller for assessing the network, provisioning SDN, and reacting to network activity.


Net Optics delivers scalable, end-to-end visibility solutions to achieve peak performance and optimization of your physical, virtual, private cloud, and branch office monitoring deployments.

Total application and monitoring visibility lets you overcome threats, prevent data loss and deny unauthorized use. Net Optics’ plug-and-play AA-NPM, NPB, Virtual/Cloud and Visibility Management System solutions deliver quick results and time-to-value with a convenient, easy-to-use interface.

As your user base and data volumes grow, our compact and scalable solutions keep your network monitoring deployments cost-efficient and productive.

Want to learn more? Download the Net Optics Command Your Data Center white paper here.

4 Network Problem Timesavers


When tracking down the source of network problems, where do you even begin? Network Instruments University™ instructor Mike Motta shows you how to get to the root of network-layer issues with the top 4 timesaving tips.

1. Investigate double-sided captures for delay

Take packet captures from each side of the network conversation and use MultiHop Analysis within Observer® Expert to investigate whether any of the segments are the source of delay. Learn how to set up MultiHop Analysis.

2. Calculate TTL (time-to-live) value to know how many hops

When troubleshooting network delay between remote offices, it’s important to know where the delay occurs. If, for example, you know that packets take 13 hops on average from a remote office to headquarters, and now the trip is taking 20 hops, that change points to the source of the delay.

A simple formula for the number of hops over a route is the difference between the packet's initial TTL value at the source and the TTL observed at the destination. Once you know the number of hops, you can also check whether any of them are causing fragmentation.

Observer Infrastructure route maps automate this process by tracking routes and determining whether the number of hops has increased at any point.
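The TTL arithmetic can be sketched as follows: operating systems start packets at a few common initial TTLs (64 for most Unix-like systems, 128 for Windows, 255 for many routers), so the nearest default at or above the observed value gives an estimated hop count. This is a heuristic, not a guarantee.

```python
# Sketch: estimate hop count from the TTL seen in a capture.
# Assumes the sender used one of the common initial TTL defaults.

COMMON_INITIAL_TTLS = (64, 128, 255)

def estimated_hops(observed_ttl):
    """Hops = assumed initial TTL minus the TTL observed at the destination."""
    initial = next(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

print(estimated_hops(115))  # likely a Windows sender (128), 13 hops away
print(estimated_hops(51))   # likely a Unix-like sender (64), 13 hops away
```

Tracking this estimate over time makes a jump from 13 to 20 hops stand out immediately.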

3. Configure filters to look for fragmentation fields in the header with More Fragments or Don’t Fragment bits set

Fragmentation issues cause packets to be unnecessarily chopped into multiple packets, increasing workload and delay. Packets with the More Fragments bit set may indicate that a router along the path is fragmenting frames, while packets with the Don’t Fragment bit set most likely indicate an application problem.
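As an illustration of what such a filter tests, the DF and MF bits live in the flags/fragment-offset field of the IPv4 header (masks 0x4000 and 0x2000). The header bytes below are hand-built for demonstration only.

```python
import struct

# Sketch: read the Don't Fragment and More Fragments bits from a raw
# IPv4 header, the same fields a capture filter would match on.

def frag_bits(ip_header):
    """Return (dont_fragment, more_fragments) from a raw IPv4 header."""
    flags_frag = struct.unpack("!H", ip_header[6:8])[0]
    dont_fragment = bool(flags_frag & 0x4000)
    more_fragments = bool(flags_frag & 0x2000)
    return dont_fragment, more_fragments

# Minimal 20-byte header with DF set (flags/fragment-offset = 0x4000).
header_df = bytes([0x45, 0x00, 0x00, 0x3C, 0x1A, 0x2B, 0x40, 0x00,
                   0x40, 0x06, 0x00, 0x00, 10, 0, 0, 1, 10, 0, 0, 2])
print(frag_bits(header_df))  # → (True, False)
```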

4. Build filters to search ICMP messages

If a router throws away a packet with the Don’t Fragment bit set, it will notify the sender via ICMP (Internet Control Message Protocol). The message can reveal the exact nature and source of the problem. ICMP messages (other than pings) indicate issues with the subnet mask, routing, default gateway, or QoS.

ICMP message and likely issue:
  • Redirect for Network: default gateway incorrectly configured
  • Redirect for Host: subnet mask incorrectly configured
  • Port Unreachable: application port not started or not responding
  • Host Unreachable: a route exists but the box is not answering ARP requests
  • Protocol Unreachable: ARP request answered but the box is not answering the specific protocol request
  • Network Unreachable: router does not have a route to reach the network
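A small lookup table captures the same mapping in code; the type/code pairs follow the standard ICMP assignments (type 3 for destination unreachable, type 5 for redirect).

```python
# Sketch: map common ICMP (type, code) pairs to the likely issues above.

LIKELY_ISSUE = {
    (5, 0): "Redirect for network: default gateway incorrectly configured",
    (5, 1): "Redirect for host: subnet mask incorrectly configured",
    (3, 0): "Network unreachable: router has no route to the network",
    (3, 1): "Host unreachable: route exists but host not answering ARP",
    (3, 2): "Protocol unreachable: host not answering this protocol",
    (3, 3): "Port unreachable: application port not started or responding",
}

def diagnose(icmp_type, icmp_code):
    """Return the likely issue for an observed ICMP message."""
    return LIKELY_ISSUE.get((icmp_type, icmp_code),
                            "No canned diagnosis for this type/code")

print(diagnose(3, 3))
```

Feeding captured ICMP type/code values through a table like this turns a packet trace into a first-pass diagnosis.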

If the network layer is determined to be error free, the next step will be to analyze application delivery and performance. To review Observer’s MultiHop Analysis and advanced application analysis capabilities, read through the Network Application Performance white paper. You can also sharpen your network and application troubleshooting skills by signing up for one of our Network Instruments University classes.

Thanks to Network Instruments for the article.

Canada announces cap on mobile roaming charges

Reuters reports that Canada is ushering in new rules to cap mobile roaming rates in the country, as part of an effort to spur competition in the local market. The country’s minister for industry James Moore says that the change in legislation will level the playing field by stopping bigger operators from charging their smaller rivals more than they charge their own customers for roaming voice, data and SMS/MMS services. The minister claims that some cellcos are guilty of charging rivals as much as ten times more than they charge their own users.

In a statement to announce the government’s plans to submit an amendment to the Telecommunications Act in the next few weeks, Mr Moore said: ‘For too long, Canadian consumers in the wireless sector have been the victims of these high roaming costs’.

Reuters notes that the governing Conservatives hold a majority in the House of Commons, assuring passage of the bill, which will limit the big three cellcos – Rogers Communications, BCE and Telus Corp – to charging their competitors no more than the rate they charge their own retail customers. The Canadian authorities hope this will encourage smaller carriers to reduce their end-user prices and improve service coverage outside their usual coverage zones (typically restricted to major urban areas).

The government also plans to give the country’s telecoms regulator and the industry department powers to fine companies that break rules on such things as the sharing of cellular towers.

Thanks to TeleGeography for the article.

Wind submits bid for Mobilicity, half the value of Telus bid, source says

Canadian cellco Wind Mobile has submitted a bid for the assets of smaller rival Mobilicity under the latter’s court-sanctioned auction process, Bloomberg reports, citing a person with knowledge of the matter who asked not to be identified because the process remains private. The bid is valued at around half of the CAD380 million (USD357 million) takeover offer from nationwide incumbent Telus that was blocked earlier this year.

Thanks to TeleGeography for the article.

StableNet® Enterprise ONE Single Application – ONE Single Database

ONE Single platform for IT Service Assurance

Infosim StableNet® Performance Management
  • Network Monitoring
    • Devices
    • Servers
    • Applications
    • Business Processes
  • VoIP Monitoring
  • SNMP v3
  • NetFlow v9, sFlow, IPFIX
  • Real-Time/Trend/Historic
  • IPv4, IPv6

Infosim StableNet® Configuration Management
  • Auto Device Discovery
  • Auto Network Discovery
  • Inventory & Graphical Topology
  • Group network devices to fit business needs
  • Audit trail of configuration changes
  • Automated backup process and security checks

Infosim StableNet® Fault Management
  • Thresholds
  • Error Management
  • Event Management
  • Event Correlation
  • Root-Cause Analysis
  • eMail & SMS
  • Traps & Syslog
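The performance metrics listed above (SNMP polling, NetFlow) ultimately come down to arithmetic on device counters. As an illustration only (this is a generic sketch, not StableNet® code), link utilization is commonly derived from two successive samples of the SNMP ifInOctets counter, allowing for a 32-bit counter wrap:

```python
def link_utilization(prev_octets, curr_octets, interval_s, if_speed_bps):
    """Percent utilization from two successive SNMP ifInOctets samples.

    Handles a single wrap of the classic 32-bit interface counter.
    """
    if curr_octets >= prev_octets:
        delta = curr_octets - prev_octets
    else:  # counter wrapped past 2**32 between samples
        delta = (2**32 - prev_octets) + curr_octets
    # octets -> bits, divided by the link capacity over the interval
    return (delta * 8) / (interval_s * if_speed_bps) * 100.0

# 1,250,000 bytes in 10 s on a 10 Mbit/s link -> 10% utilization
print(link_utilization(0, 1_250_000, 10, 10_000_000))
```

On fast links the 32-bit counter can wrap more than once between polls, which is why monitoring platforms prefer the 64-bit ifHCInOctets counter where available.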

Infosim StableNet® Reporting & SLAs

On-Demand Reporting: Analyzer, Group Statistics, Inventory Reports, Event Reports. Reporting Engine: Graphical Report Editor, Servlets, eMail, PDF, HTML.

The StableNet® Difference vs. Other Solutions

  • StableNet®: one application, one interface, with an end-to-end view of all resources. Other solutions: constant "tool hopping".
  • StableNet®: integrated data across Performance, Configuration and Fault Management; no data silos. Other solutions: different vendors, or disparate modules from the same vendor, with high cost and complexity.
  • StableNet®: automated processes, network discovery and configuration. Other solutions: burdensome manual intervention, tasks and costs.
  • StableNet®: event correlation means much faster root-cause analysis and lower MTTR. Other solutions: data silos mean manual processes to correlate data from multiple sources.
  • StableNet®: training for one product. Other solutions: constant product training.
  • StableNet®: one maintenance, upgrade and support cycle. Other solutions: multiple contracts and multiple versions across products, with costs that quickly add up.
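The event-correlation advantage rests on a simple idea: an alarm whose upstream dependency is itself alarming is a symptom, not a root cause. A minimal, hypothetical sketch of that logic (device names and the dependency map are invented for illustration; this is not StableNet® code):

```python
def root_causes(alarming, upstream):
    """Return alarming nodes that have no alarming ancestor.

    alarming: set of node names currently raising alarms
    upstream: dict mapping each node to the node it depends on
    """
    roots = set()
    for node in alarming:
        parent = upstream.get(node)
        # Walk up the dependency chain until we hit an alarming ancestor
        # or run out of ancestors.
        while parent is not None and parent not in alarming:
            parent = upstream.get(parent)
        if parent is None:  # no alarming ancestor: this node is a root cause
            roots.add(node)
    return roots

# router1 fails; the switch and server alarms are only symptoms
upstream = {"server1": "switch1", "switch1": "router1"}
print(root_causes({"server1", "switch1", "router1"}, upstream))  # {'router1'}
```

Collapsing three alarms into one actionable root cause is what shortens mean time to repair (MTTR) in practice.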

5 things to know about Infosim and StableNet®

  1. Infosim's StableNet® software is used by hundreds of companies around the world.
  2. StableNet® manages between 500,000 and 1 million devices in networks every day.
  3. ROI in less than six months: one client replaced four major applications with a single one, StableNet®, to manage its network of over 20,000 devices.
  4. Proven scalability: managing 9,000 locations/remote sites for one enterprise customer's network.
  5. StableNet®'s automation capabilities helped one major client regain over 30% of IT staff productivity.

Thanks to Infosim for the article.

If You Think UCC Integration Is Easy, Think Again: Implementing a Solution Takes Considerable Planning

One of the more popular developments happening in IT departments worldwide is the growth of unified communications and collaboration (UCC) solutions. The cost savings from integrating the many disparate communications software and hardware pieces typically found in a large organization are too compelling to ignore.

But there is a danger in underestimating the effort needed to roll out a UCC implementation that runs smoothly and efficiently. One could watch Michael Jordan and marvel at how he made playing hoops look so easy without appreciating the thousands of hours of hard work and practice he performed. In a similar way, IT managers run the risk of rushing into deploying UCC solutions improperly if they don’t thoroughly prepare for it.

According to an InformationWeek article by Dan Ferguson, three of the most important tasks an IT department should do when deploying a UCC solution are getting a well-defined scope statement, providing a proof of concept system and examining the possibility of cloud-based UCC deployment.

The scope statement makes clear ahead of time what work a vendor will and won't do in developing a UCC solution, before a company spends hundreds of thousands of dollars on it. A proof-of-concept system serves as a sanity check that the proposed solution will work at least on a small scale before it is implemented across the whole organization. As with any solution, cloud-based UCC is something any organization should consider, given its cost savings and scalability.

Another important decision IT departments have to make is whether the solution consists of purchasing a new all-in-one UCC product or buying products piecemeal to tie existing communications together. Cisco offers a UCC solution that falls into the former category. This approach will appeal to many IT managers, likely in SME setups, who don't want to deal with complex integration headaches. For massive enterprises with tens of thousands of employees, gutting communication infrastructure and replacing it with a one-stop UCC product may be less attractive.

UCC is going to continue to be an important part of any enterprise that wants to simplify the various communications under its roof. While it can lead to cost and time savings, without proper planning, an organization may end up worse off with UCC than without it.

Thanks to Unified Communications for the article.

Mobilicity auction process, creditor protection extended to February, report says

Indebted Canadian cellco Mobilicity has effectively extended its court-sanctioned creditor protection period to 18 February 2014 whilst it continues an auction process for its entire business or selected assets, MobileSyrup reported yesterday, citing a court document submitted by the company's appointed representative Ernst & Young. Interested bidders were required to submit their intentions by midday yesterday, with rival Wind Mobile the only company to have publicly expressed interest to date.

Thanks to TeleGeography for the article.

Fortifying Network Security with a Defense in Depth Strategy

Net Optics No Breach Zone

To battle today's sophisticated threats, organizations need the ability to deliver Defense In Depth—a layering of strategies and solutions that collectively protect against malicious attacks. Security-Centric SDN (Software Defined Networking), using multiple inline products, is the most effective approach to delivering that Defense In Depth protection. Point security solutions, although growing in effectiveness, are incapable of consistently and reliably thwarting intrusion and preventing the compromise of network security. That failure leaves the network open to attack, with consequences spanning public endangerment, loss of personal and corporate assets, disruption of the social contract and deteriorating public confidence.

Regulators worldwide are mandating advanced protections and procedures to raise and tighten security levels. However, the network has grown to encompass social networking, remote access, and cloud computing. The resultant labyrinth of industry, national and local laws, added to the fact that many infractions result from third-party activities, makes compliance exceedingly complex. A defense strategy must take these and many more factors into account—performing effective monitoring and access control of the network without stifling the Internet’s freedom of innovation and communication.

Implementing Inline Security-Centric SDN in Your Organization

The Net Optics xStream™ Platform simplifies SDN integration, fortifies security and streamlines management. As an inline resource, this hardware and software platform offers scalability, high performance and seamless availability. The xStream Platform resides on a newly designed chassis with 24 ports or the equivalent of 64 10G ports—in one rack unit—for exceptional network productivity, with an ultra-low latency 480G backplane for high visibility and performance. The xStream Platform merges three state-of-the-art products for quick link aggregation, network packet brokering and load balancing from a single platform. A flexible, scalable approach enables aggregation, regeneration, switching, and filtering of high traffic volumes, with Deep Packet Inspection capabilities and extremely high port density. An expanded menu of commands dramatically eases configuration and control, as well as delivering High Availability (HA) functionality to monitoring tools—a key benefit for networks under pressure for always-on performance.

The xStream Platform includes:

  • xStream 40™—a load balancing appliance for monitoring high-speed network traffic and easing migration to 40G. xStream 40 provides extensive NPB capabilities for 40G networks, including advanced filtering, aggregation, load balancing, and time stamping. It offers a convenient way to perform 40G monitoring with existing tools and protects investment value while maintaining security and performance, boosting productivity and simplifying large-scale network management and network monitoring.
  • xBalancer™—a purpose-built solution for distributing traffic to multiple monitoring tools, sharing the load caused by high traffic volumes, offering linear scalability and preserving the value of your tool investment.
  • Director xStream™—a data monitoring switch that aggregates, regenerates, switches, filters, and load balances monitoring traffic. With the highest density of 10G ports in the monitoring industry, Director xStream empowers the NOC to share a pool of monitoring tools across a large number of network links.
  • iLink Agg xStream™—a high-performance link aggregator that combines traffic from as many as 20 network links or SPAN ports and sends it to four monitoring tools. iLink Agg xStream automatically performs all data-rate and media-type conversions, enabling 10G traffic to be sent to 1G appliances and 1G traffic to 10G tools.
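Load balancers of this kind typically keep every packet of a given flow on the same monitoring tool by hashing the flow's 5-tuple. A minimal sketch of that idea (the hash choice and tuple format are illustrative assumptions, not Net Optics' implementation):

```python
import zlib

def assign_tool(five_tuple, n_tools):
    """Deterministically map a flow (src_ip, dst_ip, src_port, dst_port, proto)
    to one of n_tools monitoring tools, so every packet of the flow
    lands on the same tool and sessions are never split."""
    key = "|".join(map(str, five_tuple)).encode()
    return zlib.crc32(key) % n_tools

flow = ("10.0.0.1", "10.0.0.2", 49152, 5060, "udp")
tool = assign_tool(flow, 4)
assert tool == assign_tool(flow, 4)  # same flow -> same tool, always
```

Keeping a flow pinned to one tool matters because stateful analyzers (IDS, VoIP call-quality monitors) cannot reassemble a session whose packets are scattered across several devices.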

“Security is a Process, not a Product”

As organizations work to handle cloud challenges, consumerization, BYOD programs, mobility and virtualization, their greatest and most profound concern across all of these areas lies in thwarting cyber attacks. These have the potential to nullify all the progress and growth a company has worked to attain, steal its vital data and compromise its business goals. The current risk landscape demands an inline, Security-Centric approach based on a Defense In Depth security model.
