MTS Allstream launches LTE network

Canada’s Manitoba Telecom Services (MTS Allstream) has announced the commercial launch of its 4G Long Term Evolution (LTE) network, initially covering the Manitoban cities of Winnipeg and Brandon with mobile download speeds of up to 75Mbps. Claiming the first launch of 4G LTE in Manitoba, MTS said that it will expand the network across the province in 2013, augmenting its existing HSPA+ services. As previously reported by CommsUpdate, in June 2012 MTS selected Ericsson as an exclusive partner to supply it with LTE infrastructure. MTS’ press release indicates that some customers have had access to its 4G data speeds since the end of August.

Thanks to TeleGeography for this Article

Net Optics Flex Tap

High Speed, High Density, High Flexibility

Download Net Optics Flex Tap Data Sheet


“We wanted to bring customers a Network Tap solution with a spectrum of truly innovative benefits: high scalability, the versatility to mix and match speeds within a deployment, the highest visibility and compatibility, simplified upgrades and rack-mount ability.”

Net Optics Delivers the Next Level of Flexibility in Network Access Taps

With rack space at a premium in the data center, the compact Flex Tap’s ability to support 1G, 10G, 40G and 100G implementations represents a leap forward in value and cost-savings.
This versatile Tap makes full-duplex network monitoring simple and secure with custom cables that send each side of the network link to a separate NIC on your monitoring tool. The Flex Tap is compatible with all major manufacturers’ monitoring devices, including protocol analyzers, probes, and intrusion detection systems.

Northern exposure: Unicom unveils Canadian appendage

China Unicom has opened an office in Toronto as part of its North American expansion programme, to support the global network infrastructure requirements of its Canadian corporate telecoms service clients, according to Yitao Wu, president of the Chinese group’s Americas operational division. Unicom, which operates global network services covering more than 240 countries, views Canada as a key growth market alongside the US and the Asia-Pacific region.

Thanks to TeleGeography for this article

Optimus primed to let Rogers’ band play new tunes

Rogers Communications has announced that it will begin offering a 2600MHz-enabled Long Term Evolution (LTE) smartphone, a version of the LG Optimus G, from November 2012, which it says will be Canada’s first 2600MHz-capable LTE handset. John Boynton, Rogers’ chief marketing officer, declared that ‘by offering Canadians the first 2600MHz-enabled LTE smartphone, the LG Optimus G, we are allowing faster connections to download large files on the go, give sport fans every second of the game on their smartphone or let commuters maximise their travel time.’ However, the cellco did not initially clarify whether or not it will be offering 2600MHz LTE network capabilities alongside its existing 2100MHz 4G coverage, or whether the handset announcement was a pre-emptive move to pave the way for future multi-band services. TeleGeography’s GlobalComms Database says that Rogers holds legacy wireless broadband spectrum in the 2.5GHz-2.6GHz, 2.3GHz and 3.5GHz frequency ranges accumulated via its now-defunct ‘Inukshuk’ WiMAX joint venture with Bell Canada; the JV was dissolved in December 2011 with Rogers and Bell splitting spectrum equally, and Rogers switched off its fixed/nomadic WiMAX service in March 2012. The exact quantity of 2.5GHz-2.6GHz spectrum Rogers is entitled to will be decided via a licence auction to be held by H1 2014, or within a year of the 700MHz concession sale expected in H1 2013, GlobalComms adds.

Thanks to TeleGeography for this Article

Wind Mobile now has 500,000 Subscribers

Fair weather trend: Wind breezes past half million

Canadian 3G mobile network operator Wind Mobile – part of the Vimpelcom group – has announced that it reached the milestone of 500,000 subscribers this month, up from 457,000 at end-June, having launched commercially in December 2009. Wind also stated that its W-CDMA/HSPA network now covers a population of over 13.5 million via 1,225 active network sites. It also reported that it currently has 225 own-branded retail stores in addition to more than 400 other distribution outlets.

Thanks to TeleGeography for this Article

Defending the Network from Application Performance problems (part II)

In my prior blog post, I wrote about different network problems that negatively impact application performance. In this post, I’ll follow up with non-network problems that impact application performance, but for which the network provides a unique vantage point from where such problems can be identified and solved. In the next post, I’ll tie everything together by describing how to determine if the network is at fault and how to get the other organizations to understand more about application performance.

Slow Client

Many modern web-based applications push much of the user-interaction work to the client workstation, often by sending a large volume of data to the workstation where JavaScript code processes it. I’ve seen applications with long, multi-second pauses because the JavaScript code had to process hundreds or thousands of rows of data before the client display could be updated.

A good Application Performance Management (APM) system identifies clients that have these types of delays. It requires looking at the client-to-server transactions and identifying when the client is paused due to internal processing. The analysis needs to differentiate between the client workstation application pauses and the “think time” of the human who is interacting with the application.
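As a rough illustration of the idea, a gap-classification pass over client/server event timestamps might look like the sketch below; the record format and the five-second think-time threshold are invented for this example, not anything a particular APM product uses:

```python
# Hypothetical sketch: separating client-side processing pauses from human
# "think time" using gaps between a server response and the next client request.
# The threshold and event format are illustrative assumptions.

THINK_TIME_THRESHOLD = 5.0  # gaps longer than this are assumed to be human think time

def classify_client_gaps(events):
    """events: list of (side, timestamp) tuples, where side is
    'server_response' or 'client_request'. Returns (gap_seconds, label)
    for each server_response -> client_request gap."""
    gaps = []
    for (side_a, t_a), (side_b, t_b) in zip(events, events[1:]):
        if side_a == "server_response" and side_b == "client_request":
            gap = t_b - t_a
            # Short gaps after a response are attributed to client processing
            # (e.g. JavaScript churning through rows); long gaps to the human.
            label = "think_time" if gap > THINK_TIME_THRESHOLD else "client_processing"
            gaps.append((round(gap, 2), label))
    return gaps

events = [
    ("client_request", 0.0),
    ("server_response", 0.4),
    ("client_request", 3.1),    # 2.7 s pause: likely client-side processing
    ("server_response", 3.5),
    ("client_request", 12.5),   # 9.0 s pause: likely the user thinking
]
print(classify_client_gaps(events))
```

A production analysis would of course calibrate the threshold per application rather than hard-coding it.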

Slow Server

The server teams don’t like to hear it, but the most common causes of slow application performance are the applications or the servers themselves. I’ve found that it frequently is not the network that is the cause, even though the network often gets the blame.

Modern applications are typically deployed on a multi-tiered infrastructure. There often is a front-end web server that talks with an application server. The application server in turn talks with a middleware server that queries one or more database servers for the data it needs. These servers may all talk with DNS servers to look up IP addresses or to map IP addresses back to server names. All it takes is for any one of these servers to have performance problems and the whole application runs slow. Of course, the problem is then one of identifying the slow server out of the set of servers that implement an application.
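Once per-tier timings are available from the monitoring points, isolating the slow server is simple arithmetic; here is a minimal sketch with invented server names and timings:

```python
# Illustrative sketch: attributing end-to-end response time across tiers
# from per-tier server-side timings. Names and numbers are invented.

def slowest_tier(tier_times):
    """tier_times: dict of tier name -> measured server-side time in ms.
    Returns (tier name, its time, its share of the total as a percentage)."""
    total = sum(tier_times.values())
    name = max(tier_times, key=tier_times.get)
    return name, tier_times[name], round(100 * tier_times[name] / total, 1)

# Hypothetical timings captured at the network for one user transaction:
timings = {"web": 12, "app": 35, "middleware": 18, "database": 410, "dns": 5}
tier, ms, pct = slowest_tier(timings)
print(f"{tier} tier dominates: {ms} ms ({pct}% of total)")
```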

Understanding the interactions between multiple components in an application is an essential part of understanding the root cause of performance problems. This process, called Application Dependency Mapping, is typically part of an integrated APM approach, and ideally leverages information from already in-place monitoring solutions to draw a dependency map between system components. The network provides a unique vantage point to derive these relationships, and as such the network team can provide strong value to the application and server teams.
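A toy version of dependency mapping can be sketched as building a directed graph from observed (client, server) flow pairs; the hostnames below are hypothetical, and real tools such as AppMapper Xpert work from much richer data than this:

```python
# Minimal sketch of application dependency mapping from network flows:
# each observed (client, server) connection becomes a directed edge.

from collections import defaultdict

def build_dependency_map(flows):
    """flows: iterable of (src_host, dst_host) pairs observed on the wire."""
    deps = defaultdict(set)
    for src, dst in flows:
        deps[src].add(dst)
    # Sort for stable, readable output.
    return {src: sorted(dsts) for src, dsts in deps.items()}

observed = [
    ("web01", "app01"), ("app01", "mw01"),
    ("mw01", "db01"), ("mw01", "db02"),
    ("web01", "dns01"), ("app01", "mw01"),  # duplicate flows collapse
]
print(build_dependency_map(observed))
```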

Although we can collect a lot of very rich information from the network, using packet capture tools to answer the question of “Is it the network or the application?” could take many, many hours of work. All the while, the application is running slow, affecting the productivity of anyone using that application.

I’ve used AppResponse Xpert to significantly reduce the time needed to identify why a slow application was slow. Once you have set up the proper monitoring points and some basic configuration, it is very easy to use and provides immediate value for “the network is slow” fire drills. The information gathered by AppResponse Xpert also provides input to AppMapper Xpert, which automatically draws dependency maps of critical applications.

Identifying Database Scaling Problems

A common cause of application slowness is that the application was developed with a small data set on a fast LAN development environment. Then the application is rolled out to production. It may initially run with acceptable performance. But over time, as the database grows, it becomes slower and slower. A quick analysis with AppResponse Xpert shows that one of the key middleware servers is making a lot of requests to a database server. One client request can result in many database requests or perhaps result in the transfer of a significant volume of data. Changing the database query to be more efficient typically solves the problem.
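The “many database requests per client request” symptom can be expressed as a simple fan-out ratio; the threshold below is an arbitrary illustrative value, not a product default:

```python
# Hedged sketch: flagging "query multiplication", where one client request
# fans out into many database requests. The 50x threshold is illustrative.

FANOUT_THRESHOLD = 50

def query_fanout(client_requests, db_requests):
    """Return (DB-requests-per-client-request ratio, suspicious?)."""
    ratio = db_requests / client_requests
    return ratio, ratio > FANOUT_THRESHOLD

# Hypothetical counts over a measurement window:
ratio, suspicious = query_fanout(client_requests=20, db_requests=4800)
print(f"{ratio:.0f} DB queries per client request; suspicious={suspicious}")
```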

I’ve also seen cases where a database server takes many seconds to return data to the middleware or application server. The application team can use AppResponse Xpert’s Database Monitoring module to identify the offending query. Sometimes a good development team can look at the user transaction and quickly determine which queries are likely to be the culprit; other times, the application makes so many database queries that a SQL query analysis tool is really what is needed. In the cases I’ve seen, the queries were poorly structured, sometimes joining large tables in ways that resulted in extremely long query times on production data sets. Simply rewriting the queries dropped the query times by several orders of magnitude. This is where these tools pay off. The advantage of using deep packet inspection on the network to identify problems with SQL queries is that no overhead is added to the database. This is another example of how the network team can provide value to other IT teams.

Chatty Conversation

Another typical example of problems within the application is the chatty conversation. One application server, or perhaps the client itself, will make many, many small requests to execute one transaction on behalf of the person running the application. It runs fine as long as the network latency between the client and server is low. However, with the advent of virtualization, the server team may have configured automatic migration of the server image to a lightly loaded host. This might move a server image to a location that puts it several milliseconds further away from other servers or from its disk storage system. A few milliseconds may not be much unless the application does hundreds or perhaps thousands of small requests to complete one transaction. Suddenly, the application goes from an acceptable level of performance to unacceptable performance. Of course, database size also affects the performance because the number of small requests goes up with the database size.

You need visibility into the number of requests between systems, where the systems are connected to the network, and the delays between requests. Getting a baseline of system performance against which you can measure future performance is extremely useful for identifying whether a given application is performing as expected and possibly identifying which server needs to be examined.
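A back-of-the-envelope model shows why chatty applications degrade so sharply with added latency: serialized small requests each pay one full round trip. The request counts and round-trip times below are invented for the example:

```python
# Back-of-the-envelope sketch of chatty-application behavior: when requests
# are issued one after another, total time scales with request count * RTT.
# All numbers below are invented for illustration.

def transaction_time_ms(n_requests, rtt_ms, server_time_ms=1.0):
    """Estimate total transaction time when n_requests run serially."""
    return n_requests * (rtt_ms + server_time_ms)

# Same transaction, before and after a VM migration adds 5 ms of latency:
before = transaction_time_ms(n_requests=2000, rtt_ms=0.5)
after = transaction_time_ms(n_requests=2000, rtt_ms=5.5)
print(f"before: {before / 1000:.1f} s, after: {after / 1000:.1f} s")
```

The same 5 ms shift would be invisible to an application making a handful of large requests, which is why the request count is the figure to watch.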

This kind of examination can be automated by AppTransaction Xpert, which can capture baseline transactions from the packet store of AppResponse Xpert and predict the change in their response times given different network parameters such as latency, bandwidth, and loss rate.

Slow Network Services

Finally, the problem may be due to slow network services. This isn’t the network itself, but services that most network-based applications depend upon for proper operation. Consider an application that makes queries to a DNS server, but the primary DNS doesn’t exist, so the app must time out the first request before attempting to query the second DNS server. I’ve seen applications that would have a 30-60 second delay upon the first execution, but would then run fine for a while. Periodically, the application would be very slow, but run fine the rest of the time. Intermittent problems are very challenging to diagnose, so this is where having something like AppResponse Xpert watching and recording all the transactions is extremely helpful. Just identify the time of the slow performance and look for something in the data. In this case, it would be an unanswered DNS request, which was successful when tried against the secondary DNS server.
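Mining a transaction store for unanswered DNS queries can be sketched as follows; the log format here is a simplification invented for the example, not AppResponse Xpert’s actual data model:

```python
# Illustrative sketch: spotting unanswered DNS queries in a simplified
# transaction log. A real packet store would be mined the same way: look for
# queries to the primary that only succeed on retry against the secondary.

def unanswered_dns(records):
    """records: list of dicts with 'query', 'server', 'answered' fields.
    Returns the names whose query went unanswered (forcing a timeout)."""
    return [r["query"] for r in records if not r["answered"]]

log = [
    {"query": "app.example.internal", "server": "dns-primary", "answered": False},
    {"query": "app.example.internal", "server": "dns-secondary", "answered": True},
    {"query": "db.example.internal", "server": "dns-secondary", "answered": True},
]
print(unanswered_dns(log))  # the primary never replied for this name
```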


Accurately diagnosing application performance can be impossible or very time consuming with the wrong tools. With the right tools and a good installation, where the tools capture the necessary data, the analysis and diagnosis can proceed very quickly. In addition, these tools not only help to defend or troubleshoot the network, but also provide value to other IT teams in the organization. I know of one site that went from not being able to help diagnose slow applications, to being able to provide deep visibility into what an application is doing from the network perspective, and providing real value to the application teams to solve the problem.

We thank Terry Slattery and OPNET for this Article

Ice Wireless rolls out 3G Cellular in Canada’s Northwest Territories

Thaw point: fresh trio unwrap Ice PoPs, hope warm reception to Arctic rollout melts digital divide

Ice Wireless – an independent far-north Canadian cellco – and its associate Iristel – a Canadian competitive local exchange carrier (CLEC) – have entered into a ‘multi-million’ dollar contract with China’s Huawei Technologies to roll out a 3G cellular network for rural and remote communities in Canada’s Northwest Territories, Yukon and Nunavut by the end of 2013. Presently, residents of the three territories of Northern Canada have the lowest availability and highest costs for telecommunications services in the country. According to a recent report by the Canadian Radio-Television and Telecommunications Commission (CRTC), only 66% of the North has access to wireless internet and just 48% has access to 3G cellular services – compared to 99% in the rest of the country. The network upgrade and expansion by Ice Wireless-Iristel, using Huawei networking equipment, will improve 3G and wireless broadband services for 60,659 people across the three territories, and provide access to services that were previously unavailable to local residents.
Ice Wireless-Iristel will also provide Northern residents with fixed line services, including home telephony through Iristel’s Canada-wide voice-over-internet protocol (VoIP) CLEC service, enabling local and long-distance calling. ‘We are excited to be part of the government’s mission to bridge the digital divide for the Northwest Territories,’ said Samer Bishay, CEO of both Ice Wireless and Iristel, adding, ‘We aim to ensure rural and remote households, businesses and community organisations in the North have access to prices and services that are fast, affordable, and every bit as good as those enjoyed by the rest of Canada, if not better.’ Established in 2005, Ice Wireless currently provides voice, video and data services, including GSM cellular, to Yellowknife, Inuvik, Hay River, Aklavik, Behchoko and Whitehorse, and operates a network covering 70% of Northwest Territories and 78% of Yukon.

Thanks to TeleGeography for this Article

Rogers Adds 10 New Cities for LTE

Perfect ten: figures shaping up to fit Rogers’ Long Term model curve

Canada’s Rogers Communications has announced that it is launching 4G Long Term Evolution (LTE) mobile broadband services in ten additional cities, as it drives towards an end-of-year target of 60% 4G population coverage. The new locations are: Barrie, Burlington, Cambridge, Kingston, Kitchener, London, Oakville and Waterloo (all in Ontario province), Edmonton (Alberta) and Quebec City. The expansion adds to Rogers’ existing LTE coverage of approximately 40% of the population, including areas in and around Montreal, Toronto, Ottawa, Halifax, St. John’s, Moncton, Calgary and Vancouver.

Thanks to TeleGeography for this Article

Customer Experience Strategies Summit

Learn More About Virtual Customer 101 at…


Introducing Virtual Customer 101

This new process will help you understand how to create a vast army of Virtual Customers who will infiltrate your customer service infrastructure at every level and gather vital intel you can use to capture the hearts of your customers. And once you have their hearts, their wallets surrender on their own.


IQ Services - Application Feature Testing

IQ Services Introduces Virtual Customer 101

Meet Virtual Customer 101™

A process that measures your technology-focused Customer Experience.

Continuing its innovative approach to communications and contact center evaluation, IQ Services introduces VC101™. VC101 deploys a Virtual Customer™ to test and evaluate contact center technology against the client’s desired customer service experience.

To implement the Virtual Customer, we first meet with the client’s internal team to understand the guidelines for developing their Virtual Customer. From this initial meeting, we develop a proposal to take the client through the Communication Intelligence Assessment™ (CIA™) process. The CIA meeting allows us to ask probing questions to ensure the VC101 process addresses the critical issues for each company. Once the Virtual Customer is created, it is programmed to provide an unbiased and objective report on the real experience customers are having with the client’s contact center. VC101 allows companies to address problems quickly and determine whether their brand promise is being delivered at every level.

Jim Jenkins, IQ Services CEO, states, “When you rely upon technology to deliver a great customer service experience, VC101 makes sure the technology is supporting your brand promise.” For more on VC101, click here.

Our mission is to give companies the confidence that their contact center and communication solution investments deliver customer satisfaction and cost-saving returns.

“We are thrilled to have introduced a new way for companies to prevent problems, address issues early, and maintain ROI,” says Russ Zilles, President & COO of IQ Services.