Avoid Lost Sleep Over BYOD

Upon waking up in the morning and pressing the snooze button five times, what’s next in your morning ritual? Since the alarm is probably on your smartphone, you glance to see if there are any network fires you’ll be facing at the office. And once you’re at work, you likely continue to check for smartphone messages throughout the day – a routine shared by thousands of your colleagues around the world.

This routine is part of the booming Bring Your Own Device (BYOD) trend that will impact your job as an IT professional for years to come. Ignorance can be bliss, but as users increasingly access work services from mobile devices, network teams need to act today to ensure trouble-free interactions with these applications from any device. In this article we'll cover five essential performance monitoring areas so you can rest easy knowing your network can handle BYOD challenges.

New Security Nightmares

Security challenges are often the first things IT professionals think of with BYOD: lost data, stolen devices, and increased malware exposure. Look into applying server-based computing (Citrix® or Windows® Terminal Services), virtual desktops, and security and DLP agents for Windows, Android, and OS X devices. Ultimately, secure the app and the data; don't rely on users.

IPv4 Exhaustion

Added devices demanding IPs at the office may quickly exhaust your locally available supply of IPv4 addresses. The switch to IPv6 may come sooner than you think; it's time to begin IPv6 testing and transition planning now.

Expanding Troubleshooting Puzzle

With BYOD, the number of environments you're troubleshooting has expanded. You may now be resolving user problems that involve accessing business apps over mobile networks or a coffee shop's WiFi. The solution? Implement monitoring solutions that can mimic user transactions through active polling (a minimal sketch follows this list), and incorporate multi-segment analysis to pinpoint where performance degradations are occurring.

Overloaded WiFi Systems

As users bring phones, tablets, and netbooks to work, wireless bandwidth will be in high demand. Prevent your inbox from being flooded with user complaints by closely monitoring and baselining wireless bandwidth and application response times. In addition, ensure that access point placement and signal strength are optimized where people are most likely to use mobile devices.

Overwhelming Bandwidth Demand

Added personal devices mean more demand for network services. Regularly review historical consumption and trends, and verify that bandwidth demand comes from legitimate activity rather than streaming video or smartphone Skype™ sessions. If this is an issue, prioritizing traffic and developing user access policies are essential, especially if bandwidth-intensive applications are being accessed over WiFi.
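As a minimal illustration of the active-polling idea mentioned under Expanding Troubleshooting Puzzle, the hedged Python sketch below times a synthetic HTTP transaction and flags responses that drift above a baseline. The URL, baseline threshold, and polling interval are illustrative assumptions, not values from the article.

```python
# Minimal active-polling sketch: periodically time a synthetic HTTP
# transaction and flag slow responses. The URL, baseline, and interval
# below are illustrative assumptions, not values from the article.
import time

import requests  # third-party library: pip install requests

APP_URL = "https://intranet.example.com/login"  # hypothetical app endpoint
BASELINE_SECONDS = 2.0  # assumed acceptable response time
POLL_INTERVAL = 60      # seconds between synthetic transactions

def poll_once():
    start = time.monotonic()
    try:
        resp = requests.get(APP_URL, timeout=10)
    except requests.RequestException as exc:
        print(f"transaction failed: {exc}")
        return
    elapsed = time.monotonic() - start
    if elapsed > BASELINE_SECONDS:
        print(f"SLOW: {elapsed:.2f}s (HTTP {resp.status_code}) exceeds baseline")
    else:
        print(f"ok: {elapsed:.2f}s (HTTP {resp.status_code})")

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(POLL_INTERVAL)
```

Running a poller like this from the same networks your users occupy (office WiFi, guest WiFi, a mobile hotspot) gives you per-environment baselines to compare when complaints come in.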

Addressing the BYOD challenge through comprehensive monitoring, optimized security, and a proactive response to long-term bandwidth growth will keep your network running smoothly – no matter how massive the demand from mobile devices grows.

Thanks to Network Instruments for this article


Bell Aliant’s revenues fall marginally; exceeds 100,000 IPTV users

Atlantic Canadian telco Bell Aliant has reported a 21% year-on-year rise in net income to CAD92 million (USD91.9 million) for the third quarter of 2012, on revenues which fell by 0.4% to CAD697 million, as growth in IPTV, broadband internet, data and wireless turnover largely offset declines in local and long-distance telephony revenue. ‘FibreOP’ fibre-to-the-home (FTTH) broadband customers grew by 16,500 to 92,000 in the three months ended 30 September 2012, including migrations from DSL/fibre-to-the-node (FTTN), while total quarterly net broadband additions were 7,500, bringing the total high speed user base to 913,600, up 2.4% from a year earlier. IPTV customers increased by a net 12,300 in three months to 107,400 at the end of September 2012, as FibreOP (FTTH) IPTV customers grew by 14,200 to reach 79,500, a portion of which were migrations from Aliant’s FTTN TV service.

Thanks to TeleGeography for this article

Shaw’s EBITDA climbs 4.2% in fiscal 4Q

Western Canadian cableco Shaw Communications has reported that net profit in its fiscal fourth quarter ended 31 August 2012 dropped by 20.4% year-on-year to CAD133 million (USD133.4 million), compared to CAD167 million in the same period of 2011, as competition from cable and wireline rivals intensified. However, the triple-play operator's quarterly EBITDA rose by 4.2% to CAD501 million, helped by the group's satellite and media businesses outperforming its cable division. Cable-only revenue increased by 2% to CAD803 million, mainly due to Shaw charging higher rates, while consolidated revenue climbed 3% to CAD1.21 billion; cable operating income was virtually flat year-on-year at CAD396 million. Total basic cable subscribers declined by 16,000 in a year to 2.219 million customers at the end of August 2012, while cable broadband subscribers increased by 35,000 year-on-year to 1.912 million, and fixed telephony users grew by 130,000 to 1.364 million.

Thanks to TeleGeography for this article

Rogers revenues up 2%, says it could look at selected Astral acquisitions

Canadian quadruple-play operator Rogers Communications has reported that consolidated revenues climbed 2% year-on-year to CAD3.18 billion (USD3.21 billion) in the three months ended 30 September 2012, while operating income (EBIT) increased by 5% to CAD1.29 billion and net profit rose 1% to CAD495 million. Respective year-on-year turnover increases of 2%, 16% and 1% in mobile services, mobile equipment sales and cable network operations offset revenue declines at Rogers’ business services and media divisions.

Rogers activated 707,000 smartphones in the third quarter of 2012, up from 609,000 in 3Q11, increasing the percentage of subscribers with smartphones to 65% of its total post-paid base. For the three months and nine months ended September 2012, wireless data revenue increased by 18% and 16%, respectively, from the corresponding period of 2011 to CAD719 million and CAD1.995 billion, reflecting the continued penetration and growing usage of smartphones, tablet devices and wireless laptops, as well as increased data roaming. For the three- and nine-month periods wireless data revenue represented 41% and 40%, respectively, of total network revenue, compared to 36% and 35% in the corresponding periods of 2011. The wireless data component of blended ARPU in 3Q12 increased by 15.4%, which was partially offset by an 8.3% decline in the wireless voice component as a result of the heightened level of competitive intensity in Canada’s mobile voice market. Meanwhile, Rogers expanded its 4G LTE mobile broadband network to 24 cities including Victoria, Edmonton, Regina and Quebec City, reaching 54% of the Canadian population.

Rogers added 29,000 net additional cable broadband subscribers in Q3 to reach a total of 1.84 million, while it lost a net 10,000 cable TV customers in the quarter, for a total of 2.24 million.

Elsewhere, Rogers has this week indicated that it would consider bidding for selected assets of broadcasting group Astral Media if rival telecoms heavyweight BCE (Bell Canada) fails to overturn a decision from the Canadian Radio-television and Telecommunications Commission (CRTC) blocking a CAD3.4 billion takeover attempt. BCE is seeking help from the federal government to overrule the CRTC’s decision and allow the purchase of Astral, but initial signs are that such an outcome is unlikely.

Thanks to TeleGeography for this article

Turbocharge Network Assessments with APM

NetCraftsmen does a lot of network assessments. Customers want us to tell them why their “network is slow”. Once again, there’s the assumption that there’s something wrong with the network. What they really want to know is why their applications are slow and what they can do about it. That’s where Application Performance Management (APM) products provide tangible benefits. APM technology has many forms. Some systems add instrumentation options to client systems, browsers, or in the call stacks of production servers. Other systems passively capture and analyze network traffic. My focus in this article is on APM that passively watches applications on the network. We like to think of it as turbocharging the assessment because we can produce exceptional results quickly and gain views into the network that we can’t obtain with other tools.

In a number of cases where there were reports of the network being slow, I've found that it wasn't actually the network that was at fault. It was something else in the environment that caused application slowness. Early in my consulting career, it was a backup that was running during the day because the backup set had grown to the point that it couldn't finish in the allotted time. The network utilization caused by the backup created congestion, which in turn caused packet loss. Small amounts of packet loss will cause large degradations in TCP performance, as shown in my blog on TCP Performance and the Mathis Equation. In a case this past year, we found congested inter-data center links were causing significant application slowness. Using APM on this case provided valuable insight and allowed us to quickly identify the source of the packet loss.
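For readers who don't follow the link: the Mathis Equation approximates steady-state TCP throughput as MSS / (RTT × √p), up to a small constant, where p is the packet loss rate. A minimal Python sketch, with illustrative values that are assumptions rather than figures from any engagement:

```python
# Hedged sketch of the Mathis approximation for steady-state TCP
# throughput: rate <= C * MSS / (RTT * sqrt(p)), with C on the order
# of 1. The values below are illustrative assumptions.
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_seconds, loss_rate, c=1.0):
    """Approximate achievable TCP throughput in bits per second."""
    return c * (mss_bytes * 8) / (rtt_seconds * sqrt(loss_rate))

# A 1460-byte MSS over a 2 ms RTT path with 0.01% loss:
print(f"{mathis_throughput_bps(1460, 0.002, 0.0001) / 1e6:.0f} Mbps")  # ~584
# The same path with 1% loss -- 100x the loss, 10x less throughput:
print(f"{mathis_throughput_bps(1460, 0.002, 0.01) / 1e6:.0f} Mbps")    # ~58
```

The square root in the denominator is the key: throughput falls off with the square root of loss, so even fractions of a percent of loss matter on fast links.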

Focusing on the Network

A network assessment would normally be focused on network design and implementation, perhaps with some analysis of network operations. But that approach ignores the real reason for most network assessments – slow application performance.

In almost every network, it is easy to find areas in which to make improvements. Occasionally, we’ll have an engagement where someone wants us to check the network against industry best practices. But most of the time, we’re looking at a network with problems.

There are almost always problems with spanning tree, duplex mismatch, and topology designs. We know how to look for one of these problems, spot it quickly, and determine the extent to which the problem exists. What is missing from the basic network assessment is the evaluation of the applications themselves. That’s where APM provides valuable insights.

APM: The Assessment Booster

We like to use APM to validate what we’re seeing on the network side. Frequently, it provides visibility into problems that we didn’t otherwise know existed. In a recent case, we were working on a network assessment and had identified several minor network improvements that could be made. But there wasn’t anything that jumped out as a significant contributor to poor application performance, which was why we had been contracted to do the assessment.

We deployed OPNET's AppResponse Xpert appliance in the data center, where we could look at traffic for most of the key applications. We quickly identified that the network was indeed causing communications problems. Within a day, we knew that there were very high volumes of TCP retransmissions in the applications. A little more investigation allowed us to determine the source of the retransmissions. We used SNMP polling to find a set of inter-data center interfaces that showed pronounced peaks of discards during the times when we observed high TCP retransmissions. The number of discards didn't look far out of place, considering the volume of data transiting the 1G interfaces. However, the APM analysis showed that some applications were experiencing 0.08% retransmissions.

Based on our work with the Mathis Equation (see the link above), we knew that something was causing TCP retransmit timers to trigger: either packet loss or very high jitter existed. Armed with that knowledge, we started checking the path in detail. For a description of the analysis, see my blog Application Analysis Using TCP Retransmissions, Part 1.

We found that the 1G inter-data center links had been configured with extremely large buffers – enough buffering to extend the normally 2ms RTT to 14ms. So even though the interface stats didn't look too bad, TCP was timing out some packets and retransmitting them. The excessive buffering was circumventing TCP's flow control mechanisms, and congestive collapse occurred when the load exceeded the link capacity.
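Plugging the case's figures into the approximation from the earlier sketch shows why performance fell off so sharply. Treating the observed 0.08% retransmission rate as a rough stand-in for loss (an approximation, not an exact equivalence), the buffer-inflated RTT caps each flow far below the 1G link speed:

```python
# Applying the Mathis approximation to the case's figures, using the
# observed 0.08% retransmission rate as a rough stand-in for loss.
from math import sqrt

MSS_BITS = 1460 * 8  # assumed standard Ethernet MSS
LOSS = 0.0008        # 0.08% retransmissions

for label, rtt in (("normal 2 ms RTT", 0.002), ("buffer-inflated 14 ms RTT", 0.014)):
    rate_mbps = MSS_BITS / (rtt * sqrt(LOSS)) / 1e6
    print(f"{label}: ~{rate_mbps:.0f} Mbps per flow")  # ~207 and ~30 Mbps
```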

Upon further analysis, we also found duplicate TCP ACKs, which indicates that duplicate packets arrived at the destination. This is another indication that TCP timed out and retransmitted packets. The retransmitted packets then consume additional bandwidth, exacerbating the problem. Without APM, it would have been much more difficult to spot the problem and eventually determine its cause. Our primary recommendation was to increase the link capacity. The secondary recommendation was to reduce the buffering to less than 4ms of data at 1Gbps.
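As a back-of-the-envelope check on that buffer recommendation, queuing delay is simply buffered bytes × 8 ÷ link speed. A small sketch, using the 1 Gbps and millisecond figures from the case:

```python
# Back-of-the-envelope buffer sizing: queuing delay = bytes * 8 / bps.
# The 1 Gbps link speed and the 12 ms / 4 ms figures come from the case.
LINK_BPS = 1_000_000_000  # 1 Gbps inter-data center link

def queuing_delay_ms(buffer_bytes):
    return buffer_bytes * 8 / LINK_BPS * 1000

def buffer_for_delay_bytes(delay_ms):
    return int(delay_ms / 1000 * LINK_BPS / 8)

observed = buffer_for_delay_bytes(12)  # the ~12 ms of added delay observed
print(f"{observed:,} bytes queued adds {queuing_delay_ms(observed):.0f} ms")  # 1,500,000 bytes
print(f"{buffer_for_delay_bytes(4):,} bytes stays within the 4 ms ceiling")  # 500,000 bytes
```

In other words, roughly 1.5 MB of queued data was sitting in front of each packet at peak, and the recommendation amounts to capping the queue at about 500 KB.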

Rapid APM Deployment

One of the benefits of using APM in a network assessment is ease of deployment. Network assessments need to happen fast. The customer is losing money and the network team is being blamed for the problem. It isn’t acceptable to wait a month while someone methodically gathers data, analyzes it, and finally writes a report. We like tools that quickly produce useful results. AppResponse Xpert is one of those tools.

Installation primarily consists of determining where in the network the application flows can be obtained. A span port is needed to provide the raw data to the APM system. In a permanent installation, we often recommend a span port aggregator, such as those sold by Gigamon, Anue/Ixia, or Net Optics (the Director). It is useful to be able to get data from large, multi-tiered applications so that if a back-end server is slow, or there is a networking problem within the data center, it can be easily detected.

Once span data is being fed to the APM system, we determine groups of clients and groups of servers. AppResponse Xpert automatically identifies applications from the traffic. We find it useful to build a 'business group' of clients for each important application and a separate group of servers for the same applications. We can then work on a per-application basis, identifying each application that has a problem. Do we see network-induced problems like TCP retransmissions and duplicate TCP ACKs, or do we see slow server response times? We might also see that data transfer times dominate in an application, indicating that the application architecture may need to change or that higher speed links may need to be used.
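To make that per-application triage concrete, here is a minimal sketch of the decision process the paragraph describes. The metric names and thresholds are illustrative assumptions, not AppResponse Xpert output:

```python
# Hedged sketch of per-application triage. The metric names and
# thresholds are illustrative assumptions, not AppResponse Xpert output.
apps = [
    {"name": "order-entry", "retrans_pct": 0.9, "server_resp_ms": 40,  "xfer_share": 0.2},
    {"name": "reporting",   "retrans_pct": 0.0, "server_resp_ms": 900, "xfer_share": 0.1},
    {"name": "imaging",     "retrans_pct": 0.1, "server_resp_ms": 50,  "xfer_share": 0.7},
]

def triage(app):
    if app["retrans_pct"] > 0.5:     # retransmissions/duplicate ACKs dominate
        return "network-induced: check paths, buffers, and loss"
    if app["server_resp_ms"] > 500:  # slow responses from the server itself
        return "server-side: check the back-end tiers"
    if app["xfer_share"] > 0.5:      # data transfer time dominates
        return "transfer-bound: revisit app architecture or link speed"
    return "healthy"

for app in apps:
    print(f"{app['name']}: {triage(app)}")
```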

Summary

We have seen how application analysis can highlight network problems that would otherwise remain hidden. Of course there are certain classes of problems that require APM instrumentation outside of the network domain. And of course, APM can’t help with network design review or identify redundancy failures — that’s where a comprehensive network assessment provides value. But the addition of APM to network assessments provides a valuable look at how the applications use the network. The result is a turbocharged network assessment, quickly delivering results that are useful to more than just the network team.

Thanks to Terry Slattery and OPNET for this article

Promiscuity…the real problem behind many network failures

What's so bad about being promiscuous? After all, a variety of companies have built their reputations endorsing promiscuity. That chorus of voices is lulling enterprises and SMBs into a false sense of efficiency and effectiveness. Of course, the promiscuity I refer to here is promiscuous mode. Aside from Net Optics' Phantom Virtual Tap, most solutions available today require that the virtual switch be placed in promiscuous mode, which cuts switch capacity by up to 50%. This model of network architecture demands an abundance of memory and computing power and "steals" resources from the hypervisor. The Phantom operates in the kernel, requiring minimal resources by comparison.
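For context, a port in promiscuous mode delivers every frame it sees up the stack, not just frames addressed to that host – which is exactly why it is expensive for a virtual switch. A minimal capture sketch using the third-party scapy library (the interface name is an assumption, and capture typically requires root):

```python
# Hedged capture sketch using the third-party scapy library (typically
# requires root). The interface name is an assumption. scapy captures
# in promiscuous mode by default, so every frame on the segment is
# delivered -- not just frames addressed to this host, which is exactly
# what makes promiscuous mode expensive for a (virtual) switch.
from scapy.all import sniff

sniff(iface="eth0", prn=lambda pkt: print(pkt.summary()), count=10)
```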

Why is this issue so critical to address?  Currently, there’s a lack of visibility into inter-VM traffic. With the Phantom, Net Optics released the first ever virtualization Tap, with the goal of providing the same level of visibility in the virtual network that we provide in the physical network.

Many companies operate blindly in the virtual sphere without realizing it. The virtual networking of hypervisors leaves traffic traversing virtual networks invisible to physical tools. If you do not have fine-grained monitoring and visibility across your virtual network, then you are blind to security threats, financial risks, and the unknown. And in this case, what you don't know can hurt you.

Inter-VM traffic is invisible to physical security and monitoring tools, and the problem isn't properly addressed by installing agents on every VM or by spanning virtual switch ports: both approaches place a significant burden on the hypervisor without providing total visibility.

Inter-VM visibility is best achieved with an external virtualization Tap. Many existing solutions serve only a specific set of tools or functions – limiting or eliminating your "freedom of choice" when it comes to consuming the monitored traffic.

Since the Phantom is engineered to bridge virtual traffic to physical monitoring tools, it instantly lets you find security breaches and resolve problems before they affect the integrity of your data center. As its name implies, the Phantom is non-intrusive and non-disruptive. It gets the job done without requiring virtual appliances, promiscuous probes or modification to existing environments.

Written by Bob Shaw at Net Optics

Orascom to change name, take control of Canadian ops

Following on from Vimpelcom’s USD6.5 billion takeover of Orascom Telecom Holding (OTH) last year, the Egyptian company has said it is changing its name to Global Telecom Holding SAE. According to a report by Reuters, the two companies have also agreed to provide one another with technical and commercial services for a period of two years to improve the efficiency of their businesses. Orascom also said it planned to take control of Canada’s Globalive Investment Holding (the parent of cellco Wind Mobile), after Canada changed the law governing foreign ownership of local businesses. The law allows Orascom’s Canadian holding company to convert non-voting shares in Globalive Investment Holding into voting shares, taking its proportion of the voting shares to 65.08% from 32.02%. Orascom will submit the name change, the new arrangement with Vimpelcom and the Canadian shareholding changes to its own shareholders for approval next month.

Thanks to TeleGeography for this article