Virtual Desktops: Network Benefits and Challenges

Like most network teams, you’re probably comfortable with virtualization, having already migrated servers and storage to the virtual world. But if your organization is now considering implementing a virtual desktop infrastructure (VDI), don’t worry: you’re in good company.

While the number of firms using VDI currently sits between 5 and 10 percent, Gartner forecasts that the technology will become mainstream within two years. Of particular note, a Gartner user survey conducted in 2011 found that more than 20 percent of SMB respondents planned to implement virtual desktops in the next 12 months.

How VDI Impacts Your Job

You might look at the above numbers and think, “This doesn’t apply to me. I’ve seen VDI: it was expensive to deploy, speeds were slow, and users could tell the difference between it and a regular desktop experience.”

But to steal a line from Bob Dylan, “…the times they are a-changin’.”

The latest generation of VDI technologies addresses many previous deployment and end-user challenges faced by earlier adopters. Coupled with education efforts from Citrix and VMware, there’s good reason to believe VDI will be coming to a network near you.

As with server virtualization, the implementation decision will come from elsewhere within the organization, but performance management challenges will fall to the network team to resolve. With this in mind, the following matrix outlines the top benefits and challenges of VDI from the network team’s perspective.

VDI Benefits and Challenges

Benefit: Simplified Management. Many aspects of managing users become easier, from deploying new desktops in minutes to issuing patches and upgrades. Response time is faster and the need for on-site management is greatly reduced.
Challenge: Handling Increased Bandwidth Demand. With VDI, the network needs to handle aggregate traffic from many desktops to a single location in the data center. Delivering graphics-intense or video applications in a virtual environment may require upgrading to 10 Gigabit in the core and Gigabit to the desktop.

Benefit: Improved Data Security. Rather than residing on a local laptop that can be stolen, the data resides on a protected server separate from the user’s device.
Challenge: Ensuring User Experience. With processing and storage handled by a centralized server rather than the client, the delivery of results introduces a potential point of delay. To understand user experience pre-deployment, benchmark application response times. Monitor post-deployment to ensure delivery times stay in line with user expectations.

Benefit: Greater Control. Lock down the desktop environment while allowing for network-approved customization by users.
Challenge: Collaborating with IT Teams. The centralization of infrastructure through virtualization requires teams to work together in solving performance problems. Good communication between team managers is essential for successful service delivery management.

Benefit: User DR Made Easy. The greatest challenge in helping users recover from system meltdowns usually lies in recovering their files; you can be at the mercy of the user to back up their data regularly. With their data automatically stored on servers, this is no longer an issue and quick recovery is almost a certainty.
Challenge: Addressing Monitoring Complexities. Isolating the cause of performance problems can become more difficult with the introduction of new virtual environments and underlying physical servers. Being able to view and correlate performance between the virtual desktop and underlying hosts is essential for quick problem resolution.

Benefit: User-Owned Devices. Whether laptop, home desktop, or smartphone, how users access data doesn’t matter with VDI. Users access the desktop via a secure browser connection, and rights and resources can be automatically created, provisioned, and terminated.
Challenge: Consider Clustering and Redundancy. When a single server failure can impact thousands of users, clustered, redundant servers and connectivity are a must for minimizing the impact of outages.

Next Steps

With a basic understanding of how VDI can both streamline management processes and present new challenges to your network team, where do you go from here? Below are several resources to help you prepare for your VDI rendezvous.

Thanks to Network Instruments for this article.

OPNET Talks End-to-End Management and Monitoring of Unified Communications

Unified Communications (UC), and especially its real-time applications, is unique because of user expectations. Managing UC is important because it ensures service availability and service performance, as well as other aspects of UC such as security and compliance.

“At the end of the day, the user expectation is that real-time communication services are available any time, all the time,” Gurmeet Lamba, VP of R&D for Unified Communications Management at OPNET, told TMCnet in an exclusive interview.

For example, if your phone is not connected, you can’t make a phone call. A user expects the phone to work every time it’s used. If you call somebody on a cell phone but the voice is broken, you can’t complete your conversation. In both cases, the service performance is not adequate and the user’s expectations are not met.

“The reason the user expectation is so high is because it is so critical to the users living their lives,” said Dave Roberts, director of product management, Unified Communications, OPNET. “Your car should start, your lights should turn on, your shower should run.”

End-to-end in UC means managing and monitoring the breadth and the depth of all components involved in orchestrating a successful communications session. Making this happen in today’s world involves a number of components.

For instance, in order to complete a conference call, all parties’ phones must work, the network to the conferencing server must work, and the conferencing server itself must work properly.

End-to-end management and monitoring means gaining visibility into the performance of every single component involved in the complete communication session, across both its breadth and its depth. The breadth consists of the applications: unified messaging, the conferencing server, devices, call management servers, and so on. The depth includes the client (such as a phone or an application on your computer), the configuration of the application, the network, the virtual server, and the physical server.

“You can see the technology stack from top to bottom and from left to right. All of it has to work,” explained Lamba. “Communication is the oil that keeps everything moving.”

When it comes to the ideal end-to-end management and monitoring solution for UC, Roberts said, “The first goal would be to have 100 percent visibility to all of the information at all times. You would have to know every configuration of everything that is involved with the communication and every state of everything happening on the network at any instance. And then, it would have to have a way to correlate and use that information to do a few things, including detect problems, analyze information to find the cause of the problem, and fix the problem.”

Article from TMCnet.com by Amanda Ciccatelli on OPNET

The Impact of Network Problems on Application Performance

Welcome to my first blog post on apmmatters.com.  I am a consultant at Chesapeake Netcraftsmen and I’ve been writing blogs for some time at Netcraftsmen about topics related to network operations and network management.  For this article, I’ll focus on the network problems that impact applications. These are problems that are relatively common, but that few people running networks seem to acknowledge as having a significant impact on applications.

I want to start by taking a look at basic application performance and causes of slow performance.  Measuring application performance as users experience it is a fundamental APM best practice, and goes beyond monitoring network performance metrics. Assuming this is done correctly, and we verify a true end-user performance issue, how does the support team determine the root cause? Let’s assume a modern, multi-tier application that includes an application user interface server, a database server, a SAN for data storage, VMotion to move the server images among several possible server hardware systems, multiple network interfaces, a multi-tier network infrastructure, and dependencies on other services like WINS or DNS for server name resolution.

It is often difficult to know where all the components are and which components are talking with which other components. For this reason, application dependency mapping is also a fundamental component of APM.  The SAN team may move the disk image from one storage system to another. There may be network contention at critical times on an important network interface. A duplex mismatch or an incorrect network teaming configuration may exist at the server’s connection to the network. Or the database queries made by the application server may be inefficient, causing large delays for some operations. A server configuration that references the address of a decommissioned DNS or WINS server may cause application slowness whenever the server attempts to use the decommissioned name server.

Network Problems That Affect Applications

Unfortunately, the IT and server teams rarely have the tools that allow them to easily determine what component of a complex application is not working correctly. There could indeed be network problems. I find that a lot of IT staff think that 1% packet loss is a small number and should not impact network traffic. So they ignore common sources of packet loss, thinking that the applications using that path won’t be adversely affected. In reality, a very small amount of packet loss has a big impact on TCP throughput, which in turn affects the applications that depend on TCP. I recommend investigating any interface that has more than 0.0001% packet loss. The chart below shows the impact of 0.0001% packet loss on a 1 Gbps link. The other significant factor in throughput is the round-trip time of the connection, which I’ve plotted as three separate curves.

Effect of 0.0001% packet loss on a 1 Gbps link
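To see why such a small loss rate matters, a common rule of thumb is the Mathis approximation, which bounds a single TCP flow’s throughput at roughly MSS / (RTT × √p). The following is a minimal sketch of that calculation; the MSS and RTT values are assumptions for illustration, not the exact figures behind the chart.

```python
# Rough ceiling on single-flow TCP throughput under random packet loss,
# using the Mathis approximation: throughput ~= MSS / (RTT * sqrt(p)).
import math

def tcp_throughput_bps(mss_bytes: int, rtt_seconds: float, loss_rate: float) -> float:
    """Approximate maximum TCP throughput in bits per second."""
    return (mss_bytes * 8) / (rtt_seconds * math.sqrt(loss_rate))

LINK_BPS = 1e9            # 1 Gbps link
LOSS = 0.000001           # 0.0001% packet loss
MSS = 1460                # typical Ethernet MSS in bytes (assumed)

for rtt_ms in (1, 10, 100):   # three round-trip times, like the chart's three curves
    ceiling = min(tcp_throughput_bps(MSS, rtt_ms / 1000.0, LOSS), LINK_BPS)
    print(f"RTT {rtt_ms:>3} ms -> ~{ceiling / 1e6:,.0f} Mbps usable on a 1 Gbps link")
```

With a 100 ms round-trip time, even this “negligible” loss rate limits a single TCP flow to roughly 100–120 Mbps, an order of magnitude below the link rate.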

Error Loss

Duplex mismatch is the source of packet loss that I most frequently encounter. Many organizations still hard-code speed and duplex settings because they were burned by problems back when the standards were new and devices did not correctly auto-negotiate duplex settings. A duplex mismatch will work for low traffic volumes, but the packet loss increases significantly as the volume increases. These errors are easy to spot because the interfaces will show high FCS errors and runts on the full-duplex interface and late collisions on the half-duplex interface.
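If you collect interface counters centrally, spotting that signature can even be scripted. Below is a simple sketch; the counter names, threshold, and sample values are hypothetical, and in practice you would pull the real counters via SNMP or your monitoring platform.

```python
# Flag interfaces whose error counters match the classic duplex-mismatch signature:
# FCS errors and runts on the full-duplex side, late collisions on the half-duplex side.
# The sample counters below are made up for illustration.
interfaces = {
    "Gi1/0/10": {"fcs_errors": 15_230, "runts": 4_210, "late_collisions": 0},
    "Gi1/0/11": {"fcs_errors": 2, "runts": 0, "late_collisions": 0},
    "Fa0/24":   {"fcs_errors": 0, "runts": 0, "late_collisions": 9_800},
}

THRESHOLD = 100  # arbitrary cutoff to ignore background noise

for name, c in interfaces.items():
    if c["fcs_errors"] > THRESHOLD and c["runts"] > THRESHOLD:
        print(f"{name}: likely the full-duplex side of a mismatch (FCS errors and runts)")
    elif c["late_collisions"] > THRESHOLD:
        print(f"{name}: likely the half-duplex side of a mismatch (late collisions)")
```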

I’ve also seen bad optical patch cables cause error loss. Alcohol swabs should be used on connections to remove dust and dirt from the ends of cables. Optical cable inspection microscopes should be used on questionable cables before putting them into use. Remember to practice safe optical networking and make sure that there is no laser present when you check a connector.

Note that UDP doesn’t incorporate flow control and will continue to send packets at whatever rate the application sends them. In many cases, this makes the problem worse because more packets add to network congestion and packet loss.

Congestion Loss

Interface congestion is another significant source of packet loss. Congestion is typically caused by multiple high-speed interfaces that are trying to send data over one egress interface. The egress interface may run with little congestion during off-peak hours, reducing the daily average packet loss to a percentage that makes it look like it isn’t a significant problem. However, looking at the statistics during peak hours shows packet loss that affects the applications.

Another source of packet errors is congestion within the network hardware. In a recent consulting engagement, I found a set of servers with 1 Gbps NICs that were clustered on consecutive ports of one switch interface card. The blade happened to be reasonably old, and the server traffic was congesting the ASIC that serviced that set of ports. The result was 0.1% average packet loss with much higher peaks. The applications on these servers were TCP-based, so this was a key source of application performance problems for everything running on those servers. The solution was to upgrade the switch interface card or to distribute the servers across other ports on the blade so that ASIC congestion didn’t occur.

Latency

Of course, excessive latency can also have a big impact on application performance. Latency typically becomes a factor in poorly written applications that perform many back-and-forth operations. An application that requires 100 round trips to query a database for the data that it needs would work fine in the development environment where the round trip latency would be a few milliseconds. However, when the application is deployed over an MPLS WAN with 100ms round trip latency, the same function would require 10,000ms to execute. If several of these actions need to be performed in sequence, then we see a poorly performing application.
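The arithmetic is trivial, but scripting it makes it easy to test different assumptions before a deployment. The round-trip counts and latencies below are the hypothetical figures from the paragraph above, not measurements.

```python
# Network wait time for a "chatty" operation: round trips * round-trip latency.
# Server and client processing time are ignored for simplicity.
def network_wait_ms(round_trips: int, rtt_ms: float) -> float:
    return round_trips * rtt_ms

ROUND_TRIPS = 100  # database queries issued sequentially by the application

for label, rtt_ms in (("development LAN", 2), ("MPLS WAN", 100)):
    total = network_wait_ms(ROUND_TRIPS, rtt_ms)
    print(f"{label:16s}: {ROUND_TRIPS} round trips x {rtt_ms:>3} ms RTT = {total:>6,.0f} ms")
```

The same operation that finishes in a couple hundred milliseconds in the lab takes ten seconds over the WAN, which is exactly the kind of gap users report as “the application is slow.”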

A streaming application may also experience a big performance impact just from the increase in round trip time where there is some small amount of packet loss, perhaps due to a congested WAN link. So understanding how the network operates and its impact on the application is important.

Detecting Network Problems

There are several ways to identify network problems. Legacy tools could be used to identify interfaces that have high errors, but do they identify the interfaces that are truly impacting applications? By looking at application performance itself, it is possible to identify the paths and interfaces that are having the greatest impact on an end-user’s experience.

That’s where APM technologies with application dependency mapping and network analytics become important. They allow you to identify the interfaces that are having problems that affect the applications. They can automate the diagnosis of user-level problems because they understand the transport protocols and the impact of error loss, congestion, latency, and other factors on user transactions. Some systems can even show how a change to any of these parameters will impact specific transactions. For example, you’re planning to deploy an application to the remote offices across the US from the east coast data center. Run a test case with high latency and see what kind of application response your customers will experience.

In my next post, I’ll discuss non-network sources of application performance problems.

Thanks to OPNET for this article

Eliminating Blind Spots and Enhancing Monitoring in the Presence of MPLS

Multiprotocol Label Switching (MPLS) operates at a layer that is generally considered to lie between traditional definitions of layer 2 (the data link layer) and layer 3 (the network layer), and thus is often referred to as a “layer 2.5” protocol. But the benefits of MPLS come with a price: reduced visibility for monitoring and security tools that were not designed to handle “layer 2.5” protocols.

Many network monitoring, analysis, and security tools either cannot handle MPLS traffic or have limitations when working with it. These tools were designed without anticipating the increasing adoption of MPLS in large organizations’ networks. As a result, the presence of MPLS headers in the packet stream can restrict or even defeat the ability of monitoring and security tools to perform the required filtering and load-balancing tasks.
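To see why a “layer 2.5” shim breaks naive tools, consider where the IP header actually sits in an MPLS-labeled frame. The sketch below walks the label stack to find the true IP offset; the frame bytes are synthetic and the parser is deliberately minimal, so treat it as an illustration rather than a production decoder.

```python
# Why tools that assume "IP starts right after the Ethernet header" go blind on MPLS:
# with EtherType 0x8847, one or more 4-byte MPLS labels sit between Ethernet and IP.
ETH_LEN = 14
ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_MPLS = 0x8847

def ip_header_offset(frame: bytes) -> int:
    """Return the byte offset of the IP header, walking any MPLS label stack."""
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype == ETHERTYPE_IPV4:
        return ETH_LEN
    if ethertype == ETHERTYPE_MPLS:
        offset = ETH_LEN
        while True:
            label = frame[offset:offset + 4]
            offset += 4
            if label[2] & 0x01:        # bottom-of-stack bit set: IP follows this label
                return offset
    return -1                          # anything else is out of scope for this sketch

# Synthetic frame: zeroed MACs, MPLS EtherType, one label with the S-bit set, then IPv4 bytes.
frame = bytes(12) + (0x8847).to_bytes(2, "big") + bytes([0x00, 0x01, 0x01, 0x40]) + bytes([0x45]) + bytes(19)
print(ip_header_offset(frame))   # -> 18, not the 14 a naive tool would assume
```

A tool that filters on source and destination IP addresses at a fixed offset of 14 bytes will read label data instead, which is exactly the blind spot described above.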

Download our latest technical resource to learn how to overcome MPLS limitations.

Total Network Visibility – No Blind Spots

Net Optics: The New Intelligent Network

This session was presented by Bob Shaw at Cisco Live 2012, where he talks about what needs to change in the network and how quickly we need to change it.

Network Security

Even with all the latest, cutting-edge technology we have in the network today, in the 15 minutes it will take you to watch this video, 10,000 customer records will be stolen. Of those, 9,500 thefts will not be detected by the organization itself; the data loss will instead be discovered by a third party.

So if we are purchasing the right solutions and keeping them updated, why are we still having network security and performance issues?

Network Visibility – The Missing Ingredient

We need to design the network to be proactive, not reactive, and to do this you need visibility across the entire breadth and depth of the network.

Application Monitoring

This is a growing area, as this is the point at which the user intersects with network-based applications and where new revenue streams are being created. So how do you know the experience is the best it can be for your customers, while making sure that the network is safe, secure, and operating at the speed that it should? You need to ensure you have the tools and the visibility across each of these points in the network.

Low Latency Exchange Networks

One millisecond can cost millions of dollars. If there is a need for an audit, you need to show what happened with that trade, so you need complete visibility into the network to complete the audit.

Cloud Networks

How do you access and maintain visibility as more data moves to the cloud?

In 2011 the industry shipped more virtual servers than physical servers. As you move data into the virtual world and face an audit, are you clear where all your blind spots are within the virtual network? As traffic moves to virtual servers, you lose network visibility into inter-VM traffic; are you sure you can pull all the data from the virtual machines and see that traffic?

A product called Phantom loads on the ESX server at the kernel level and gives you the ability to take that virtual traffic and pull it back to the physical tools you have already invested in. So now you can use physical security and performance tools to look at what is happening in the virtual network.

Remote Branches

How do you see what is happening in thousands of remote branch sites?

The problem at a remote site is that if an issue cannot be solved over the phone, resources must be dispatched to the site. This delays getting the problem solved, so the customer experience suffers while staff travel to the site.

A network tap with monitoring software called appTap can sit at the remote site, so from a single pane of glass you can see the remote sites and the virtualized world.

Network Visibility Must Have Relevancy

A lot of information is coming at your monitoring tools today, and network speeds will always outpace the ability of the tools to monitor them. Network speeds are increasing from Gigabit to 10 Gig, 40 Gig, 80 Gig, and then 100 Gig. Purchasing a monitoring tool that can capture at these speeds is very expensive.

One solution is a tool that can sit inline with 10, 40, 80, or 100 Gig inputs and intelligently distribute the traffic to multiple tools without the monitoring tools becoming oversubscribed.

So to meet these demands we need total visibility within the network, and the tools need to be intelligent in understanding the traffic, so that you have no blind spots and total visibility into the network.

Xplornet celebrates second ‘4G’ satellite launch

Canadian rural broadband provider Xplornet has reported the successful launch of the EchoStar XVII satellite, the second orbiter to be launched this year on which the wireless ISP has purchased 100% of the Canadian capacity. The additional capacity will come on line in the fourth quarter for Xplornet to augment its ‘4G’ satellite/WiMAX broadband services, in particular expanding its high speed footprint across British Columbia, Manitoba, New Brunswick and Newfoundland. The next-generation satellite, launched from French Guiana, is capable of providing internet access to rural and remote subscribers with download speeds up to 25Mbps.

Thanks to TeleGeography for this Article

OpenFlow: The Next Big Network Idea?

Although OpenFlow is similar to NetFlow in name, that’s where the similarities end. OpenFlow is an open, programmable protocol used to implement the concept of software-defined networking.

Just as the cloud abstracts storage and virtualization separates apps from servers, software-defined networking tries to decouple packet-routing intelligence from the communication infrastructure. As a recent NetworkWorld article explained, “OpenFlow [is] a programmable network protocol designed to manage and direct traffic among switches from various vendors, which separates the networking data plane and hardware from the controller which tells the hardware where to forward which packets.”

What are the potential benefits of shifting to software-defined networking? Reduced costs and improved routing efficiency. Current top-of-the-line switches are expensive due to their processing power and intelligence. The thought behind OpenFlow is that in removing the logic, components, and structure from infrastructure by placing the controller elsewhere, the infrastructure can be commoditized, leading to significant cost reductions. The controller maintains a view of the entire infrastructure and makes routing decisions based on the link quality, rather than the current localized views of routing tables which typically make decisions based upon more rudimentary variables such as the number of hops in a path.
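Conceptually, a controller holds the network-wide view and pushes match/action flow rules down to comparatively simple switches. The sketch below models that split with generic Python classes; the field names, rule, and ports are illustrative and do not correspond to any real controller’s API.

```python
# Toy model of software-defined networking: centralized logic installs
# match/action flow rules into forwarding devices that simply apply them.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict          # e.g. {"dst_ip": "10.0.0.5", "tcp_dst": 443}
    action: str          # e.g. "output:port3"
    priority: int = 100

@dataclass
class Switch:
    name: str
    rules: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)   # highest priority checked first

    def forward(self, packet: dict) -> str:
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "send-to-controller"                  # unknown flows get punted upstream

controller_decision = FlowRule(match={"dst_ip": "10.0.0.5", "tcp_dst": 443}, action="output:port3")
edge = Switch("edge-1")
edge.install(controller_decision)
print(edge.forward({"dst_ip": "10.0.0.5", "tcp_dst": 443}))   # -> output:port3
print(edge.forward({"dst_ip": "192.0.2.9", "tcp_dst": 80}))   # -> send-to-controller
```

The cost argument above follows directly from this split: the switch only needs to apply rules quickly, while the expensive path-selection logic lives in the controller.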

How will this change monitoring? While OpenFlow won’t impact a network manager’s life today, it’s definitely something to keep in mind. It’s something that we’re going to see more of first in large-scale environments, and in the coming years within average network environments. With the centralized perspective the controller offers, it will make monitoring simpler, potentially reducing the need for flow technologies by providing similar reporting and summarization. But it won’t provide views of packets. Your skills in analyzing packets will always be essential for troubleshooting and maintaining performance.

Thanks to Network Instruments for this article

 

Shaw’s nine-month cable revenues climb 3%

Shaw Communications, Canada’s second-largest cable TV operator by subscribers, has announced its financial and operating results for the three and nine months ended 31 May 2012. Consolidated revenue, including triple-play video, voice and internet cable operations and its Shaw Direct satellite TV division, was flat year-on-year in the three-month period at CAD1.28 billion (USD1.26 billion), while nine-month sales rose 6% to CAD3.79 billion. Quarterly total operating income before amortisation of CAD567 million declined 3% over the year-ago period, while the year-to-date figure CAD1.63 billion improved by almost 4%. Cable division revenue increased by 1% and 3% respectively to CAD794 million and CAD2.39 billion in the three- and nine-month periods, while cable operating income before amortisation for the quarter and year-to-date periods of CAD377 million and CAD1.11 billion was down 4% and 1%, respectively.

At the end of May 2012 Shaw’s basic cable customers stood at 2.236 million, down by 54,000 in the first nine months of its financial year ending August. Cable broadband internet customers increased by 29,000 in the same period to 1.906 million, while cable telephony subscribers grew by 107,000 to 1.340 million. Satellite TV customers were virtually flat at 909,000.

Thanks to TeleGeography for this article

 

Ensuring Visibility with Advanced Flow Technologies

In shifting to cloud and virtual environments, network teams are often faced with reduced visibility. Recent developments in flow-based technologies offer a way to minimize these obstacles. In this article we’ll look at the evolution of NetFlow and how to leverage advanced flow technologies like IPFIX, Flexible NetFlow, AppFlow, and Medianet Performance Monitor alongside packet-based analysis for complete visibility into your ever-changing network environment.

Understanding NetFlow

NetFlow is a reporting technology created by Cisco to provide insight into traffic metrics from Cisco devices, primarily Layer 3 routers and switches. These devices move content through the IT environment and push out traffic-related NetFlow records, allowing you to identify:
• Content type
• Content volumes
• Who/what is moving or requesting content

Flow-based technologies maintain conversation records occurring across the interface, which are then periodically exported to third-party monitoring and reporting solutions. The use of flow technologies has expanded from network monitoring to a variety of applications, including security.
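To make the idea of a conversation record concrete, here is a minimal sketch of the kind of fields an exported flow carries and how a collector might total traffic per application port. The field set and sample values are illustrative and do not reproduce an actual NetFlow export format.

```python
# A flow record summarizes one conversation seen on an interface: who talked to whom,
# over which ports and protocol, and how much data moved. Sample values are made up.
from collections import Counter
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    packets: int
    bytes: int

records = [
    FlowRecord("10.1.1.20", "10.2.2.5", 51544, 443, "TCP", 1200, 1_450_000),
    FlowRecord("10.1.1.21", "10.2.2.5", 49822, 443, "TCP", 800, 960_000),
    FlowRecord("10.1.1.20", "10.2.2.9", 50311, 1521, "TCP", 300, 240_000),
]

# Content volumes by destination port, a rough proxy for "which application".
bytes_by_port = Counter()
for r in records:
    bytes_by_port[r.dst_port] += r.bytes
print(bytes_by_port.most_common())   # e.g. [(443, 2410000), (1521, 240000)]
```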

IPFIX

Internet Protocol Flow Information Export (IPFIX) is very similar to the current version of Cisco’s proprietary NetFlow technology. It offers the same functionality as NetFlow v9 (Flexible NetFlow), but in an open, standards-based format.

Flexible NetFlow

Initially, NetFlow record formats were pre-defined, operating like a fixed spreadsheet. They had rows representing conversations and columns showing statistics about those conversations. Each column contained specific conversation details such as the number of packets, number of bytes, TCP ports, VLAN tags, etc.

In later versions of NetFlow, the concept of templates was introduced, which allowed device vendors to create their own flow records with specific attributes relevant to the device. With Flexible NetFlow, administrators can go into the router and activate reporting on specific metrics of interest, customizing reporting to the needs of their network team.
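One way to picture the shift from fixed to template-based records is shown below: the template simply declares which fields each record carries, and the exporter emits records shaped by that declaration. This is a conceptual Python illustration, not the on-the-wire NetFlow v9/IPFIX encoding, and the extra fields are hypothetical.

```python
# Classic NetFlow behaved like a spreadsheet with hard-coded columns;
# Flexible NetFlow / v9 lets the exporter declare its own columns via a template.
FIXED_COLUMNS = ["src_ip", "dst_ip", "src_port", "dst_port", "packets", "bytes"]

# A hypothetical template adding fields one particular device cares about.
CUSTOM_TEMPLATE = FIXED_COLUMNS + ["vlan_id", "tcp_flags", "application_name"]

def export_record(template, values):
    """Emit a record containing exactly the fields the template declares."""
    return {name: values.get(name) for name in template}

observed = {"src_ip": "10.1.1.20", "dst_ip": "10.2.2.5", "src_port": 51544,
            "dst_port": 443, "packets": 1200, "bytes": 1_450_000,
            "vlan_id": 120, "tcp_flags": "ACK", "application_name": "https"}

print(export_record(FIXED_COLUMNS, observed))      # the legacy, fixed-format record
print(export_record(CUSTOM_TEMPLATE, observed))    # the template-driven record
```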

Citrix AppFlow

AppFlow, an open standard created by Citrix based on elements of IPFIX, takes advantage of flexible templates to provide in-depth information on applications, primarily those traversing Citrix NetScaler, the company’s application delivery controller, within Citrix virtual environments. AppFlow uses the same format as IPFIX but digs deeper into observed content to provide metrics such as HTTP status codes and errors, average request response time, and average request turn time.

Cisco Medianet Performance Monitor

Another example of a technology leveraging Flexible NetFlow is Cisco Medianet Performance Monitor. The Cisco-created agent runs inside the router or switch IOS and calculates jitter, packet loss, and MOS from calls and video sessions that it sees traversing the Medianet environment.

Going With the Flow

When migrating to cloud environments or setting up remote offices, it might be physically impossible or cost prohibitive to implement packet analysis. In these cases, flow-based technologies offer a cost-effective means to extend visibility into these environments. You may not have access to packet analysis in cloud-based environments, but you might have a Citrix NetScaler using AppFlow or a VM with a Cisco Nexus 1000V reporting NetFlow records. In remote locations it may be cost prohibitive to deploy substantial packet analysis resources, but you can tap into NetFlow records from edge routers.

Effective Resolution Requires Packets

While flow-based technologies deliver metrics and counts on network and application activities, packet analysis is essential for immediate and accurate problem resolution. Packet analysis lets you view performance information at a high level and then drill down to packet specifics. When troubleshooting complex application communication issues, only packet analysis provides the specific packet containing the client’s request to an application server and the specific packet carrying the server’s response. Those relationships don’t exist within flow records. Rather than giving you the full context and details of the conversation, flow records provide only a summary, which can make connecting the dots and resolving problems extremely difficult.

Establish the Perfect Visibility Mix

The best strategy is to rely upon a mix of packet analysis and flow technologies as determined by your network environment and budget. Your performance monitoring platform should be able to seamlessly integrate packet analysis side-by-side with flow technologies for clear visibility into all environments and effective resolution. As networks increase in density and complexity, striking a balance between flow and packet monitoring will be essential for ensuring perfect performance.

Thanks to Network Instruments for this Article

Spectracom Timing System Selected for Northrop Grumman Defense Program

SecureSync platform to support the integration of multiple devices and systems using precise time and frequency signals

Rochester, NY USA June 28, 2012 — Spectracom, a business of the Orolia Group, has commenced delivery of their SecureSync® precision timing systems to Northrop Grumman. Northrop Grumman selected the SecureSync platform to support the integration of multiple devices and systems using precise time and frequency signals. Timing and synchronization is a critical interoperability parameter for different systems to operate as one integrated force.

The SecureSync platform is ideal for military systems as it provides a high level of integration and capability in a compact chassis for mobile deployments for theater-wide defense. Offering the reliability, precision, and ease-of-use needed for mission-critical operations, the system fits in a single rack unit and is expandable and upgradeable to accommodate future requirements.

Spectracom General Manager, Bill Glase, comments, “We understand the needs of our customers to manage and monitor precision timing signals that interface multiple devices and systems into one operational unit. Our approach to offer a robust and modular timing system provides an integrated solution at the lowest total cost.”