How Datacenter Consolidation Changes the Monitoring Game

During the planning stages, datacenter consolidation conversations typically focus on quantifying IT gains and resolving any immediate deployment obstacles. Often overlooked in this discussion is how the new ultra-high density environment will be effectively monitored, once all the highly paid IT consultants move on to their next gig.
Here’s a look at the primary paths of datacenter consolidation, and an exploration of the monitoring challenges and considerations for each environment. In working with other IT teams and consultants, you are the voice for ensuring that visibility into applications and a great end-user experience are incorporated into the initial consolidation plan.

Consolidation Options

There are two paths to consolidation: internally hosted or outsourced (think cloud or third-party provider). Let’s look at each from the monitoring perspective.

Internally-Hosted Consolidations

In-house hosting typically involves combining physically distributed computing and storage assets into a concentrated, blade-server-based, virtualized environment. Add a capability such as vMotion, and you have the makings of a monitoring black hole. How so, you may ask?
To begin, think of the classic three-layer architecture: core, distribution, and access. In a consolidated setting, this distributed design physically collapses into a wall of racks. Throw highly virtualized servers into the mix and things get more interesting from a monitoring perspective, because user traffic appears to disappear. In this potentially hidden, virtualized realm, a lot of action occurs at the application level unbeknownst to the engineer. For example, in a virtualized multi-tier app, the web frontend makes supporting calls to databases and other applications, and then generates a service response. However, the only thing seen by an external monitoring device is the response traveling back to the user. Unfortunately, tracking the data only as it enters and leaves the virtual environment makes for poor monitoring.
To solve this riddle, you’ll need to regain all your previous monitoring points at the core, distribution, and access layers. As you stand in front of your gleaming new racks, it’s all still there—the trick is to locate physically where these three logical constructs exist within the mass of devices and determine how best to extract the relevant data for your monitoring tools. Similarly, you’ll need to determine this at the virtualized server level, with the added complexity of multi-tier apps abstracted and running on and between these devices. You will need sufficient information about the application architecture to effectively place instrumentation to quantify application health and status. In the phrasing of a network designer, your legacy north-south (access-distribution-core) flow has a second dimension of east-west flow across virtualized servers.
Visibility is usually achieved via a combination of SPAN ports and TAPs (discrete or virtual). Your unique IT implementation and application tier structure will dictate how this is best achieved for effective instrumentation. For example, you may decide to access network data at the top-of-rack switch or within the core with a TAP. Alternatively, if traffic loads are not excessive, the integrated SPAN capabilities offered by switch vendors may suffice. For views into virtual inter-server traffic, many vendors also offer virtual SPANs or TAPs with their solutions; probes running on an individual VM are yet another means of assessing this status.
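As a concrete sketch, a local SPAN session on a top-of-rack switch might be configured as follows (Cisco IOS-style syntax for illustration; the interface names are hypothetical, and the exact commands vary by vendor and platform):

```
! Hypothetical example: mirror all traffic on an uplink port to the
! port where the monitoring appliance or packet broker is attached.
monitor session 1 source interface TenGigabitEthernet1/0/1 both
monitor session 1 destination interface TenGigabitEthernet1/0/48
```

A dedicated TAP avoids the oversubscription risk a busy SPAN port carries, which is one reason TAPs are often preferred on heavily loaded core links.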
Compounding the monitoring obstacles, the primary inbound and outbound connections of these dense entities are often heavily utilized 10 Gb or 40 Gb links capable of overloading all but the fastest packet-capture appliances. In addition, for larger datacenters, network packet brokers (matrix switches) are often required to realistically manage the many instrumentation points. Lastly, if you’re utilizing vMotion, another consideration is whether to monitor vMotion events via the vCenter console API to maximize application visibility when a service is provisioned to a different physical server.
Fortunately, if you are employing polling technologies as part of your resource monitoring, the option to interrogate network-attached devices remains available after consolidation. If not, this is usually an excellent time to consider adding the functionality for its deep IT infrastructure awareness and ability to cross-correlate with packet-based performance metrics.
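As an illustration of the kind of metric a poller derives (a generic sketch, not tied to any particular product), the function below converts two samples of an interface's ifHCInOctets counter—a standard IF-MIB object retrieved via SNMP—into a link-utilization percentage. The counter values and interval are invented for the example:

```python
def link_utilization_pct(octets_t0, octets_t1, interval_s, link_bps):
    """Percent utilization of a link from two byte-counter samples.

    octets_t0 / octets_t1: ifHCInOctets readings taken interval_s apart.
    link_bps: link speed in bits per second (e.g. 10e9 for a 10 Gb link).
    """
    delta_bits = (octets_t1 - octets_t0) * 8  # octets -> bits
    return 100.0 * delta_bits / (interval_s * link_bps)

# Example: 7.5 GB received over 60 s on a 10 Gb link -> 10% utilization
print(link_utilization_pct(0, 7_500_000_000, 60, 10e9))
```

Utilization figures like this are exactly the sort of infrastructure metric that can then be cross-correlated with packet-based performance data.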

Externally-Hosted Consolidations

Should you choose to outsource your IT infrastructure, your monitoring abilities will be dependent on your provider. If they allow you to install instrumentation—not likely—then everything mentioned above for internal hosting applies. Much more common, however, is to be restricted to a customer portal or an API that provides a view of select operating metrics. The depth and breadth of what is presented is highly dependent on the service provider. For example, Amazon Web Services offers considerable detail through its CloudWatch interface; others may offer more or less. Regardless, it is important to leverage what is available and to augment it with other information, including the SLAs in place with your Internet service vendor. Monitoring the ingress point of application and service conversations as they arrive at your facility is also a best practice, and can be used to validate whatever agreements are in place with the providers.

Conclusion

Datacenter consolidation offers significant value for business and is either already part of your life or probably will be soon. The good news is by performing careful due diligence, your monitoring capabilities can remain robust and your application and service delivery levels exceptional. Keep network service in sight and in mind.
Thanks to Network Instruments for the article.

Verizon continues Canadian acquisition hunt

Verizon Communications is continuing to explore potential acquisition opportunities in the Canadian mobile market, and has tabled an initial buyout offer for Wind Mobile of roughly USD700 million whilst also beginning negotiations with Mobilicity’s owners, according to two sources familiar with the situation quoted by the Globe & Mail. Last week, Verizon’s chief financial officer Fran Shammo confirmed that the US giant was examining the possibility of entering Canada, although noting that ‘regulatory issues’ could present problems. There is no legal obstacle to Verizon buying one of the smaller Canadian cellcos, although potentially the federal government may stipulate conditions attached to an acquisition of two or more companies competing in the same regions when Industry Canada announces further mobile licence transfer policy details in the coming few days. None of Wind, Mobilicity or Wind’s majority equity/minority voting share owner Orascom Telecom Holding (part of Vimpelcom Group) has commented publicly on talks with Verizon.

Thanks to TeleGeography for the article.

Software-Defined Overload – Are you Ready?

If you haven’t already heard, there’s a new IT in town. Cloud, Mobility, Big Data, and Security are shaping the way you plan, deploy, and share data in the new-age world. Software-Defined Networks, Software-Defined Storage, Software-Defined Servers, Software-Defined Security and Software-Defined Data Centers have simplified the automation of compute, giving you scalability and interoperability at the click of a mouse, faster than ever before. However, this simplicity brings infrastructure sprawl: software-defined ‘solutions’ make it easier to deploy more servers and applications, which in turn means more copies of sensitive information and a larger attack surface. Breaches of data security may lead to violations of compliance and privacy rules, resulting in potentially multimillion-dollar fines and severe damage to a well-established brand.

Don’t get me wrong though: next-generation SDDCs are perfect for cloud environments, as they provide platforms to virtualize networks, storage, and servers. Implementations reduce overall CapEx/OpEx while maximizing automated workload provisioning, application security, and system resource pooling. The bottom line is to realize the importance of tracking and being able to see exactly what type of data is traversing your network, where it is going, who is looking at it, and what the end results are when that data finally reaches its endpoint.
At Net Optics, we have seen this vision before the software-defined ‘era’ became what it is today.  Our Phantom Virtualization Tap and Network Packet Broker appliances have been enabling customers to gain full visibility into their hybrid datacenters as they prepare and ready themselves for this wave.  Are you ready for the rush?
Thanks to Net Optics for the article.

Mobilicity recapitalisation vote moved to 3 July

Struggling Canadian cellco Mobilicity has delayed until 3 July the date on which its debtholders will vote on a proposed recapitalisation plan, replacing a previous date of 25 June, to ‘allow stakeholders the opportunity to review Industry Canada guidelines, regarding the transfer of wireless spectrum, prior to the vote.’ Earlier in the month larger rival Telus was blocked from acquiring Mobilicity under a federal policy not to allow the transfer of recent market entrants’ wireless spectrum to national incumbents.

Thanks to TeleGeography for the article.

Secure cloud-to-cloud migration essential to poaching cloud customers

The cloud services market is steadily maturing and becoming increasingly competitive, but customer lock-in is still common. To take customers from competitors, cloud providers must offer secure cloud-to-cloud migration.

Providers that lead with security and transparency features may not only differentiate themselves in the crowded market, but could also poach customers that are unhappy with their current provider.

Cloud-to-cloud migration: Security fears lock in customers

Enterprises often develop a comprehensive cloud migration strategy when they first do business with cloud providers, but they typically lack a plan for leaving that provider if they are unhappy with the service, said Ed Moyle, founding partner of the analyst firm Security Curve.

“In practice, this means there’s a challenge associated with moving services out of a given cloud environment … or [moving to] another provider,” he said.

Providers who engage with unhappy customers of the competition should lead with security — a concern for every cloud customer — and then move into more differentiated offerings that will target a specific segment of customers, said Geoff Webb, director of solution strategy for NetIQ, a Houston-based provider of Disaster Recovery as a Service, security and workload management software for enterprises and cloud service providers.

Securing data at rest is different from securing data in transit to a provider’s environment or between cloud providers. Data loss is also a concern while data is on the move.

When switching to a new provider or adopting a multi-cloud strategy, customers are worried about where their data might end up, Webb said. “While a service-level agreement [SLA] is great, more visibility can unlock the potential for adoption of a [provider’s] services if the provider can address those concerns early.”

NetIQ offers software-based security add-on services to help providers differentiate their Infrastructure as a Service offerings. “NetIQ software gives providers the ability to be very specific about where customer data is being held … and makes it very easy for providers to onboard and ramp-up their customers,” said Mike Robinson, senior solution marketing manager for NetIQ.

“The way providers can really compete in the cloud market now is by letting customers move into the cloud in the way they want to,” NetIQ’s Robinson said.

Cloud-to-cloud migration: Give customers the freedom to roam

Secure cloud-to-cloud migration is a good appetizer, but customers want a comprehensive security strategy, too, said Mike Chapple, senior director of enterprise support services at the University of Notre Dame in South Bend, Ind.

“Securing data moving between providers is important, but it’s only a short period of time,” he said. “Users want to know that the cloud provider they are switching to is secure and compliant throughout the entire amount of time they will be maintaining their data or running the [provider’s service].”

Even though data exchange to a new cloud provider may not necessarily be any more risky than a customer’s initial move to the cloud, secure cloud-to-cloud migration is still a chance for a provider to differentiate.

The provider should also emphasize open standards so customers can limit the amount of re-engineering they have to do to their applications after a cloud-to-cloud migration. These open standards will make it easier for customers to shop around, so providers will have to stay vigilant, Security Curve’s Moyle said.

Thanks to Tech Target for the article.

Net Optics Streamlines 40G Migration and SDN Integration with New xStream 40 Solution and Advances AA-NPM with New Version of Spyke

At Interop, Net Optics to demonstrate xStream 40™ capabilities for 40G and 10G networks, as well as Spyke™ AA-NPM support for network security and availability
Net Optics, Inc., the leading provider of Total Application and Network Visibility solutions, announces introduction of the xStream 40 Network Packet Broker (NPB)—delivering exceptional performance and significant new capabilities. Plus, Net Optics announces additions to the Spyke Application-Aware Network Performance Monitoring (AA-NPM) platform. Attendees at Interop Las Vegas can view demonstrations in booth #1559.
xStream 40™ provides extensive NPB capabilities for 40G networks, including advanced filtering, aggregation, load balancing, time stamping and more. The scalable, high-density platform is being added to the Net Optics portfolio, delivering a 30 percent latency improvement over the company’s previous ultra-low-latency record. xStream 40 also provides 20 percent more filters and 2.6X higher port density in the same form factor. This boosts productivity, cost savings and convenience, offering cost-effective ways to accelerate and manage monitoring traffic. xStream 40 simplifies large-scale network management and network monitoring tasks, while sharing growing traffic loads among multiple tools.
“xStream 40 is loaded with features that will make customer migration to 40G networks easier. It is the most cost-effective method to perform 40G monitoring with existing 10G tools—protecting existing investment while maintaining security and performance,” says Sharon Besser, Net Optics VP of Technology.
Additional features include:
Delivery of up to 64 wire-speed 1 GbE and 10 GbE ports, or 48 wire-speed 1 GbE and 10 GbE ports plus four 40 GbE ports
Improved hardware, including support for the IEEE 1588 Precision Time Protocol and time stamping on ingress
Time stamping of packets on the fly with nanosecond accuracy and packet manipulation options on all ports at line rate
Integrated capabilities for streamlining SDN migration supporting multiple management and integration protocols
Ability to leverage the supersized 1.28Tbps backplane to process more traffic at faster speeds while maximizing network performance and security
Spyke 1.5 delivers AA-NPM supporting 10G networks and is designed to monitor the applications and networks of companies from small branch sites to 10G enterprise-wide networks. New support for 10G traffic monitoring allows enterprise customers to deploy a plug-and-play, cost-effective solution to complement current monitoring infrastructure.
AA-NPM demands deep diving into packet payloads to identify the underlying application and extract relevant performance indicators. Spyke 1.5’s DPI engine adds coverage for hundreds of applications, letting customers examine and extract application meta-data for fast troubleshooting and problem resolution.
“Most monitoring tools see packet headers and provide flow information only,” says Dave Britt, Net Optics Director of APM Technologies, “whereas DPI is like opening a letter to see the content.”
Spyke provides meaningful metadata, identifying not only how well applications are performing but also what users are actually doing. So a customer can easily find “bandwidth hogs” and also extract Google search strings that users are entering, for example. Key enhancements let customers quickly understand which protocols and services are running in their networks.
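As a toy illustration of that difference (not Net Optics’ implementation), flow records expose only addresses, ports and byte counts, while a DPI engine parses the payload itself. The sketch below pulls the Host header and a search term out of a captured HTTP request payload; the packet bytes are invented for the example:

```python
from urllib.parse import parse_qs

def inspect_http_payload(payload: bytes):
    """Extract the Host header and any 'q' search term from a raw HTTP
    request payload -- the kind of metadata DPI surfaces that flow
    records (IPs, ports, byte counts) cannot."""
    head = payload.split(b"\r\n\r\n", 1)[0].decode("ascii", "replace")
    lines = head.split("\r\n")
    request_line = lines[0]  # e.g. "GET /search?q=... HTTP/1.1"
    host = next((l.split(":", 1)[1].strip()
                 for l in lines[1:] if l.lower().startswith("host:")), None)
    path = request_line.split(" ")[1]
    query = parse_qs(path.partition("?")[2])
    return host, query.get("q", [None])[0]

# A hypothetical captured request (payload only)
pkt = b"GET /search?q=packet+broker HTTP/1.1\r\nHost: www.google.com\r\n\r\n"
print(inspect_http_payload(pkt))  # ('www.google.com', 'packet broker')
```

A production DPI engine does this across hundreds of protocols at line rate, but the principle—opening the letter rather than reading the envelope—is the same.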
The first units of xStream 40 and Spyke 1.5 are expected to ship in early Q3.
Thanks to Net Optics for the article.

Vimpelcom abandons Wind control plan

Egypt-based Orascom Telecom Holding (OTH), a majority owned subsidiary of Russian-backed, Amsterdam headquartered Vimpelcom Group, issued a statement yesterday withdrawing its previous request to the Canadian government to acquire full control of its part-owned subsidiary in the country, Wind Mobile. The statement reads: ‘Further to its prior announcements on the proposed acquisition of control of Wind Mobile Canada, OTH announces that, after a review process and discussions with the government of Canada, it has decided to withdraw its application for Investment Canada Act approval of its acquisition of control of Wind Mobile. OTH continues to be interested in consolidating its interest in Wind Mobile and in working with the government.’

No further explanation for the move was offered by Vimpelcom, although it has been claimed by commentators that ‘national security concerns’ are behind the Canadian government’s delay in approving the full takeover of Wind, which was proposed in January. Negative factors could, it has been claimed, include the Russian majority ownership of Vimpelcom, and the Chinese-built technology Wind’s network is largely based on, which has recently come under scrutiny for national security reasons regarding other networks in North America. Wind’s CEO Anthony Lacavera reacted to yesterday’s announcement by confirming that he will continue to control two-thirds of the cellco’s voting shares, and added that: ‘Despite this speed bump, I’m going to continue to work with Vimpelcom toward our mutual objectives… It doesn’t change my long-term commitment to competition in the market.’ Quoted by Bloomberg in an interview, Lacavera also commented on the ‘national security’ rumours, saying that ‘Vimpelcom made statements about there never having been a [network security] breach… and I would echo those statements. Cybersecurity threats are by far one of the biggest threats facing our nation. This is an ongoing, iterative process.’ Meanwhile, Industry Minister Christian Paradis – responsible for federal telecoms policy decisions – simply confirmed in a statement that OTH’s application had been withdrawn and that the review process was finished, without elaborating on reasons.

Thanks to TeleGeography for the article.