When to Dedupe Packets: Trending vs. Troubleshooting

When it comes to points of network visibility, common knowledge dictates that more is always better. And, when we’re talking about troubleshooting and identifying the point of delay, increased visibility is important. But multiple points of visibility can also lead to duplicate packets. There are times when duplicate packets help in troubleshooting, but they can also lead to incorrect analysis and forecasting. It’s critical for successful network management to understand when duplicate packets can be your friend or foe.

To determine whether you need duplicate packets, you need to understand what type of analysis is being done: network trending or troubleshooting.

How duplicates impact network analysis

Trending: Duplicate packets result in data being counted multiple times, leading to skewed trending statistics such as application performance and utilization metrics. More time is also required to process and analyze the data.

Troubleshooting: When correlating packets traversing multiple segments, for example via MultiHop or Server-to-Server analysis, capturing these duplicates is critical in order to pinpoint where traffic was dropped or where excessive delay was experienced.

Typically, as an engineer, you want to have access to duplicate packets when necessary for troubleshooting, but you do not want those duplicate packets to skew network traffic summary statistics. So, how do you design a monitoring infrastructure that gives you the flexibility to quickly troubleshoot while ensuring accurate trending?

1) Utilize two appliances when capturing and analyzing traffic.

The first should be a probe appliance, such as the Gigabit or 10 Gb Probe appliances, dedicated specifically to trending. The second should be a retrospective network analysis solution, such as the GigaStor, devoted to capturing all of the traffic, including duplicate packets. When a problem is discovered in the trending data, you then have access to all of the packets for troubleshooting.

2) Develop a monitoring strategy that minimizes duplicates for trending.

The advantage of avoiding duplicate packets by design is that it reduces the processing that your hardware will have to perform to remove duplicates.

  1. Be selective when choosing monitoring points.

Identify the aggregation points on your network, such as when traffic enters the core or a server farm, where the traffic naturally collapses from multiple links and devices into a single primary link. This gives you maximum visibility from a single vantage point when looking at performance or trending statistics.

  2. Don’t get too carried away with SPANs or mirror ports.

Monitoring device-to-device traffic on the same switch can be tricky and will produce a lot of duplicate packets if you are not mindful of how the data flows. Identify the key paths that the data takes, such as communications between a front-end server and a back-end server connected to the same switch.

If you monitor all the traffic to and from both devices, you will end up with duplicate traffic. In that case, mirror the traffic to and from the front-end server only. This captures both the client-to-front-end conversations and the front-end-to-back-end conversations without duplicating either.

Additionally, if you SPAN a VLAN or multiple ports, this can also cause duplicates. Spanning uplink ports or using TAPs is very useful when monitoring communication between devices that are connected to different switches.

3) When capturing packets for trending, remove duplicates via hardware.

If you’re using a network monitoring switch (or network packet broker), like Matrix, verify that it performs packet deduplication. This is important if you are aggregating multiple links, which throws all of the traffic, including duplicates, into a single stream before feeding the data to the analysis device. Additionally, if you have GigaStor, you can also utilize the Gen2 capture card to perform deduplication.
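For illustration, here is a minimal sketch in Python of the windowed-hash approach that deduplicating hardware broadly relies on. It assumes packets arrive as raw bytes with capture timestamps; the header offset and window length are illustrative, not the Matrix or Gen2 implementation.

```python
import hashlib
from collections import OrderedDict

WINDOW_SECONDS = 0.05  # duplicates from multiple SPANs normally arrive within milliseconds

class PacketDeduplicator:
    """Drop packets whose contents were already seen inside a short time window."""

    def __init__(self, window=WINDOW_SECONDS):
        self.window = window
        self.seen = OrderedDict()  # digest -> timestamp of first sighting

    def _digest(self, packet_bytes):
        # Hash the packet past the 14-byte Ethernet header so the same IP packet
        # captured on two segments (with different MAC addresses) still matches.
        return hashlib.sha1(packet_bytes[14:]).digest()

    def is_duplicate(self, packet_bytes, timestamp):
        # Expire entries older than the dedup window.
        while self.seen:
            digest, first_seen = next(iter(self.seen.items()))
            if timestamp - first_seen > self.window:
                self.seen.popitem(last=False)
            else:
                break
        key = self._digest(packet_bytes)
        if key in self.seen:
            return True   # duplicate: drop it before it skews trending statistics
        self.seen[key] = timestamp
        return False
```

A capture pipeline feeding a trending appliance would call is_duplicate() on each frame and forward only the packets for which it returns False.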

By being aware of the impact of duplicates on monitoring and implementing a strategy of dedicated hardware for trending and troubleshooting, you can guarantee forecasting and monitoring accuracy while ensuring granular and speedy troubleshooting.

Thanks to Network Instruments for the article.

UC, Enterprise Collaboration Markets Predicted to Keep Growing Fast

Unified communications is on the move.

The unified communications (UC) segment is forecast to grow from $22.8 billion in 2011 to $61.9 billion by 2018, according to Transparency Market Research.

UC is viewed by businesses of all sizes as a way to cut costs and improve both productivity and collaboration. It cuts costs by using the latest Internet technologies instead of more costly options such as traditional corporate video conferencing systems, and it boosts productivity by reducing the number of places that employees need to check to stay connected with colleagues. It also enables workers to easily employ the communications medium that best suits the type of information being shared, whether chat, email, call, or video conference.

Further, UC boosts collaboration by giving employees more face-time and a sense of working together in the same office even when they are on the road or working from home. UC—when properly deployed—can reduce the lost collaboration that comes from the mobility revolution, a necessary corrective as the physical office loses importance.

Unsurprisingly, then, the enterprise collaboration market is also projected to see healthy growth over the next few years. According to research group MarketsandMarkets, the enterprise collaboration market is expected to expand from $47.30 billion in 2014 to $70.61 billion by 2019.

The enterprise collaboration market focuses on solutions that drive this crucial UC collaboration among employees as they move out of the office and more into the field.

A third segment that will benefit from the growth of both the UC and enterprise collaboration markets is UC-as-a-service (UCaaS). Combining two of the biggest trends in IT, UC and the cloud, UCaaS is generating substantial interest among enterprise customers.

It is not hard to see why, too; one of the challenges that still plagues the UC market is its relative deployment complexity. While the promise of UC is simplifying communications, as a new technology many firms are still struggling to select the right vendors and put together a solution that works.

By using UCaaS, businesses can eliminate the setup difficulties by outsourcing it to a UCaaS provider that specializes in UC and offers it as a service instead. This offloads the complexity to the provider, making adoption turnkey for businesses.

Thanks to Enterprise Communications for the article.

JDSU’s Network Instruments Named a Leader in Gartner Magic Quadrant

JDSU’s Network Instruments business unit has been positioned as a Leader in the new Gartner Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD). In the Gartner, Inc. report, Network Instruments is recognized as a network performance management leader in its category for completeness of vision and ability to execute. The Gartner Magic Quadrant is considered one of the tech industry’s most influential evaluations of enterprise network solutions.

Gartner created the new NPMD-category Magic Quadrant as an emerging solution segment responding to the added network complexities driven by the impact of virtualization and the cloud on organizations. These developments have required companies, and their network managers, to more proactively and strategically manage and optimize their network performance for a better end-user experience. As the report notes, “a fast-growing subsegment, the NPMD tool market is currently estimated by Gartner to be approximately $1 billion in size.”

“Although NPMD technologies are most often used for forensic purposes once an issue has occurred, increasingly they are also used to monitor and detect performance issues,” wrote Gartner analysts Vivek Bhalla, Jonah Kowall and Colin Fletcher in, Criteria for the New Magic Quadrant for Network Performance Monitoring and Diagnostics, July 2013. “Additionally, the need for network professionals to support demanding applications (such as voice, video, collaboration and unified communications, which are particularly sensitive to latency and quality of service issues) has driven demand in capabilities of the offerings. These tools are essential when troubleshooting and monitoring quality of service.”

“We believe being named a leader by Gartner in its NPMD Magic Quadrant on the heels of our acquisition by JDSU in January, has truly underscored 2014 as a breakout year for Network Instruments,” said Douglas Smith, newly appointed vice president and general manager of JDSU Performance Management in the Network and Service Enablement business unit at JDSU. “With this momentum, we will maintain our singular focus on the customer and continue to deliver deep and rich performance management products with outstanding customer responsiveness.”

Network Instruments’ current NPMD solution comprises its Network Instruments Observer (v16), Observer Reporting Server (v16), GigaStor (v16) and Observer Infrastructure (v4) products.

Thanks to Network Instruments for the article.

How Ixia Is Making Network Blind Spots Visible

Between data centres, clouds and physical servers, enterprises are seeing rapidly growing complex networks. With this ever-expanding infrastructure comes the issue of visibility as data slips into blind spots when the network grows, which can be a stumbling point for businesses.

Roark Pollock, Ixia VP of Marketing, Network Visibility Solutions, spoke to CBR about how Ixia is battling the blind spots with its new Visibility Architecture solution.

Visibility Architecture is being launched to ‘help organisations take back control of their data centre’ – have they lost control?

It’s not so much about losing control, but being able to see what’s going on. I think there’s a possibility that companies have blind spots in their network and so whether you call it loss of control or of visibility, at the end of the day you can’t manage what you can’t see. So if you don’t have any visibility into those blind spots in the network or the data centre it becomes very difficult to manage and worst of all to troubleshoot.

How does Visibility Architecture help to eliminate those blind spots?

What we’re trying to do is give customers the ability to harvest data from every point in the network between the applications that sit in the data centre and the users that sit on the far edge of the network. It doesn’t matter if they’re trying to tap into that data on the physical network itself, or in the virtualised part of the data centre, which has historically been a very large blind spot for many customers. We have the ability to provide them with access solutions that give them access to data in a virtualised data centre environment. There is also the ability to maintain and deploy security tools in a resilient fashion across the network.

When businesses move their data to the cloud, can these create more blind spots if information is not properly managed?

Yes, it can. We provide the ability for customers to maintain visibility into a private cloud environment: we can provide them with our virtual visibility framework to maintain visibility into those private cloud environments. But this has historically been one of the bigger issues for companies to address from a performance management standpoint.

Alternatively, does the legacy of data held within physical internal servers prove to be a problem?

It can be, but the great thing about physical servers is that they have to connect to something. The data that sits on those physical servers, when it is used by an application, is delivered to an end user. As soon as that data starts to flow, it hits the physical network, which you can see into very easily with the Visibility Architecture. You can see those packets as they traverse the physical network from when they leave the server, across the core, to the edge of the network where the application and the data are delivered to the end user.

Some organisations are managing their data using several different solutions, often through different services across the data centres, cloud and physical servers. Can this be a hindrance?

It absolutely is. The place where customers are having the biggest problem today is they have a number of organisations using a wide variety of different tools. For the tools to work effectively they have to be able to tap into information in different parts of the network. That creates a big demand for access to the network from all these different tools. The other problem is if I have a lot of tools, I have to buy enough capacity and even if I could tap into all these different parts of the network, that’s a tremendous amount of data for tools to absorb.

What we’re trying to do is solve that capacity problem. We’re providing the ability to harvest that end-to-end data off the network and then slice it down very finely to exactly what each one of the specific tools of the organisation needs to manage the network and the applications. So you’re actually making better use of the tool that you’ve invested in.

So is an end-to-end solution the way forward when it comes to managing data?

It’s not so much about managing data, but managing the overall underlying infrastructure and the experience they’re delivering to their users. Once they have that visibility, they can start to deliver on the service level agreements that they are trying to provide to their end users. They can manage the infrastructure better and troubleshoot problems much more quickly because they have access to data that they haven’t had in the past.

Visibility Architecture is our way of actually starting to move beyond just talking about products and presenting to our customer base a set of end-to-end solutions so they can go out there and position the overall Visibility Architecture and not just a set of individual products. It’s more than just a fabric or some connected products, it’s bringing together all of the disparate products to solve their problems.

Thanks to CBR for the article.

Public Mobile shutting down CDMA in May, moving to parent Telus’ network

Small cellco Public Mobile, a subsidiary of nationwide Canadian operator Telus, has announced that it will shut down its CDMA-based network covering Montreal, Toronto and surrounding areas in May this year, and migrate all users to its parent’s GSM/W-CDMA/HSPA/LTE network. In a notice for its subscribers on its website, Public warns users that in May they will need a new phone compatible with the Telus network to continue receiving services. It is offering a discount on the handset purchase for existing customers, whilst all its tariff plans are changing in May.

Thanks to TeleGeography for the article.

3 Technology Monitoring Tips

My new report is live for Forrester clients – Predictions For 2014: Technology Monitoring. Normally I am a bit of a skeptic when it comes to predictions, especially in regards to technology, because while they are interesting to read they can cause confusion and unnecessary deliberation for a buyer/strategist if they are not in context.

So my aim for this report was to provide some concrete advice for I&O professionals in 2014 in regards to their technology monitoring (user experience, applications and infrastructure) strategy or approach.

My top-level advice is that during 2014, I&O has to concentrate on monitoring the business technology that serves external customers. In fact, this is not just a call for I&O professionals but also for the rest of the business, including marketing and eBusiness professionals. Why? Just take a look at the near-weekly media reports on “computer glitches” during 2013. These glitches meant lost revenue and, more seriously, damaged brand image. Technology fuels business, and this means that monitoring has to be a strategic business concern.

So to avoid your company being the next computer glitch headline you should:

1. MAKE SURE THAT YOUR MONITORING SOLUTIONS COVER MOBILE AND WEB FUELED BUSINESS SERVICES

From a mobile perspective, your monitoring solutions should provide holistic insight in regards to mobile devices and applications in terms of availability and performance down to the network/carrier level.

From a web perspective, in-depth web application monitoring down to the code level is a must.

2. ENSURE THAT YOUR MONITORING APPROACH INCLUDES END USER EXPERIENCE MONITORING

Ultimately applications and infrastructure can seemingly be performing well but what really matters is the end user/customer experience.

Many solutions offer both synthetic (simulated user) and real user monitoring. You need both to ensure holistic monitoring here. Real user monitoring can help identify unpredicted customer behavior caused by a configuration update error – such as when Delta Air Lines’ website incorrectly began offering ultra-low fares at the end of last year.
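As a minimal sketch of the synthetic side, the check below simulates a user request against a web endpoint and records availability and response time, using only the Python standard library. The URL and alert threshold are placeholders to be tuned per your SLA.

```python
import time
import urllib.request

URL = "https://www.example.com/checkout"   # placeholder transaction to exercise
SLOW_THRESHOLD_SECONDS = 2.0               # alert threshold; tune per your SLA

def synthetic_check(url=URL):
    """Simulate a user request and report status plus response time."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            status = response.status
    except Exception as exc:
        return {"url": url, "ok": False, "error": str(exc)}
    elapsed = time.monotonic() - start
    return {
        "url": url,
        "ok": status == 200 and elapsed <= SLOW_THRESHOLD_SECONDS,
        "status": status,
        "seconds": round(elapsed, 3),
    }

if __name__ == "__main__":
    print(synthetic_check())   # run from a scheduler at a fixed interval
```

Real user monitoring complements checks like this by capturing timings from actual customer sessions, which is where unexpected behavior such as the Delta fare error surfaces.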

3. REALIZE THAT MONITORING IS NOT ONLY FOR LIVE/PRODUCTION ENVIRONMENTS

2013 was the year in which the word DevOps was etched into our brains in the IT world. Every organization was talking about it and at times it sounded like some mythical savior that could cure all IT suffering.

I understand why DevOps is important and for me it’s an important evolution for enterprise IT. At its core, DevOps is about the need for continuous, rapid delivery of modern applications. This means that development, test and pre-production environments are becoming more fluid and you should be utilizing monitoring solutions earlier in the application development lifecycle. This will help with identifying configuration issues before they hit a live environment and more importantly, before they are experienced and in some cases exploited by your revenue generating customers.

Thanks to APM Digest for the article.

StressTest™ Performance and Load Testing for Voice Communications Solutions

Contact center and communications solutions offer companies a significant opportunity to control costs and improve customer satisfaction if implemented and maintained appropriately. If the solutions are not implemented and maintained properly, the opportunities become risks and eventually customers are negatively impacted. With StressTest™ performance and load testing, you can validate the performance of your end-to-end contact center and communications solutions to assure the best possible experience for your customers and the best possible efficiencies and savings for your company.

With IQ Services’ StressTest™ performance and load testing services, you go beyond component level testing to ensure the integrated components of the solution perform as expected in the production environment. Because you test from the outside-in or customer perspective, you know what your customers will experience when they interact with your solution and you decide when and how to optimize performance. Instead of receiving anecdotal complaints or unsubstantiated claims from an irritated customer, StressTest™ services give you robust, actionable data and the chance to observe, tune and verify performance before you implement a new system, upgrade an existing system or apply a patch/fix.

StressTest™ performance and load testing services can deliver 12 to 10,000+ simultaneous test calls over the PSTN to exercise your contact center and communications solutions just like real customers. IQ Services’ patented technology and methodologies are used to verify step responses, measure response times and capture other actionable data from each StressTest™ test call so you know how your solution performs under various levels of load.
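The test calls themselves are placed over the PSTN by IQ Services’ platform, but the ramp-up logic behind a load test can be sketched in a few lines of Python. The place_test_call() function below is a hypothetical stand-in for whatever drives one scripted, outside-in test interaction; it is not IQ Services’ API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def place_test_call(call_id):
    """Hypothetical stand-in: drive one scripted caller through the solution
    and return (call_id, passed, response_seconds)."""
    time.sleep(0.1)  # placeholder for the real call duration
    return call_id, True, 0.1

def run_load_step(concurrent_calls):
    """Run one load step and summarize the pass rate and worst response time."""
    with ThreadPoolExecutor(max_workers=concurrent_calls) as pool:
        results = list(pool.map(place_test_call, range(concurrent_calls)))
    passed = sum(1 for _, ok, _ in results if ok)
    worst = max(seconds for _, _, seconds in results)
    return passed / concurrent_calls, worst

if __name__ == "__main__":
    for step in (12, 50, 100):   # ramp toward the target load level
        pass_rate, worst = run_load_step(step)
        print(f"{step} simultaneous calls: {pass_rate:.0%} passed, worst {worst:.2f}s")
```

Stepping the load up in stages like this is what lets you observe, tune and verify performance before the full target volume is applied.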

Thanks to IQ Services for the article.

Spectracom Skylight Indoor GPS Timing System for SecureSync

Open a New Window to Accurate Time for Your Network

Even though synchronizing network master clocks and time servers to GPS is well-known as the standard for the most time-sensitive applications, some data centers and critical server locations are not conducive to traditional roof-top GPS antenna installation. Skylight™ provides a solution. Consisting of an indoor GPS antenna panel, a configuration of Spectracom’s SecureSync® modular GPS time server, and an interconnect cable, Skylight opens new possibilities for accurate GPS time.

GPS signals are weak and often impractical in urban environments due to limited visibility of the sky and the constraints of roof-top access and long GPS antenna cable runs. Skylight is a result of Spectracom’s expertise in GPS reception and applications for precision timing. Proprietary algorithms use a combination of high-sensitivity receivers, accurate internal oscillators and other techniques to extract the GPS on-time point to sub-microsecond accuracy, even in areas with somewhat limited GPS reception.

Skylight does not work everywhere. A GPS signal with a carrier-to-noise density (C/N0) of roughly 30 dB-Hz or better at the antenna panel is required. Subterranean and other radio-isolated locations will still need to be synchronized using other techniques (such as over a network via a PTP master-slave combination) rather than by GPS directly. However, exterior walls, or walls across from a window on above-ground floors of a building, are likely candidates for Skylight. Even urban-canyon situations can be considered, as neighboring buildings can often redirect signals to the antenna panel that contain useful timing information for the Skylight system. And the signal does not have to be available 24/7. By using a precise, stable oscillator and our proprietary disciplining algorithms, Skylight needs the GPS signal for only a few hours a day. RF noise is often reduced at night, which can allow Skylight to maintain legally traceable time even if signal acquisition is subpar during the day.

Simple Set-up

The indoor antenna panel can be mounted facing towards or away from the wall it is attached to, or the decorative cover can be removed and the panel placed on top of a server rack or even above ceiling tiles. It then connects via an RG-58 coaxial cable to a Spectracom SecureSync® modular time and frequency reference system up to 200 feet (60 meters) away. The cable length can be extended to 600 feet (185 meters) using an in-line amplifier. The SecureSync is a 1RU NTP time server, PTP grandmaster, and provider of virtually any other network or physical synchronization signal. You can configure the SecureSync at time of order to suit your application for precision timing, including the quality of the internal oscillator and the ability to sync to GLONASS satellites in addition to, or in lieu of, GPS.
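As a rough illustration of how a server on the network might consume time from the SecureSync once it is synchronized, here is a minimal SNTP query written with only the Python standard library. The hostname is a placeholder for your own appliance; production systems would normally run a full NTP or PTP client instead.

```python
import socket
import struct
import time

NTP_SERVER = "securesync.example.internal"  # placeholder for your SecureSync appliance
NTP_EPOCH_OFFSET = 2208988800               # seconds between the NTP (1900) and Unix (1970) epochs

def query_ntp(server=NTP_SERVER, timeout=2.0):
    """Send one SNTP request and return the server's transmit time as a Unix timestamp."""
    packet = b"\x1b" + 47 * b"\0"            # LI=0, version 3, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))   # NTP listens on UDP port 123
        response, _ = sock.recvfrom(48)
    transmit_seconds = struct.unpack("!I", response[40:44])[0]  # transmit timestamp, seconds field
    return transmit_seconds - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    server_time = query_ntp()
    print("offset vs. local clock: %.3f s" % (server_time - time.time()))
```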

Installation Tips

Whether a particular location is suitable for indoor GPS via Skylight depends on several factors. The antenna should be located as close to an exterior wall as possible. It is not necessary for the antenna to be located at a window because in many cases walls attenuate the signal less than the coatings found on modern high efficiency windows. The presence of structures surrounding a building can also affect the availability of signals in a positive or negative way. In general, the higher in the building the antenna is located, the better. Placement near a window can be advantageous if the window is free of coatings and other window coverings or blinds that will block the signal.

Thanks to Spectracom for the article.

Leveraging a Visibility Architecture Really Can Make Your Life Easier!

Ixia recently announced its Visibility Architecture, an end-to-end approach to network, application, and security visibility. For some, the first questions asked will likely be “What is a Visibility Architecture?” and “Can it really help me?” The answer to the second question is a resounding yes!

First of all, what do we even mean by visibility? Visibility is defined by Webster as the “capability of being readily noticed” or “the degree of clearness.” By network or application visibility, for example, we are talking about the ability to readily see or quantify the performance of the network and/or the applications running over the network – usually by inspecting the packets traversing the network. Visibility in the network also reveals security vulnerabilities. This visibility is what enables IT to quickly isolate security threats and resolve performance issues, ultimately ensuring the best possible end user experience. Another way to think about this is that visibility IS what allows IT to control and optimize the network and the applications it services.

This is why network, application and security visibility are absolutely vital for any IT organization to accomplish their job! Without visibility, IT can only operate reactively to problems and will never be truly effective.

A Visibility Architecture is the end-to-end infrastructure which enables physical and virtual network, application, and security visibility. IT could piecemeal a Visibility Architecture together while fighting one fire after another as they occur, and still not have a cohesive strategy. This practice only leads to unnecessary complexity and far higher costs! With visibility so critical to IT’s primary objectives, a more strategic and proactive approach to building the Visibility Architecture makes total sense.

This is where a visibility architecture can help, since it is built for an enterprise’s current needs while also scaling and adapting to future needs. The Ixia Visibility Architecture addresses all three frameworks that make up a strategic end-to-end Visibility Architecture – network, virtual, and inline security visibility – yielding immediate benefits such as the following:

  • Eliminating blind spots
  • Controlling costs while maximizing ROI
  • Simplifying control

Ixia’s Visibility Architecture

First, a full array of access solutions for both physical and virtual deployments can be leveraged so that network operators don’t have to make compromises when it comes to visibility. This removes the bottleneck caused by limited access points (SPAN ports). It is also now possible to bring visibility to your virtual (or east-west) traffic within the virtualized data center without sacrificing the performance of host servers. And providing reliable access to inline security tools can improve security without compromising network integrity. This eliminates the blind spots in your network that are harboring potential application and security issues.

Enterprises can maximize their investment by using powerful network packet brokers (NPB). This gives greater control to network operators and enables the ability to extend the life of existing network, application, and security tools even as you migrate to higher speed 10GE, 40GE, or even 100GE networks. Filtering, aggregation, and load balancing capabilities, for example, ensure the tools get the data they need without being overwhelmed. This helps the tools provide trustworthy and actionable insight. The right NPBs allow you to pay for tool capacity only as your network needs grow, helping to minimize your costs.
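The filtering and load-balancing idea can be sketched in software: hashing a direction-insensitive 5-tuple keeps every packet of a conversation on the same tool, so each analyzer sees coherent flows without absorbing the full aggregate. This is a simplified illustration of the concept, not how any particular NPB is implemented.

```python
import zlib

TOOL_PORTS = ["tool-1", "tool-2", "tool-3"]   # hypothetical analyzer ports

def flow_key(src_ip, dst_ip, src_port, dst_port, protocol):
    """Build a direction-insensitive 5-tuple so both halves of a conversation match."""
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    return (min(a, b), max(a, b), protocol)

def choose_tool(src_ip, dst_ip, src_port, dst_port, protocol, tools=TOOL_PORTS):
    """Hash the flow key and pick a tool port; every packet of a flow lands on the same tool."""
    key = repr(flow_key(src_ip, dst_ip, src_port, dst_port, protocol)).encode()
    return tools[zlib.crc32(key) % len(tools)]

# Both directions of the same TCP session map to the same analyzer.
print(choose_tool("10.0.0.5", "192.168.1.9", 44321, 443, "tcp"))
print(choose_tool("192.168.1.9", "10.0.0.5", 443, 44321, "tcp"))
```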

Visibility does not have to get in the way of your business initiatives. A Visibility Architecture can seamlessly integrate into your existing network management and orchestration systems. It can also extend data center automation or application management. Advanced Visibility Architectures take advantage of the power of automation. For example, your tools can now automatically, without manual intervention, request specific types of traffic when they detect a security issue. And if a tool goes down, load balancing can automatically compensate by sending traffic to the remaining tools until a replacement can be deployed.

So what does a successful Visibility Architecture look like? And what are the components that make up a full end-to-end visibility architecture? Read about it in Ixia’s latest White Paper here.

Thanks to IXIA for the article.

Are You Ready for the Internet of Things?

The Internet of Things is upon us. How can enterprise network administrators brace for impact?

Discussion of the Internet of Things – a world of connected objects, where your refrigerator sends you a text message when you’re low on milk and your basketball generates a quasi-scouting report when you’re done shooting hoops – has so far resonated most in the service provider space. What services can I sell to enable this? What infrastructure will I need to keep everything connected?

But a similar conversation is now beginning in the halls of enterprise IT.

The huge amount of data the Internet of Things promises to generate will impact enterprise IT over the next decade and beyond. While specific needs and objectives will vary by industry, some universal challenges will undoubtedly arise.

In the enterprise, device miniaturization plus plummeting technology costs equals new generations of networkable devices. Assisted living healthcare providers can use data from sensors in a home to remotely and proactively assess a patient’s condition. Perhaps a window or door has been left open for a long time in the middle of winter. Perhaps the refrigerator door hasn’t been opened for over a day. Perhaps the patient hasn’t risen for a prolonged period of time, as reported by a “smart” mattress pad.

All of this data, when collected and analyzed, can predict or identify problems and prompt appropriate and timely responses. And as more functionality becomes embedded in connected devices, enterprises will discover increasing opportunities to enhance the services they provide, improve security and identity management, and simplify billing and payment transactions. But these opportunities aren’t limited only by imagination. Practical matters associated with the networks connecting these devices and delivering this data can impose limitations on the potential of the Internet of Things, too.

So how does the enterprise network need to change to support this onslaught of data? Here’s one major consideration: up to this point, most communications networks have relied on proprietary protocols and interfaces and private networks. You can’t very well expect “things” to connect to each other over networks they can’t access. Here’s where the IP network comes into play. Whether wired or wireless, an Internet Protocol-based network is necessary to provide the scale and ubiquity to seamlessly connect people, devices, and systems.

Additionally, all the traffic these “things” generate now shares the same networks as workplace computers, tablets, and smartphones accessing email, streaming content, and downloading files. The networks themselves must become more intelligent to effectively prioritize data traffic. Configuring the network to deliver the performance required for identifying, treating, and isolating data of value is paramount.
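For example, an IoT application can mark its own traffic so the network can prioritize it. The sketch below sets the DSCP Expedited Forwarding code point on a UDP socket from Python on a POSIX host; the collector hostname is hypothetical, and real QoS still depends on the switches and routers being configured to honor the marking.

```python
import socket

DSCP_EF = 46                 # Expedited Forwarding: typical for latency-sensitive traffic
TOS_VALUE = DSCP_EF << 2     # DSCP occupies the upper six bits of the IP TOS byte

# Hypothetical collector for latency-sensitive sensor readings.
COLLECTOR = ("sensors.example.internal", 5683)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)   # mark outgoing packets
sock.sendto(b'{"sensor": "door-3", "state": "open"}', COLLECTOR)
sock.close()
```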

While sensor networks have existed for some time in various verticals, Ethernet, 3G, LTE, and WiFi are all becoming ubiquitous. Machines and sensors will be connecting with the same technology and on the same networks that enterprises and consumers use today.

To extract maximum value from the Internet of Things, enterprise IT organizations will have to keep pace with new business goals, ensure reliable and predictable network connectivity and performance, and implement the right solutions to refine gathered data into useful information. In practical terms, some of the technical and commercial considerations include:

  • Network Access. Can devices connect to the IoT reliably? Evaluate building sites, wireless network design, and RF engineering wherever possible, covering 3G, 4G, Wi-Fi, etc.
  • Mobile Device Management. How do you effectively support and control mobile and fixed wireless devices connected to your network? BYOD policies and the proliferation of additional connected devices/sensors compound the complexity of this task.
  • Performance Management: Network performance management will be crucial to leveraging the Internet of Things. While the bandwidth consumed by sensors connected to the IoT may be dwarfed by other forms of traffic, such as mobile video, some IoT data is latency-sensitive. Moreover, on a converged wireline/wireless IP network, traffic generated by a variety of applications will compete for network resources. Protecting quality of service for business-critical applications will demand application and network performance management solutions combined with scalable mechanisms to access, identify, and classify devices, users, applications and the network resources consumed (or needed) for optimal performance.
  • Security: The Internet of Things will create an explosion in data, some of it personally identifying or otherwise sensitive. Enterprises will need a comprehensive strategy to protect that data at rest and in motion. In addition, IoT security strategies demand mechanisms to ensure secure access to network resources, combined with policy management strategies that may need to be user-, application-, device-, and network-aware.
  • Data Storage: Those zettabytes of IoT data will need to be stored somewhere. Enterprises must weigh the performance and economic tradeoffs of options like spinning hard disk drives and solid state storage technologies. They must also be able to implement and manage hybrid storage environments to meet the needs of various applications and support both traditional structured data and the growing volumes of unstructured data incoming.
  • Disaster Recovery and Business Continuity. Given the volumes of data created, enterprises will need to implement reliable DR and BC processes, taking into account recovery point objectives (RPO) and recovery time objectives (RTO) per application. This will drive the implementation of high-availability data center architectures using asynchronous and synchronous replication, likely using geographically redundant data centers. All this results in high volumes of delay-sensitive data traversing wide area networks. Enterprises will need to determine the right mix of strategies balancing the performance and costs associated with various WAN approaches. Options will include purchasing managed wavelengths (DWDM), Carrier Ethernet and IP/MPLS VPNs from service providers. In some cases, larger enterprises may choose to invest in or lease their own dark fiber network between data centers.
  • Operational Scaling: Private, hybrid, or public cloud? While the right economic and operational strategies will vary by vertical and company, all enterprises will need to determine how much (none, some, or all) of their IoT functions will leverage cloud scaling. Some enterprises will opt to keep all functions, including device and network management and big data analytics, in-house. Others may opt to partner with service providers for a range of functions, such as mobile device connectivity, offsite disaster recovery, or SaaS for analytics.

Service providers, network operators and network equipment manufacturers will have their hands full enabling the Internet of Things over the next few years. Cisco estimates that it’s poised to be a $19 trillion market. That’s huge, and we’ve already begun to see some early traction: Google just acquired Nest for $3.2 billion to bolster its connected-home play. There’s a company that makes a smart onesie for babies so parents know if their baby has rolled over. Cities are becoming smarter and more connected than ever. While the enterprise-related discussion might not be as quirky as what we’ve seen on the consumer side, rest assured that a shift this large will definitely require some thinking on our side as well.

Thanks to Enterprise Networking Planet and JDSU for the article.