Network Instruments State of the Network Global Study 2015

Eighth Annual “State of the Network” Global Study from JDSU’s Network Instruments Finds 85 Percent of Enterprise Network Teams Now Involved in Security Investigations

Deployment Rates for High-Performance Network Visibility and Software Defined Solutions Expected to Double in Two Years

Network Instruments, a JDSU Performance Management Solution, today released the results of its eighth annual State of the Network global study. Based on insight gathered from 322 network engineers, IT directors and CIOs around the world, the study found that 85 percent of enterprise network teams are now involved in security investigations, indicating a major shift in the role of those teams within enterprises.

Large-scale and high-profile security breaches have become more common as company data establishes itself as a valuable commodity on the black market. As such, enterprises are now dedicating more IT resources than ever before to protect data integrity. The Network Instruments study illustrates how growing security threats are affecting internal resources, identifies underutilized resources that could help improve security, and highlights emerging challenges that could rival security for IT’s attention.

As threats continue to escalate, one quarter of network operations professionals now spend more than 10 hours per week on security issues, an average uptick of 25 percent since 2013, and are becoming increasingly accountable for securing data. Additionally, network teams’ security activities are diversifying. Teams are increasingly implementing preventative measures (65 percent), investigating attacks (58 percent) and validating security tool configurations (50 percent). When dealing with threats, half of respondents indicated that correlating security issues with network performance is their top challenge.

“Security is becoming so much more than just a tech issue. Regular media coverage of high-profile attacks and the growing number of malware threats that can plague enterprises – and their business – has thrust network teams capable of dealing with them into the spotlight. Network engineers are being pulled into every aspect of security, from flagging anomalies to leading investigations and implementing preventative measures,” said Brad Reinboldt, senior product manager for Network Instruments. “Staying on top of emerging threats requires these teams to leverage the tools they already have in innovative ways, such as applying deep packet inspection and analysis from performance monitoring solutions for advanced security forensics.”

The full results of the survey, available for download, also show that emerging network technologies have gained greater adoption over the past year.

Highlights include:

  • 40/100 Gigabit Ethernet and SDN approaching mainstream: Year-over-year implementation rates for 40 Gb, 100 Gb and SDN in the enterprise have nearly doubled, according to the companies surveyed. This growth rate is projected to continue over the next two years as these technologies approach more than 50 percent adoption. Conversely, survey respondents were less interested in 25 Gb technology, with over 62 percent indicating no plans to invest in equipment using the newer Ethernet specification.
  • Enterprise Unified Communications remains strong but lacks performance-visibility features: The survey shows that Voice-over-IP, videoconferencing and instant messaging technologies, which enable deeper collaboration and rich multimedia experiences, continue making strides in the enterprise, with over 50 percent penetration. Additionally, as more applications are virtualized and migrated to the cloud, new visibility challenges and new sources of performance degradation and delay are introduced. To that end, respondents cited a lack of visibility into the end-user experience as a chief challenge. Without visibility into what is causing issues, tech teams can’t ensure uptime and return on investment.
  • Bandwidth use expected to grow 51 percent by 2016: Projected bandwidth growth is a clear factor driving the rollout of larger network pipes. This year’s study found the majority of network teams are predicting a much larger surge in bandwidth growth than last year, when bandwidth was only expected to grow by 37 percent. Future bandwidth growth is being fueled by multiple devices accessing network resources and by larger, more complex data such as 4K video. Real-time unified communications applications are also expected to put more strain on networks, while unified computing, private cloud and virtualization initiatives have the potential to create application overload on the backend.

Key takeaways: what can network teams do?

  • Enterprises need to be on constant alert and agile in aligning IT teams and resources to handle evolving threats. To be more effective in taking on additional security responsibilities, network teams should be trained to think like a hacker and recognize increasingly complex and nefarious network threats.
  • They should also leverage the performance monitoring and packet analysis tools network teams already use for security anomaly detection, breach investigations, and remediation assistance.
  • Security threats aren’t the only thing dictating the need for advanced network visibility tools that can correlate network performance with security and application usage. High-bandwidth activities including 4K video, private clouds and unified communications are gaining traction in the enterprise as well.

State of the Network Global Study Methodology

Network Instruments has conducted its State of the Network global study for eight consecutive years, drawing insight about network trends and painting a picture of the challenges IT teams face. Questions were designed based on interviews with network professionals as well as IT analysts. Results were compiled from the insights of 322 respondents, including network engineers, IT directors, and CIOs from around the world. In addition to geographic diversity, the study’s sample was evenly distributed across network sizes and business verticals. Responses were collected from December 16, 2014 to December 27, 2014 via online surveys.

Thanks to Network Instruments for the article. 

Infosim StableNet Legacy Refund Certificate (value up to $250,000.00)

Are you running on Netcool, CA eHealth or any other legacy network management solutions?

$$$Stop throwing away your money$$$

Infosim® will give you a certificate of product credit (value up to $250,000) for switching away from your legacy product maintenance spend.

Check whether your legacy NMS applies!

Fill out the request form and we can check whether your system matches one of the ten that qualify.

Find out your trade-up value!

Make your budget work this year!

Thank you!

Thanks to Infosim for the article.

The Importance of Using Network Discovery in your Business

Network discovery is not a single thing. In general terms, it is the process of gathering information about the resources on your network.

You may be asking: why is this even important to me? The primary reasons it is vital for your business to use network discovery are as follows:

  • If you don’t know what you have, you cannot hope to monitor and manage it.
  • You can’t track down interconnected problems.
  • You don’t know when something new comes on the network.
  • You don’t know when you need upgrades.
  • You may be paying too much for maintenance.

All of these factors are vital to maintaining the success of your company’s network resources.

One of the most important points I’ve mentioned is not knowing what you have; this is a huge problem for many companies. If you don’t know what you have, how can you manage or monitor it?

Most of the time in network management you’re trying to track down potential issues within your network and work out how to resolve them. This is a very hard task, especially if you’re dealing with a large-scale network. If one thing goes down within the network, it starts a ripple effect, and more parts of the network will in turn start to go down.

All of these problems are easily fixed. NMSaaS has network discovery capabilities with powerful and flexible tools that let you determine exactly what should be monitored.

These elements are automatically labeled and grouped. This makes automatic data collection possible, as well as threshold monitoring and reporting on already discovered elements.

As a result, we can access critical details like IP address, MAC address, OS, firmware, services, memory, serial numbers, interface information, routing information, and neighbor data, all available at the click of a button or as a scheduled report.
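
To make this concrete, here is a minimal sketch of the simplest form of discovery, a ping sweep, using only the Python standard library. The subnet is a placeholder, and a real discovery product such as NMSaaS would additionally use protocols like SNMP and CDP/LLDP to collect the details listed above; this sketch only finds which addresses respond.

```python
import ipaddress
import subprocess
from concurrent.futures import ThreadPoolExecutor

def ping(host: str) -> bool:
    """Return True if the host answers a single ICMP echo request."""
    # "-c 1" sends one probe, "-W 1" waits one second (Linux ping syntax).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def discover(subnet: str) -> list[str]:
    """Ping-sweep a subnet and return the addresses that responded."""
    hosts = [str(ip) for ip in ipaddress.ip_network(subnet).hosts()]
    with ThreadPoolExecutor(max_workers=64) as pool:
        alive = list(pool.map(ping, hosts))
    return [h for h, ok in zip(hosts, alive) if ok]

if __name__ == "__main__":
    # 192.0.2.0/28 is a documentation range; substitute your own subnet.
    for host in discover("192.0.2.0/28"):
        print("discovered:", host)
```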

Thanks to NMSaaS for the article.

Beat the Clock with Sapling’s TalkBack Technology

Sapling’s TalkBack Clock System

Depending on the size of your facility and how many buildings the company and/or organization occupies, you could have anywhere from 10 to 1,000+ clocks installed throughout. For a facility manager and his/her staff, that could potentially be a lot of clocks to keep an eye on. Not to mention, a whole slew of other equipment to monitor and issues that could arise within a company or organization at any moment.

Sapling has engineered a Wireless Clock System with TalkBack Technology to help facility departments easily monitor all the clocks within the system and head off major issues before they occur. Not only does Sapling’s Wireless TalkBack System provide all the secondary clocks with the most accurate time, it also allows the secondary clocks to report their status back to the master clock. The secondary clocks can report back the following:

  • Current battery levels
  • Signal strength
  • Mechanical (analog) or display (digital) alerts
  • Last time the clock(s) received a signal

The clock system’s administrator can access all this information and much more through Sapling’s master clock web interface. To gain access to the web interface, you must retrieve the IP address of the master clock (this is explained in detail in the master clock’s manual). Once the master clock’s IP address is located, you can type the address into a web browser on a computer connected to the same network as the master clock. After logging into the web interface and clicking on the TalkBack tab, you will see a list of every TalkBack Wireless clock within the system. This tab gives you the location of each clock (chosen by the clock system’s administrator at the time of installation) and all the information needed to determine the status of the clocks within the system.

Sapling’s Wireless TalkBack System allows the system’s administrator to be in total control of the clock system. There is no need to wonder if the clock system is operating correctly; the facility department will have all the necessary information at their fingertips.

Thanks to Sapling for the article.

Troubleshooting Cheat Sheet: Layers 1-3

Any time you encounter a user complaint, whether regarding slow Internet access, application errors, or other issues that impact productivity, it is important to begin with a thorough understanding of the user’s experience.

Not sure where to begin? Mike Motta, NI University instructor and troubleshooting expert, places the typical user complaints into three categories: slow network, inability to access network resources, and application-specific issues.

Based upon the complaint, Motta asks questions to better understand the symptoms and to isolate the issue to the correct layer of the Open Systems Interconnection (OSI) model.

The following Troubleshooting Cheat Sheet shows the questions to ask with a typical slow network complaint.

What to Ask: What type of application is being used? Is it web-based? Is it commercial, or a homegrown application?
What it Means: Determines whether the person is accessing local or external resources.

What to Ask: How long does it take the user to copy a file from the desktop to the mapped network drive and back?
What it Means: Verifies they can send data across the network to a server, and allows you to evaluate the speed and response of the DNS server.

What to Ask: How long does it take to ping the server of interest?
What it Means: Validates they can ping the server and obtain the response time.

What to Ask: If the time is slow for a local server, how many hops are needed to reach the server?
What it Means: Confirms the number of hops taking place. Look at switch and server port connections, speed to the client, and any errors.
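
The timing checks above are easy to automate. Below is a minimal sketch, assuming a Linux host with the standard ping and traceroute utilities installed; the server name is a placeholder for the server of interest.

```python
import re
import subprocess

def ping_ms(host: str):
    """Ping once and return the round-trip time in milliseconds (None on failure)."""
    out = subprocess.run(
        ["ping", "-c", "1", host], capture_output=True, text=True
    ).stdout
    match = re.search(r"time=([\d.]+) ms", out)
    return float(match.group(1)) if match else None

def hop_count(host: str) -> int:
    """Count the hops reported by traceroute (one numbered line per hop)."""
    out = subprocess.run(
        ["traceroute", "-n", host], capture_output=True, text=True
    ).stdout
    return len([line for line in out.splitlines() if re.match(r"\s*\d+\s", line)])

if __name__ == "__main__":
    server = "fileserver.example.com"  # placeholder for the server of interest
    print(f"RTT: {ping_ms(server)} ms, hops: {hop_count(server)}")
```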

For the full Troubleshooting Cheat Sheet with extended symptom and question lists and expanded troubleshooting guidance, download Troubleshooting OSI Layers 1-3.

Quick OSI Layer Review

With these questions answered, working through the OSI model is a straightforward process. Understanding how each layer delivers data and functions will shape how you troubleshoot that layer.

Physical Layer

  • If it can blind or shock you, think Physical Layer
  • Defines physical characteristics of cables and connectors
  • Provides the interface between network and network devices
  • Describes the electrical, light, or radio data stream signaling

Data Link Layer

  • Converts signals into bits which become the packet data that everyone wants
  • Performs error detection and correction of the data streams
  • Manages flow and link control between the physical signaling and network
  • Constructs and synchronizes data frames

Network Layer

  • Controls logical addressing, routing, and packet generation
  • Carries out congestion control and error handling
  • Performs route monitoring and message forwarding

Assessing the Physical Layer

Generally speaking, Physical Layer symptoms can be classified into two groups: outage issues and performance issues. In most cases, investigating outage issues is the easiest place to begin, as it’s a matter of confirming that a link light is out or that a box is not functioning. Validating equipment failure is likewise a matter of replacing the cable or switch and confirming everything works.

“I can’t tell you how many Physical Layer issues are overlooked by people pinging or looking at NetFlow for the problem, when in reality it’s a Layer 1 issue caused by a cable, jack, or connector,” says Tony Fortunato, Senior Network Performance Specialist and Instructor with the Technology Firm.

The next step in investigating Physical Layer issues is delving into performance problems. This means not only dealing with more complex issues, but also having the correct tools to diagnose degraded performance. Fortunato also urges teams to have essential testing equipment on hand. “Additionally in your tool box for physical issues be sure to have a cable tester for cabling problems. For other performance issues use a network analyzer or SNMP poller,” he says.

Assessing Physical Performance Errors

In diagnosing performance issues with a network analyzer, you’ll notice common patterns in the errors, which usually indicate what’s causing the Physical Layer problem. These can be divided into intelligent and non-intelligent errors.

Intelligent Errors: An intelligent host is smashing into your network signal and corrupting the data.

Example: Overloaded WiFi network or a busy channel.

Non-Intelligent Errors: An outside entity causing noise that interferes with the signal or flow of data across the network.

Example: A microwave interfering with a WiFi signal.

Climbing Further up the Stack

Confirming performance problems, taking a systematic approach to troubleshooting, and understanding how communication occurs across the layers of the OSI model are key to slashing troubleshooting times and improving resolution accuracy.

Ready for the next step? Download Troubleshooting OSI Layers 1-3 for an in-depth view of troubleshooting strategies and Cheat Sheets for navigating through the first three layers of the OSI model.

Thanks to Network Instruments for the article. 

Ixia’s Virtual Visibility with ControlTower and OpenFlow

Ixia is announcing support for OpenFlow SDN in Ixia’s ControlTower architecture. Our best-in-breed Visibility Architecture now extends data center visibility by taking advantage of a plethora of qualified OpenFlow hardware.

ControlTower is our innovative platform for distributed visibility, launched nearly two years ago. This solution manages a cluster of our Net Tool Optimizers (NTOs) as if it were a single logical NTO. At the time of its launch, we leveraged Software Defined Networking (SDN) concepts to achieve powerful distributed monitoring for data centers and campus networks. The drag-and-drop GUI, advanced packet processing, and patented filter compiler allow multiple users to manage and optimize traffic across the cluster without interfering with each other. We had a great response from customers to the ControlTower concept; they loved how we took very complex routing and rules-calculation problems and boiled them down to an easy-to-use, single-pane-of-glass GUI (or API), even when spanning multiple NTOs.

Our announcement takes ControlTower one giant leap further by allowing qualified OpenFlow switches to become members of a ControlTower cluster, incorporating them under one powerful and simple management console and extending network visibility capabilities throughout the data center. You don’t need to be an OpenFlow expert: just hook up your OpenFlow switches and we take care of the complicated management. You get all the benefits of our straightforward GUI and advanced features for the entire cluster.

We heard from many customers that scalable, cost-effective network visibility is critical to operating a secure, high-performance data center. They need analytics tools that access any segment of the network quickly and easily. Monitored traffic must be filtered and optimized to ensure tools are used efficiently. Customers need to focus on optimizing application performance and heading off security issues in every part of their data center, not on managing switch ACLs, CLIs, forwarding rulesets, interconnects, and the like.

Ixia responded by enhancing ControlTower to recognize OpenFlow devices, allowing customers to scale our powerful visibility features across hundreds of OpenFlow ports. Today, ControlTower is qualified to work with HP, Dell, and Arista OpenFlow switches—and we will expand the list further in the future.

This addition to the ControlTower platform is exciting for several reasons:

  • The powerful advanced features of ControlTower can now be applied across more of your network for greater visibility.
  • You don’t need to be conversant in OpenFlow or deploy an SDN controller; we take care of all the complexity of managing the OpenFlow switches. Just hook them up and our software takes control of the configuration details.
  • We provide a RESTful API for integration with automation (see the sketch after this list).
  • You can apply features such as Dynamic Filters, Packet Deduplication, ATIP (Application Threat Intelligence Processor), TimeStamping, Packet Trimming, and Traffic Shaping to any traffic in the cluster.
  • OpenFlow is ubiquitous among Ethernet switch vendors, presenting a tremendous range of deployment options.
  • OpenFlow helps future-proof your visibility architecture by incorporating future developments in speed, density and capacity.
  • You have the flexibility to share precious switching hardware and rack space between production and visibility networks.
  • You can easily partition a switch, with some OpenFlow ports for network visibility and some ports for normal production traffic. The production partition doesn’t even need to run OpenFlow; it can be a basic L2 Ethernet switch!
  • You can easily provision more visibility ports dynamically as your network expands or changes.
  • Ixia’s extensive OpenFlow expertise enabled us to make this advancement. Ixia was first to test OpenFlow technologies with our IxNetwork product several years ago, and we have been very active in the development of the OpenFlow standard.
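
For a feel of what driving a packet-broker cluster from a script looks like, here is a hedged Python sketch using the requests library. The endpoint path, payload fields, and credentials are hypothetical placeholders, not Ixia’s documented API; consult the actual NTO/ControlTower API reference for the real resource names.

```python
import requests

# All values below are hypothetical placeholders; the real ControlTower/NTO
# REST resource names, payload schema, and auth scheme will differ.
BASE_URL = "https://nto.example.com/api"
AUTH = ("admin", "secret")  # placeholder credentials

def create_filter(name: str, network_port: str, tool_port: str) -> dict:
    """Create a dynamic filter steering traffic from a network port to a tool port."""
    payload = {
        "name": name,
        "source_port": network_port,
        "dest_port": tool_port,
        "criteria": {"vlan": 100},  # example L2 match condition
    }
    resp = requests.post(f"{BASE_URL}/filters", json=payload, auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(create_filter("vlan100-to-ids", "P1-01", "P1-48"))
```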

Customers who have seen this new feature set have been very excited. ControlTower’s OpenFlow capabilities will help them reach all the corners of their data center and provide new flexibility to deploy network resources how they wish, with all the benefits of an end-to-end Network Visibility Architecture.

Additional Resources:

NTO ControlTower

Network Visibility Architecture

Thanks to Ixia for the article.

Infosim® Global Webinar Day: Return on Investment (ROI) for StableNet®

We all use network performance management systems to help improve the performance of our networks. But what is the return to the operations bottom line from using or upgrading these systems? This Thursday, March 26th, Jim Duster, CEO of Infosim, will hold a webinar: “How do I convince my boss to buy a network management solution?”

Jim will discuss:

Why would anyone buy a network management system in the first place?

  • Mapping a technology purchase to the business value of making a purchase
  • Calculating a value larger than the technology total cost of ownership (TCO)
  • Two ROI tools (Live Demo)

You can sign up for this 30-minute webinar here.

March 26, 4:00–4:30 EST

A recording of this Webinar will be available to all who register!

(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

Telus Expands Small Business Cloud Range with RingCentral

Canadian full-service telco Telus has launched ‘Business Connect’, a ‘complete integrated communications solution designed to meet the specific needs of small businesses’ offered in conjunction with cloud-based provider RingCentral, consisting of a suite of communications tools for office and mobile use. Business Connect includes local and toll-free numbers; an automated attendant and call routing; unlimited Canadian and US calling; audio and video conferencing; wireless back-up for office internet access; a single number that can be used across mobile phones, tablets, IP desk phones or PCs; as well as mobile apps to connect to business tools such as voicemail, fax and conferencing on the move. Telus’ press release says that its cloud-based solution ‘removes the cost, complexity and rigidity of on-premise systems.’ ‘The way we work is changing, and traditional enterprise phone systems can be complicated, costly and ineffective for small businesses,’ said Jim Senko, senior vice-president of small business at Telus.

Thanks to TeleGeography for the article.

Visibility Architectures Enable Real-Time Network Vigilance

A couple of weeks ago, I wrote a blog on how to use a network lifecycle approach to improve your network security. I wanted to come back and revisit this as I’ve had a few people ask me why the visibility architecture is so important. They had (incorrectly, IMO) been told by others to just focus on the security architecture and everything else would work out fine.

The reason you need a visibility architecture in place is simple: if you are attacked or breached, how will you know? During a DDoS attack you will most likely know because of website performance problems, but for most other attacks, how will you know?

This is actually a common problem. The 2014 Trustwave Global Security Report stated that 71% of compromised victims did not detect the breach themselves; they had no idea an attack had happened. The report also went on to say that the median number of days from initial intrusion to detection was 87! So most companies never detected the breach on their own (they had to be told by law enforcement, a supplier, a customer, or someone else), and it took almost three months after the breach for that notification to happen. This doesn’t sound like the optimum way to handle network security to me.

The second benefit of a visibility architecture is faster remediation once you discover that you have been breached. In fact, some Ixia customers have seen up to an 80% reduction in their mean time to repair (MTTR) after implementing a proper visibility architecture. If you can’t see the threat, how are you going to respond to it?

A visibility architecture is the way to solve these problems. Once you combine the security architecture with the visibility architecture, you equip yourself with the necessary tools to properly visualize and diagnose the problems on your network. But what is a visibility architecture? It’s a set of components and practices that allow you to “see” and understand what is happening in your network.

The basis of a visibility architecture starts with creating a plan. Instead of just adding components as you need them at sporadic intervals (i.e., crisis points), step back and take a larger view of where you are and what you want to achieve. This one simple act will save you time, money and energy in the long run.

Ixia's Network Visibility Architecture

The actual architecture starts with network access points. These can be either taps or SPAN ports. Taps are traditionally better because they don’t have the time delays, summarized data, duplicated data, and hackability that are inherent in SPAN ports. However, there is a problem if you try to connect monitoring tools directly to a tap: those tools become flooded with too much data, which overloads them, causing packet loss and CPU overload. It’s basically like drinking from a fire hose for the monitoring tools.

This is where the next level of visibility solutions, network packet brokers, enter the scene. A network packet broker (also called an NPB, packet broker, or monitoring switch) can be extremely useful. These devices filter traffic to send only the right data to the right tool. Packets are filtered at the layer 2 through layer 4 level. Duplicate packets can also be removed and sensitive content stripped before the data is sent to the monitoring tools, if required. This improves the efficiency and utility of your monitoring tools.
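
To make “filtered at the layer 2 through layer 4 level” concrete, here is a toy Python predicate over a simplified packet record. It only illustrates the kind of matching logic an NPB filter expresses; a real packet broker evaluates such rules in hardware at line rate.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """A simplified packet header with fields from OSI layers 2-4."""
    src_mac: str   # layer 2
    vlan: int      # layer 2
    src_ip: str    # layer 3
    dst_ip: str    # layer 3
    protocol: str  # layer 4 ("tcp" or "udp")
    dst_port: int  # layer 4

def matches_rule(pkt: Packet) -> bool:
    """Example rule: steer VLAN 100 web traffic bound for the server farm to a tool."""
    return (
        pkt.vlan == 100
        and pkt.dst_ip.startswith("10.1.")  # naive stand-in for a 10.1.0.0/16 match
        and pkt.protocol == "tcp"
        and pkt.dst_port in (80, 443)
    )

pkt = Packet("aa:bb:cc:dd:ee:ff", 100, "192.0.2.7", "10.1.4.20", "tcp", 443)
print(matches_rule(pkt))  # True: this packet would be forwarded to the monitoring tool
```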

Access and NPB products form the infrastructure part of the visibility architecture and focus on layers 2 through 4 of the OSI model. Above this are the components that make up the application intelligence layer of a visibility architecture, providing application-aware and session-aware visibility. This capability allows filtering and analysis further up the stack at the application layer (layer 7). It is only available in certain NPBs. Depending upon your needs, it can be quite useful, as you can collect the following information:

  • Types of applications running on your network
  • Bandwidth each application is consuming
  • Geolocation of application usage
  • Device types and browsers in use on your network
  • Filtering of data to monitoring tools based upon application type

These capabilities can give you quick access to information about your network and help to maximize the efficiency of your tools.

These layer 7 application oriented components provide high-value contextual information about what is happening with your network. For example, this type of information can be used to generate the following benefits:

  • Maximize the efficiency of current monitoring tools to reduce costs
  • Gather rich data about users and applications to offer a better Quality of Experience for users
  • Provide fast, easy-to-use capabilities to spot-check for security and performance problems

And then, of course, there are the management components that provide control of the entire visibility architecture: everything from global element management, to policy and configuration management, to data center automation and orchestration management. Engineering flexible management for network components will be a determining factor in how well your network scales.

Visibility is critical to this third stage (the production network) of your network’s security lifecycle that I referred to in my last blog. (You can view a webinar on this topic if you want.) This phase enables the real-time vigilance you will need to keep your network protected.

As part of your visibility architecture plan, you should investigate and be able to answer these three questions.

  1. Do you want to be proactive and aggressively stop attacks in real-time?
  2. Do you actually have the personnel and budget to be proactive?
  3. Do you have a “honey pot” in place to study attacks?

Depending upon those answers, you will have the design of your visibility architecture. As you can see from the list below, there are several different options that can be included in your visibility architecture.

  • In-line components
  • Out-of-band components
  • Physical and virtual data center components
  • Layer 7 application filtering
  • Packet broker automation
  • Monitoring tools

In-line and/or out-of-band security and monitoring components will be your first big decision. Hopefully everybody is familiar with in-line monitoring solutions. In case you aren’t, an in-line (also called bypass) tap is placed in-line in the network to allow access for security and monitoring tools. It should be placed after the firewall but before any other equipment. The advantage of this location is that should a threat make it past the firewall, it can be immediately diverted or stopped before it has a chance to compromise the network. The tap also needs heartbeat capability and the ability to fail closed so that, should any problems occur with the device, no data is lost downstream. After the tap, a packet broker can be installed to help direct traffic to the tools; some taps have this capability integrated into them. Depending upon your needs, you may also want to investigate taps that support High Availability options if the devices are placed in mission-critical locations. After that, a security device (such as an IPS) is inserted into the network.

In-line solutions are great, but they aren’t for everyone. Some IT departments just don’t have enough personnel and capabilities to properly use them. But if you do, these solutions allow you to observe and react to anomalies and problems in real-time. This means you can stop an attack right away or divert it to a honeypot for further study.

The next monitoring solution is an out-of-band configuration. These solutions are located further downstream within the network than the in-line solutions. The main purpose of this type of solution is to capture data post-event. Depending on whether the interfaces are automated, it is possible to achieve near real-time capabilities, but they won’t be completely real-time like the in-line solutions are.

Nevertheless, out-of-band solutions have some distinct and useful capabilities. The solutions are typically less risky, less complicated, and less expensive than in-line solutions. Another benefit of this solution is that it gives your monitoring tools more analysis time. Data recorders can capture information and then send that information to forensic, malware and/or log management tools for further analysis.

Do you need to consider monitoring for your virtual environments as well as your physical ones? Virtual taps are an easy way to gain access to vital visibility information in the virtual data center. Once you have the data, you can forward it on to a network packet broker and then on to the proper monitoring tools. The key here is to apply “consistent” policies for your virtual and physical environments. This allows for consistent monitoring policies, better troubleshooting of problems, and better trending and performance information.

Other considerations include whether you want to take advantage of automation capabilities and whether you need layer 7 application information. Most monitoring solutions only deliver layer 2 through 4 packet data, so layer 7 data could be very useful (depending upon your needs).

Application intelligence can be a very powerful tool. This tool allows you to actually see application usage on a per-country, per-state, and per-neighborhood basis. This gives you the ability to observe suspicious activities. For instance, maybe an FTP server is sending lots of files from the corporate office to North Korea or Eastern Europe—and you don’t have any operations in those geographies. The application intelligence functionality lets you see this in real time. It won’t solve the problem for you, but it will let you know that the potential issue exists so that you can make the decision as to what you want to do.
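
A toy version of that geolocation check might look like the sketch below, which flags flows whose destination country falls outside an allow-list. The flow records and the country lookup are stubs; a real application intelligence product resolves destinations against a full GeoIP database.

```python
# Hypothetical flow records: (src_ip, dst_ip, application, bytes_sent).
flows = [
    ("10.1.4.20", "198.51.100.9", "ftp", 48_000_000),
    ("10.1.4.21", "203.0.113.5", "https", 120_000),
]

# Stub lookup; a real product would consult a full GeoIP database.
GEOIP = {"198.51.100.9": "KP", "203.0.113.5": "US"}

ALLOWED_COUNTRIES = {"US", "CA"}  # where the business actually operates

def suspicious(flow) -> bool:
    """Flag flows whose destination country is outside the allow-list."""
    country = GEOIP.get(flow[1], "??")  # flow[1] is the destination IP
    return country not in ALLOWED_COUNTRIES

for src, dst, app, nbytes in flows:
    if suspicious((src, dst, app, nbytes)):
        print(f"ALERT: {src} -> {dst} ({app}, {nbytes} bytes)")
```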

Another example is that you can conduct an audit for security policy infractions. For instance, maybe your stated process is for employees to use Outlook for email. You’ve then installed anti-malware software on a server to inspect all incoming attachments before they are passed on to users. With an application intelligence product, you can actually see if users are connecting to other services (maybe Gmail or Dropbox) and downloading files through those applications. This practice would bypass your standard process and potentially introduce a security risk to your network. Application intelligence can also help identify compromised devices and malicious botnet activities through Command and Control communications.

Automation allows network packet brokers to initiate functions (e.g., apply filters, add connections to more tools, etc.) in response to external commands. This lets a switch/controller make real-time adjustments in response to suspicious activities or problems within the data network. The source of the command could be a network management system (NMS), a provisioning system, a security information and event management (SIEM) tool, or some other management tool on your network that interacts with the NPB.
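
As a rough sketch of that control loop, the decision logic reduces to mapping alert attributes to packet-broker actions. The send_to_npb() helper, alert fields, and action names below are hypothetical placeholders for whatever API your NPB and SIEM actually expose:

```python
def send_to_npb(action: str, **params) -> None:
    """Hypothetical stand-in for a real NPB API call (REST, CLI, etc.)."""
    print(f"NPB <- {action} {params}")

def handle_siem_alert(alert: dict) -> None:
    """Map a SIEM alert to an automated packet-broker adjustment."""
    if alert["severity"] >= 8:
        # Critical: mirror the suspect host's traffic to a forensic recorder.
        send_to_npb("add_filter", src_ip=alert["host"], tool="forensic-recorder")
    elif alert["type"] == "scan":
        # Port scan: start feeding the affected segment to the IDS.
        send_to_npb("add_connection", segment=alert["segment"], tool="ids")

# Example alert as a SIEM webhook might deliver it (fields are illustrative).
handle_siem_alert({"severity": 9, "type": "exfil",
                   "host": "10.1.4.20", "segment": "dc-east"})
```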

Automation for network monitoring will become critical over the next several years, especially as more of the data center is automated. The reasons for this are plain: how do you monitor your whole network at one time? How do you make it scale? You use automation capabilities to perform this scaling for you and provide near real-time response capabilities for your network security architecture.

Finally, you need to pick the right monitoring tools to support your security and performance needs. This obviously depends on the data you need and want to analyze.

The life-cycle view discussed previously provides a cohesive architecture that can maximize the benefits of visibility, such as the following:

  • Decrease MTTR up to 80% with faster analysis of problems
  • Monitor your network for performance trends and issues
  • Improve network and monitoring tool efficiencies
  • Save bandwidth and tool processing cycles with application filtering
  • Respond faster to anomalies with automation, without user administration
  • Scale network tools faster

Once you integrate your security and visibility architectures, you will be able to optimize your network in the following ways:

  • Better data to analyze security threats
  • Better operational response capabilities against attacks
  • The application of consistent monitoring and security policies

Remember, the key is that by integrating the two architectures you’ll be able to improve your root cause analysis, not just for security problems but for all network anomalies and issues that you encounter.

Thanks to Ixia for the article. 

Candela LANforge 5.3.1 Released

New Features & Improvements

  • Improved association times on the ath10k 802.11AC NIC
  • Improved stability of the 802.11AC (ath10k) driver
  • Support for 802.11w on virtual APs
  • Improved scripting for WiFi captive portal testing
  • …and over 25 more improvements

Download Release Notes here

Thanks to Candela for the article.