What is Driving Demand for Deeper Traffic Analysis?


During a customer review call last week, we got a very interesting quote from a US-based user who offers marketing services to the retail sector: ‘We need greater insight into what is taking place on our internal network, systems, services, and external web farm, seen through a single portal. We need to keep downtime to a minimum, both internally and on our external customer-facing web farm. We chose LANGuardian because of its integration with SolarWinds and its deep-packet inspection capabilities.’

Before discussing this in more detail: because of all the hype these days, we now always ask about the cloud as well. When we asked this contact about hosting these critical services in the cloud, he countered with three reasons for keeping them in house:

  1. Security
  2. Control
  3. Cost

When pressed on ‘cost’, he mentioned that they were shipping huge amounts of data; if they hosted and stored this in the cloud, the bandwidth- and storage-related charges would be huge and would not make economic sense.

Back to deeper traffic analysis: it turns out this customer had already purchased and installed a NetFlow-based product to try to get more visibility and focus on his critical server farm, his external/public-facing environment. His business requires him to be proactive to keep downtime to a minimum and keep his customers happy. But, as they also mentioned to us: ‘With NetFlow we almost get to the answer, and then sometimes we have to break out another tool like Wireshark. Now with NetFort DPI (Deep Packet Inspection) we get the detail NetFlow does NOT provide: true endpoint visibility.’

What detail? What detail did this team use to justify the purchase of another monitoring product to management? I bet it was not as simple as ‘I need more detail and visibility into traffic, please sign this’! We know that with tools like Wireshark one can get down to a very low level of detail, down to the ‘bits and bytes’. But sometimes that is too low: far too much detail, overly complex for some people, making it very difficult to see the wood for the trees and get the big picture.

One critical capability we in NetFort sometimes take for granted is the level of insight our DPI can provide into web or external traffic. It does not matter whether the traffic goes via a CDN, a proxy, or anything else: with deep packet inspection one can look deeper to get the detail required. Users can capture and keep every domain name, even URI and IP address, and, critically, the amount of data transferred, tying the IP address and URI to bandwidth. As a result, this particular customer is now able to monitor usage of every single resource or service they offer: who is accessing that URI, service, or piece of data, when, how often, how much bandwidth the customer accessing that resource is consuming, and so on.
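LANGuardian's internal data model is not public, but the kind of rollup described above, tying a host and URI to bytes transferred and to the clients consuming it, can be sketched with synthetic records:

```python
from collections import defaultdict

# Synthetic records of the kind a DPI engine can extract from web traffic:
# client IP, destination host, URI, and bytes transferred.
records = [
    {"client": "10.0.0.5", "host": "files.example.com", "uri": "/reports/q1.pdf", "bytes": 4_200_000},
    {"client": "10.0.0.5", "host": "files.example.com", "uri": "/reports/q1.pdf", "bytes": 1_800_000},
    {"client": "10.0.0.9", "host": "files.example.com", "uri": "/index.html", "bytes": 12_000},
]

# Tie each (host, URI) pair to total bandwidth and to the clients consuming it.
usage = defaultdict(lambda: {"bytes": 0, "clients": set()})
for r in records:
    key = (r["host"], r["uri"])
    usage[key]["bytes"] += r["bytes"]
    usage[key]["clients"].add(r["client"])

# Rank resources by bandwidth consumed, busiest first.
top = sorted(usage.items(), key=lambda kv: kv[1]["bytes"], reverse=True)
for (host, uri), stats in top:
    print(host + uri, stats["bytes"], sorted(stats["clients"]))
```

The same rollup, keyed by server instead of by URI, answers the "busiest server, top users" questions mentioned below.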

Users can also trend this information to help detect unusual activity or assist with capacity planning. This customer also mentioned that with deeper traffic analysis they were able to take a group of servers each week and really analyze usage: find the busiest server, the least busy, the top users, who was using up their bandwidth, and what they were accessing. They get to the right level of detail, the evidence required to make informed decisions and plan.

CDN (Content Delivery Network) usage has increased dramatically recently, making life very difficult for network administrators trying to keep tabs on bandwidth usage and generate meaningful reports on it. We had a customer recently who powered up a bunch of servers and saw a huge peak in bandwidth consumption. With NetFlow, the domain reported was an obscure CDN and meant nothing. LANGuardian reported huge downloads of data from windowsupdate.com by a particular IP address, and also reported the user name.

What was that about justification? How about simply greater insight to reduce downtime, maximise utilisation, increase performance, and reduce costs? All this means happier customers, less stress for the network guys and more money for everybody!

Thanks to NetFort for the article.

The Network Design and Equipment Deployment Lifecycle

As we all know, technology has a life cycle of birth, early adoption, mainstream, and then obsolescence. Even the average consumer is very in touch with this lifecycle. However, within this overarching lifecycle there are “mini” lifecycles. One of these mini lifecycles that is particularly important to enterprises is the network design and equipment deployment lifecycle. This lifecycle is the basic roadmap of how equipment gets deployed within a company data network and a key topic of concern for IT personnel. While it is its own lifecycle, it also aligns with the typical ITIL services of event management, incident management, IT operations management, and continual service improvement.

There are 5 primary stages to the network design and equipment deployment lifecycle: pre-deployment, installation and commissioning, assurance monitoring, troubleshooting, and decommissioning. I’ll disregard the decommissioning phase in this discussion as removing equipment is fairly straightforward. The other four phases are more interesting for the IT department.
The adjacent diagram shows a map of the four fundamental components within this lifecycle. The pre-deployment phase is typically concerned with lab verification of the equipment and/or point solution. During this phase, IT spends time and effort to ensure that the equipment/solution they are receiving will actually resolve the intended pain point.

During the installation and commissioning phase, the new equipment is installed, turned on, configured, connected to the network, and validated to ensure that it is functioning correctly. This is typically the least costly phase in which to find set-up problems. If those initial set-up problems are not caught and eliminated here, it is much harder and more costly to isolate them in the troubleshooting phase.

The assurance monitoring stage is the ongoing maintenance and administration phase. Equipment is monitored on an as-needed or routine basis (depending upon component criticality) to make sure that it’s functioning correctly. Just because alarms have not been triggered doesn’t mean the equipment is functioning optimally. Changes may have occurred in other equipment or the network that are propagating into other equipment downstream and causing problems. The assurance monitoring stage is often linked with proactive trend analysis, service level agreement validation, and quality of service inspections.

Troubleshooting is obviously the reactionary portion of the lifecycle devoted to fixing equipment and network problems so that the network can return to an optimized, steady state condition. Most IT personnel are extremely familiar with this stage as they battle equipment failures, security threats and network outages due to equipment problems and network programming changes.

Ixia understands this lifecycle well, and it is one of the reasons that it acquired BreakingPoint and Anue Systems during 2012. We have capabilities to help the IT department in all four of the aspects of the network design and equipment deployment lifecycle. These tools and services are focused on directly attacking key metrics for IT:

  • Decrease time-to-market for solutions to satisfy internal projects
  • Decrease mean-time-to-repair metrics
  • Decrease downtime metrics
  • Decrease security breach risks
  • Increase business competitiveness

The exact solution to achieve customer-desired results varies. Some simple examples include the following:

  • Using the NTO monitoring switch to give your monitoring tools the right information to gain the network visibility you need
  • Using the NTO simulator to test filtering and other changes before you deploy them on your network
  • Deploying the Ixia Storm product to assess your network security and also to simulate threats so that you can observe how your network will respond to security threats
  • Deploying various Ixia network testing tools (IxChariot, IxNetwork) to characterize the new equipment and network during the pre-deployment phase

Additional Resources:

Ixia Solutions

Network Monitoring

Related Products

Ixia Net Optics Network Taps
Ixia Net Optics network taps provide access for security and network management devices.

Ixia Net Tool Optimizers
Out-of-band traffic aggregation, filtering, dedup, load balancing

Thanks to Ixia for the article.

How to Deal With Unusual Traffic Detected Notifications

If you get an unusual traffic detected notification from Google, it usually means your IP address was, or still is, sending suspicious network traffic. Google can detect this and has recently implemented security measures to protect against DDoS attacks, other server attacks, and SEO rank manipulation.

The key thing to remember is that the notification is based on your Internet-facing IP address, not the private IP address assigned to your laptop/PC/device. If you don’t know what your Internet-facing (or public) IP address is, you can use something like this service.
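A quick local check of whether a given address is even publicly routable can be done with Python's standard `ipaddress` module, which knows the RFC 1918 private ranges, loopback, link-local addresses, and so on:

```python
import ipaddress

def is_public(ip: str) -> bool:
    """Return True if the address is routable on the public Internet."""
    addr = ipaddress.ip_address(ip)
    # is_global is False for RFC 1918 ranges, loopback, link-local, etc.
    return addr.is_global

print(is_public("192.168.1.10"))  # False: RFC 1918 private range
print(is_public("8.8.8.8"))       # True: public address
```

If `is_public` returns False for the address you are looking at, you are seeing a private address, not the one Google's notification refers to.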

Top tips for dealing with unusual traffic detected messages:

  1. Get an inventory. Do you have unknown devices on your network? There are many free applications which can do network scans. Another option is to deploy deep packet inspection tools which will passively detect what is running on your network.
  2. Monitor traffic on your Internet gateway. Watch out for things like network scans, traffic on unusual port numbers, and Tor traffic. I have included a video below which explains how you can do this.
  3. Track down the device using its MAC address. Network switches maintain a list of what MAC addresses are associated with what network switch ports. The guide at this link shows you how to do this on Cisco switches but similar commands are available on other switch models.
  4. See if your IP address is blacklisted. You can use something like http://www.ipvoid.com/ to see if your IP address is on known blacklists.
  5. If you cannot find any issues, talk to your ISP. Maybe you need an IP change. IP addresses are recycled so it could be that you were allocated a dodgy one. This is a remote possibility so make sure you cover tips 1 to 4 first.
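Under the hood, most blacklist checkers like the one in tip 4 perform DNSBL lookups: the IPv4 octets are reversed, the blacklist zone is appended, and the resulting name is queried via DNS (a listing typically resolves to an address in 127.0.0.x). A sketch of the name construction, with the actual DNS resolution step omitted:

```python
def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNS name used to look up an IPv4 address in a DNSBL:
    reverse the octets and append the blacklist zone."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("IPv4 address expected")
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("203.0.113.7"))  # 7.113.0.203.zen.spamhaus.org
```

To perform the actual check you would resolve the returned name; a name that does not resolve (NXDOMAIN) means the address is not listed in that zone.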

How to Monitor Internet Activity Using a SPAN Port

Further reading

In a previous blog post I also looked at how you can use LANGuardian to track down the source of unusual traffic on your network.

Blog Post: How to deal with “Google has detected unusual traffic from your network” notifications

Please don’t hesitate to get in contact with our support team if you are having an issue with an unusual traffic notification. They can help you quickly get to the root cause of issues associated with suspicious network traffic.

Thanks to NetFort for the article.

Key Factors in NCCM and CMDB Integration – Part 2 – Change Configuration and Backup

In Part 1 of this series I discussed how an NCCM solution and a CMDB can work together to create a more effective IT inventory system. In this post, I will be taking that a step further and show how your change configuration process will benefit from integration with that same CMDB.

In general, the process of implementing IT infrastructure change happens at three separate stages of an asset’s lifecycle.

  1. Initial deployment / provisioning
  2. In production / changes
  3. Decommissioning / removal

In each of these stages, there is a clear benefit to having the system(s) that are responsible for orchestrating the change be integrated with an asset inventory / CMDB tool. Let’s take a look at each one to see why.

1. Initial Deployment / Provisioning

When a new device is ready to be put onto the network, it must go through at least one (and probably many) pre-deployment steps in order to be configured for its eventual job in the IT system. From “out of the box” to “in production” requires at least the following:

  1. Installation / turn-on / pretest of HW
  2. Load / upgrade of SW images
  3. Configuration of “base” information like IP address / FQDN / Management configuration
  4. Creation / deployment of full configuration

This may also include policy/security testing and potentially manual acceptance by an authorized manager. It is best practice to control this process through an ITIL-compliant system using a software application which has knowledge of what is required at each step and controls the workflow and approval process. However, the CMDB / service desk can rarely, if ever, also process the actual changes to the devices. This is typically a manual process or (in the best case) is automated with an NCCM system. So, in order to coordinate that flow of activity, it is absolutely essential to have the CMDB be the “keeper” of the process and then “activate” the NCCM solution when it is time to make the changes to the hardware. The NCCM system should then be able to inform the CMDB that the activity was performed and also report back any potential issues or errors that may have occurred.
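The handshake described above, with the CMDB as keeper of the process and the NCCM system as executor reporting back, can be sketched as follows (class and method names are invented for illustration, not taken from any real product):

```python
# Hypothetical sketch: the CMDB owns the workflow and "activates" the NCCM
# system, which performs the change and reports the outcome back.

class NCCM:
    def apply_change(self, device: str, config: str) -> dict:
        """Push a configuration to a device and report success or failure."""
        try:
            # ... push config to the device here (omitted) ...
            return {"device": device, "status": "success", "errors": []}
        except Exception as exc:
            return {"device": device, "status": "failed", "errors": [str(exc)]}

class CMDB:
    def __init__(self, nccm: NCCM):
        self.nccm = nccm
        self.audit_log = []  # the CMDB records every reported outcome

    def execute_approved_change(self, device: str, config: str) -> dict:
        result = self.nccm.apply_change(device, config)
        self.audit_log.append(result)  # NCCM informs the CMDB of the result
        return result

cmdb = CMDB(NCCM())
print(cmdb.execute_approved_change("core-sw-01", "hostname core-sw-01"))
```

A production integration would add approval gates before `apply_change` and, as noted later in the article, at least one automatic retry before reporting a failure.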

2. In Production / Changes

Once a device has been placed into production, at some point there will come a time when the device needs to have changes made to its hardware, software or configuration. Once again, the change control process should be managed through the CMDB / service desk. It is critical that, as this process begins, the CMDB has been kept up to date as to the current asset information. That way there are no “surprises” when it comes time to implement the changes. This goes back to having a standard re-discovery process which is performed on a known schedule by the NCCM system. We have found that most networks require a full rediscovery about once per week to be kept up to date, but we have also worked with clients that adjust this frequency up or down as necessary.

Just as in the initial deployment stage, it is the job of the NCCM system to inform the CMDB as to the state of the configuration job including any problems that might have been encountered. In some cases it is prudent to have the NCCM system automatically retry any failed job at least once prior to reporting the failure.

3. Decommissioning / Removal

When the time has come for a device to be removed from production and/or decommissioned, the same type of process should be followed as when it was initially provisioned (but in reverse). If the device is being replaced by a newer system, then part of (or potentially the whole) configuration may just be moved to the new hardware. This is where the NCCM system’s backup process comes into play. As per all NCCM best practices, there should be a regular schedule of backups in order to make sure the configuration is known and accessible in case of emergency.

Once the device has been physically removed from the network, it must also either be fully removed from the CMDB or at the very least be tagged as decommissioned. This has many benefits, including stopping the accidental purchase of support and maintenance on a device which is no longer in service, as well as preventing the NCCM system from attempting to perform discovery or configuration jobs on the device in the future (which would otherwise generate failures).

NCCM systems and CMDBs really work hand in hand to help manage the complete lifecycle of an IT asset. While it might be possible to accurately maintain two non-connected systems, the time and effort involved, not to mention the much greater potential for error, make the integration of your CMDB and NCCM tools a virtual necessity for large modern IT networks.

Top 20 Best Practices for NCCM
Thanks to NMSaaS for the article.

How Can We Monitor Traffic Associated with Remote Sites?

Many IT teams are now tasked with managing remote sites without the luxury of local IT support. Business owners expect everything to be done remotely; we do live in the connected age, don’t we? Is it possible to see what is happening in these networks without installing client or agent software everywhere?

You can gather some network performance information using SNMP or WMI, but you will be limited to alerts or high-level information. What you need is some form of deeper traffic analysis. Software applications that perform traffic analysis are ideal for troubleshooting LAN and link problems associated with remote sites.

There are two main technologies available for analyzing network traffic associated with remote sites: flow analysis and packet capture. Flow statistics are typically available from devices that can route data between two networks; most Cisco routers support NetFlow, for example. If your remote networks are flat (a single subnet) or you don’t have flow options on your network switches, then packet capture is a viable option.

You can implement packet capture by connecting a traffic analysis system to a SPAN or mirror port on a network switch at your remote site. You can then log onto your traffic analysis system remotely to see what is happening within these networks.


NetFort LANGuardian has multiple means of capturing data associated with remote sites. The most popular option is to install an instance of the LANGuardian software at your HQ. Sensors can be deployed on physical or virtual platforms at important remote sites. Data from these is stored centrally, so you get a single reference point for all traffic and security information across local and remote networks.

LANGuardian can also capture flow-based statistics such as NetFlow, IPFIX and sFlow; routers and switches at the remote sites can be configured to send flow records to LANGuardian. Watch out for issues associated with NetFlow, as it has limitations when it comes to monitoring cloud computing applications.

Download White Paper

How to monitor WAN connections with NetFort LANGuardian

Download this whitepaper which explains in detail how you can monitor WAN connections with NetFort LANGuardian


How To Find Bandwidth Hogs

Thanks to NetFort for the article.

Webinar- Best Practices for NCCM


Most networks today have a “traditional” IT monitoring solution in place which provides alarming for devices, servers and applications. But as the network evolves, so do its complexity and security risks, and it now makes sense to formalize the process, procedures, and policies that govern access and changes to these devices. Vulnerability and lifecycle management also play an important role in maintaining the security and integrity of network infrastructure.

Network Configuration and Change Management (NCCM) is the “third leg” of IT management, with traditional Performance and Fault Management (PM and FM) being the first two. The focus of NCCM is to ensure that as the network grows, there are policies and procedures in place to ensure proper governance and eliminate preventable outages.

Eliminating misapplied configurations can reduce the share of network performance and security issues they cause from 90% to 10%.

Learn about the best practices for Network Configuration and Change Management to both protect and report on your critical network device configurations:

  1. Enabling of Real-Time Configuration Change Detection
  2. Service Design Rules Policy
  3. Auto-Discovery Configuration Backup
  4. Regulatory Compliance Policy
  5. Vendor Default and Security Access Policies
  6. Vulnerability Optimization and Lifecycle Announcements

Date: October 28th
Time: 2:00pm Eastern


Register for webinar NOW: http://hubs.ly/H01gB720

SDN/NFV – From Theory to Praxis with Infosim® StableNet®

InterComms talks to Marius Heuler, CTO Infosim®, about Infosim® StableNet® and the management and orchestration of SDN and NFV environments

Marius Heuler has more than 15 years of experience in network management and optimization. As CTO and founding member of Infosim®, he is responsible for leading the Infosim® technical team in architecting, developing, and delivering StableNet®. He graduated from the University of Würzburg with a degree in Computer Science, holds several Cisco certifications, and has subject matter expert knowledge in various programming languages, databases and protocol standards. Prior to Infosim®, Marius held network management leadership positions and performed project work for Siemens, AOK Bavaria and Vodafone.

Q: The terms SDN and NFV recently have been on everybody’s lips. However, according to the critics, it is still uncertain how many telcos and enterprises use these technologies already. What is your point of view on this topic?
A: People tend to talk about technologies and ask for the support of a certain interface, service, or technology. Does your product support protocol X? Do you offer service Y? What about technology Z?

Experience shows that when looking closer at the actual demand, it is often not the particular technology, interface, or service people are looking for. What they really want is a solution for their particular case. That is why I would rather not expect anybody to start using SDN or NFV as an end in itself. People will start using these technologies once they see that it is the best (and most cost-efficient) way to relieve their pain points.

Andrew Lerner, one of the Gartner Blog Network members, recently gave a statement pointing in exactly the same direction, saying that Gartner won’t publish an SDN Magic Quadrant, “because SDN and NFV aren’t markets. They are an architectural approach and a deployment option, respectively.”


Q: You have been talking about use cases for SDN and NFV. A lot of these use cases are also being discussed in different standardization organizations or in research projects. What is Infosim®’s part in this?
A: There are indeed a lot of different use cases being discussed and, as you mentioned, a lot of different standardization and research activities are in progress. At the moment, Infosim® is active in this area in various ways: We are a member of TM Forum and recently also joined the ETSI ISG NFV. Furthermore, we follow the development of different open source activities, such as the OpenDaylight project, ONOS, or OPNFV, just to name a few. Besides this, Infosim® is part of several national and international research projects in the area of SDN and NFV, where we are working together with other subject matter experts and researchers from academia and industry. Topics cover, among others, the operation and management of SDN and NFV environments as well as security aspects. Last but not least, Infosim® is also in contact with various hardware and software vendors regarding these topics. We look equally at open source and proprietary solutions.

Q: Let us talk about solutions then: With StableNet® you are actually quite popular and successful in offering a unified network management solution. How do SDN and NFV influence the further development of your offering?
A: First of all, we are proud to be one of the leading manufacturers of automated Service Fulfillment and Service Assurance solutions. EMA™ has rated our solution as the most definitive Value Leader in the EMA™ Radar for Enterprise Network Availability Monitoring Systems in 2014. We do not see ourselves as one of the next companies to develop and offer their own SDN controller or cloud computing solution. Our intent is rather to bring our well-known strength in unified network management to the SDN/NFV space as well. This includes topics like Service Assurance, Fault Management, Configuration and Provisioning, Service Modelling, etc.

Q: Are there any particular SDN controller or cloud computing solutions you can integrate with?
A: There is a wide range of different SDN controllers and cloud computing solutions that are currently of general interest. In its current SDN controller report, SDxCentral gave an overview and comparison of the most common open source and proprietary SDN controllers. None of these controllers can be named a definite leader. Equally, regarding the NFV area, the recent EMA™ report on Open Cloud Management and Orchestration showed that besides the commonly known OpenStack there are also many other cloud computing solutions that enterprises are looking at and considering working with.

These developments remind me of something that, with my experience in network management, I have known for over a decade now. Also when looking at legacy environments there have always been competing standards. Despite years of standardization activities of various parties, often none of the competing standards became the sole winner and rendered all other interfaces or technologies obsolete. In fact, there is rather a broad range of various technologies and interfaces to be supported by a management system.

This is one of the strengths that we offer with StableNet®. We currently support over 125 different standardized and vendor-specific interfaces and protocols in one unified network management system. Besides this, with generic interfaces for both monitoring and configuration purposes, we can easily integrate with any structured data source through the simple creation of templates rather than the complicated development of new interfaces. This way, we can shift the main focus of our product and development activities to the actual management and orchestration rather than the adaptation to new data sources.

Q: Could you provide some examples here?
A: We continuously work on extending StableNet® with innovative new features to further automate the business processes of our customers and to simplify their daily work. Starting with Version 7, we have extended our existing integration interfaces with a REST API to further ease integration with third-party products. With Dynamic Rule Generation, the Distributed Syslog Portal, and Status Measurements we offer the newest technologies for efficient alarming and fault management. Our StableNet® Embedded Agent (SNEA) allows for ultra-scalable, distributed performance monitoring as well as for the management of IoT infrastructures. Being part of our unified network management solution, all these functionalities, including the ultra-scalable and vendor-agnostic configuration management, can equally be used in the context of SDN and NFV. A good way to keep up to date with our newest developments is our monthly Global Webinar Days. I would really recommend having a look at those.
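As a rough illustration of what driving a management system's REST API from a third-party tool can look like: the sketch below builds a JSON POST request with only the Python standard library. The base URL, endpoint path, and payload are invented for illustration; StableNet®'s actual API routes are documented by Infosim®.

```python
import json
import urllib.request

# Hypothetical base URL of a management system's REST API.
BASE = "https://stablenet.example.local/api"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Construct (but do not send) a JSON POST request to the API."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: registering a device via a hypothetical /devices endpoint.
req = build_request("/devices", {"name": "edge-rtr-01", "ip": "10.0.0.1"})
print(req.full_url, req.method)
```

Sending the request (`urllib.request.urlopen(req)`) and handling authentication are omitted, since those details are product-specific.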

Q: As a last question, since we have the unique chance to directly talk with the CTO of Infosim®, please let us be a little curious. What key novelties can people expect to come next from Infosim®?
A: There are of course many things that I could mention here, but the two areas that will probably have the most significant impact on management and orchestration are our new service catalog and the new tagging concept. With the service catalog the management is moved from a rather device- or server-based perspective to a holistic service-based view. This tackles both the monitoring and the configuration perspective and can significantly simplify and speed up common business processes. This is of course also related to our new tagging concept.

This new approach is a small revolution in the way that data can be handled for management and orchestration. We introduce the possibility of an unlimited number of customizable tags for each entity, be it a device, an interface, or an entire service, and combine this with automated relations and inheritance of properties between the different entities. Furthermore, entities can be grouped in an automated way according to arbitrary tag criteria. This significantly extends the functionality, usability, and also the visualization possibilities.
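The tagging idea can be sketched as follows. StableNet®'s actual data model is not public, so the entity names, tag keys, and inheritance semantics here are purely illustrative:

```python
# Hypothetical sketch: entities carry arbitrary tags, children inherit
# tags from their parent, and entities can be grouped by tag criteria.

class Entity:
    def __init__(self, name, parent=None, **tags):
        self.name, self.parent, self._tags = name, parent, tags

    @property
    def tags(self):
        inherited = self.parent.tags if self.parent else {}
        return {**inherited, **self._tags}  # own tags override inherited ones

def group_by(entities, **criteria):
    """Return names of entities whose effective tags match all criteria."""
    return [e.name for e in entities
            if all(e.tags.get(k) == v for k, v in criteria.items())]

device = Entity("core-sw-01", site="Berlin", role="core")
iface = Entity("Gi0/1", parent=device, role="uplink")  # inherits site, overrides role

print(iface.tags)
print(group_by([device, iface], site="Berlin"))
```

Grouping by `site` here pulls in both the device and its interface even though only the device was tagged directly, which is the kind of automated relation the interview describes.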

Thanks to InterComms for the article.