What is Driving Demand for Deeper Traffic Analysis?


During a customer review call last week, we got a very interesting quote from a US-based user who offers marketing services to the retail sector: ‘We need greater insight into what is taking place on our internal network, systems, services, and external web farm, seen through a single portal. We need to keep downtime to a minimum, both internally and on our external customer-facing web farm. We chose LANGuardian because of its integration with SolarWinds and its deep packet inspection capabilities.’

Before discussing this in more detail: with all the hype these days we now always ask about cloud, so we asked this contact about hosting these critical services in the cloud. He countered with three reasons for keeping them in house:

  1. Security
  2. Control
  3. Cost

When pressed on ‘cost’, he mentioned that they were shipping huge amounts of data; if they hosted and stored this in the cloud, the bandwidth and storage charges would be enormous and would not make economic sense.

Back to deeper traffic analysis. It turns out this customer had already purchased and installed a NetFlow-based product to try to get more visibility into his critical server farm, his external, public-facing environment. His business requires him to be proactive in order to keep downtime to a minimum and keep his customers happy. But, as they also mentioned to us: ‘With NetFlow we almost get to the answer, and then sometimes we have to break out another tool like Wireshark. Now with NetFort DPI (Deep Packet Inspection) we get the detail NetFlow does NOT provide: true endpoint visibility.’

What detail? What detail did this team use to justify the purchase of another monitoring product to management? I bet it was not as simple as ‘I need more detail and visibility into traffic, please sign this’! We know that with tools like Wireshark one can get down to a very low level of detail, down to the ‘bits and bytes’. But sometimes that is too low: far too much detail, overly complex for some people, making it very difficult to see the wood for the trees and get the big picture.

One critical detail we at NetFort sometimes take for granted is the level of insight our DPI can provide into web or external traffic. It does not matter whether the traffic goes via a CDN, a proxy, or anything else; with deep packet inspection one can look deeper to get the detail required. Users can capture and keep every domain name, even URI, and IP address AND, critically, the amount of data transferred, tying the IP address and URI to bandwidth. As a result, this particular customer is now able to monitor usage of every single resource or service they offer: who is accessing that URI, service, or piece of data, when, how often, how much bandwidth the customer accessing that resource is consuming, and so on.
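
As a rough illustration of the idea (not LANGuardian’s actual engine), the Python sketch below uses the scapy library to watch a mirror port and tally bytes against the HTTP Host header of each request it sees. The interface name and packet count are assumptions, and it only sees plain HTTP; a real DPI engine reassembles full streams and decodes many more protocols.

  from collections import Counter
  from scapy.all import sniff, IP, TCP, Raw

  bytes_per_host = Counter()

  def inspect(pkt):
      # Pull the Host: header out of plain HTTP requests and credit the
      # packet's size to that domain. Real DPI also handles HTTPS SNI,
      # CDN headers, stream reassembly, etc.
      if pkt.haslayer(TCP) and pkt.haslayer(Raw):
          payload = bytes(pkt[Raw])
          if b"Host:" in payload:
              host = payload.split(b"Host:")[1].split(b"\r\n")[0].strip()
              bytes_per_host[host.decode(errors="replace")] += len(pkt[IP])

  # "eth1" is assumed to be cabled to a SPAN/mirror port.
  sniff(iface="eth1", filter="tcp port 80", prn=inspect, store=False, count=1000)

  for host, nbytes in bytes_per_host.most_common(10):
      print(f"{host}: {nbytes} bytes")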

Users can also trend this information to help detect unusual activity or assist with capacity planning. This customer also mentioned that with deeper traffic analysis they were able to take a group of servers each week and really analyze usage: find the busiest server, the least busy, the top users, who was using up their bandwidth, and what they were accessing. In short, get to the right level of detail, the evidence required to make informed decisions and plan.

CDN (Content Delivery Network) usage has increased dramatically in recent years and is making life very difficult for network administrators trying to keep tabs on bandwidth usage and generate meaningful reports. We had a customer recently who powered up a bunch of servers and saw a huge peak in bandwidth consumption. With NetFlow, the domain reported was an obscure CDN and meant nothing. LANGuardian reported huge downloads of data from windowsupdate.com by a particular IP address, and also reported the user name.

What was that about justification? How about simply greater insight to reduce downtime, maximise utilisation, increase performance, and reduce costs. All this means happier customers, less stress for the network team, and more money for everybody!

Thanks to NetFort for the article.

How to Deal With Unusual Traffic Detected Notifications

If you get an unusual traffic detected notification from Google, it usually means your IP address was, or still is, sending suspicious network traffic. Google can detect this and has recently implemented security measures to protect against DDoS attacks, other server attacks, and SEO rank manipulation.

The key thing to remember is that the notification is based on your Internet-facing IP address, not the private IP address assigned to your laptop/PC/device. If you don’t know what your Internet-facing (or public) IP address is, you can use something like this service.
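
If you prefer to check it programmatically, a one-liner against a public echo service does the job. This sketch uses api.ipify.org, one of several such services:

  import requests

  # Ask a public service to echo back the address our traffic arrives from.
  public_ip = requests.get("https://api.ipify.org", timeout=5).text
  print(f"Your Internet-facing IP address is {public_ip}")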

Top tips for dealing with unusual traffic detected messages:

  1. Get an inventory. Do you have unknown devices on your network? There are many free applications that can do network scans. Another option is to deploy deep packet inspection tools, which will passively detect what is running on your network.
  2. Monitor traffic on your Internet gateway. Watch out for things like network scans, traffic on unusual port numbers, and TOR traffic. I have included a video below which explains how you can do this.
  3. Track down the device using its MAC address. Network switches maintain a list of what MAC addresses are associated with what network switch ports. The guide at this link shows you how to do this on Cisco switches but similar commands are available on other switch models.
  4. See if your IP address is blacklisted. You can use something like http://www.ipvoid.com/ to see if your IP address appears on known blacklists; the sketch after this list shows the same check done directly against DNS-based blacklists.
  5. If you cannot find any issues, talk to your ISP. Maybe you need an IP change. IP addresses are recycled, so it could be that you were allocated a dodgy one. This is a remote possibility, so make sure you cover tips 1 to 4 first.
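
Here is a minimal sketch of the blacklist check from tip 4, done directly against DNS-based blacklists (DNSBLs) rather than through a website. The zones listed are common public examples; an IP is considered listed if its reversed octets resolve under the zone:

  import socket

  DNSBL_ZONES = ["zen.spamhaus.org", "bl.spamcop.net"]  # common public zones

  def check_blacklists(ip):
      # DNSBL convention: 1.2.3.4 is listed on <zone> if 4.3.2.1.<zone> resolves.
      reversed_ip = ".".join(reversed(ip.split(".")))
      for zone in DNSBL_ZONES:
          try:
              socket.gethostbyname(f"{reversed_ip}.{zone}")
              print(f"{ip} is LISTED on {zone}")
          except socket.gaierror:  # NXDOMAIN means not listed
              print(f"{ip} is not listed on {zone}")

  check_blacklists("203.0.113.10")  # replace with your public IP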

How to Monitor Internet Activity Using a SPAN Port

Further reading

In a previous blog post I also looked at how you can use LANGuardian to track down the source of unusual traffic on your network.

Blog Post: How to deal with “Google has detected unusual traffic from your network” notifications

Please don’t hesitate to get in contact with our support team if you are having an issue with an unusual traffic notification. They can help you quickly get to the root cause of issues associated with suspicious network traffic.

Thanks to NetFort for the article.

Key Factors in NCCM and CMDB Integration – Part 2 – Change Configuration and Backup

In Part 1 of this series I discussed how an NCCM solution and a CMDB can work together to create a more effective IT inventory system. In this post, I will be taking that a step further and show how your change configuration process will benefit from integration with that same CMDB.

In general, the process of implementing IT infrastructure change happens at three separate stages of an asset’s lifecycle:

  1. Initial deployment / provisioning
  2. In production / changes
  3. Decommissioning / removal

In each of these stages, there is a clear benefit to having the system(s) that are responsible for orchestrating the change be integrated with an asset inventory / CMDB tool. Let’s take a look at each one to see why.

1. Initial Deployment / Provisioning

When a new device is ready to be put onto the network, it must go through at least one (and probably many) pre-deployment steps in order to be configured for its eventual job in the IT system. Getting from “out of the box” to “in production” requires at least the following:

  1. Installation / turn-on / pretest of HW
  2. Load / upgrade of SW images
  3. Configuration of “base” information like IP address / FQDN / Management configuration
  4. Creation / deployment of full configuration

This may also include security policy testing and potentially manual acceptance by an authorized manager. It is best practice to control this process through an ITIL-compliant system, using a software application which has knowledge of what is required at each step and controls the workflow and approval process. However, the CMDB / service desk rarely, if ever, can also process the actual changes to the devices. This is typically a manual process or (in the best case) is automated with an NCCM system. So, in order to coordinate that flow of activity, it is absolutely essential to have the CMDB be the “keeper” of the process and then “activate” the NCCM solution when it is time to make the changes to the hardware. The NCCM system should then be able to inform the CMDB that the activity was performed, and also report back any potential issues or errors that may have occurred.
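
That handshake might look something like the sketch below. The endpoint paths and field names are invented for illustration; every CMDB and NCCM product exposes its own API, so treat this as a shape rather than a recipe.

  import requests

  CMDB_API = "https://cmdb.example.com/api"  # hypothetical endpoints
  NCCM_API = "https://nccm.example.com/api"

  def provision_device(ticket):
      # The CMDB owns the process: it hands the approved change to the NCCM
      # system, which pushes the configuration to the hardware...
      result = requests.post(
          f"{NCCM_API}/jobs/deploy-config",
          json={"host": ticket["device"], "config_id": ticket["config_id"]},
          timeout=30,
      ).json()
      # ...and the NCCM system reports the outcome back so the CMDB record
      # reflects what actually happened, including any errors.
      requests.post(
          f"{CMDB_API}/changes/{ticket['id']}/status",
          json={"state": "completed" if result.get("ok") else "failed",
                "detail": result.get("message", "")},
          timeout=30,
      )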

2. In Production / Changes

Once a device has been placed into production, at some point there will come a time when the device needs to have changes made to its hardware, software, or configuration. Once again, the change control process should be managed through the CMDB / service desk. It is critical that, as this process begins, the CMDB has been kept up to date with the current asset information. That way there are no “surprises” when it comes time to implement the changes. This goes back to having a standard re-discovery process which is performed on a known schedule by the NCCM system. We have found that most networks require a full rediscovery about once per week to be kept up to date, but we have also worked with clients that adjust this frequency up or down as necessary.

Just as in the initial deployment stage, it is the job of the NCCM system to inform the CMDB as to the state of the configuration job including any problems that might have been encountered. In some cases it is prudent to have the NCCM system automatically retry any failed job at least once prior to reporting the failure.
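
That retry-then-report behaviour is simple to express; a sketch, with the actual job execution and CMDB update left as placeholders:

  def run_config_job(job):
      """Placeholder for the NCCM push; assumed to raise on failure."""

  def report_to_cmdb(job, state, detail=""):
      """Placeholder for the CMDB status update."""
      print(f"{job}: {state} {detail}")

  def run_with_retry(job, retries=1):
      last_error = None
      for _ in range(retries + 1):  # initial attempt plus one retry
          try:
              run_config_job(job)
              report_to_cmdb(job, state="completed")
              return True
          except Exception as err:
              last_error = err
      report_to_cmdb(job, state="failed", detail=str(last_error))
      return False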

3. Decommissioning / Removal

When the time has come for a device to be removed from production and/or decommissioned, the same type of process should be followed as when it was initially provisioned (but in reverse). If the device is being replaced by a newer system, then part of (or potentially the whole) configuration may simply be moved to the new hardware. This is where the NCCM system’s backup process comes into play. As per all NCCM best practices, there should be a regular schedule of backups to make sure the configuration is known and accessible in case of emergency.

Once the device has been physically removed from the network, it must also either be fully removed from the CMDB or, at the very least, be tagged as decommissioned. This has many benefits, including preventing the accidental purchase of support and maintenance for a device which is no longer in service, as well as preventing the NCCM system from attempting to perform discovery or configuration jobs on the device in the future (which would create failures).

NCCM systems and CMDBs really work hand in hand to help manage the complete lifecycle of an IT asset. While it may be possible to accurately maintain two non-connected systems, the time and effort involved, not to mention the much greater potential for error, make the integration of your CMDB and NCCM tools a virtual necessity for large modern IT networks.

Top 20 Best Practices for NCCM
Thanks to NMSaaS for the article.

Infosim® Announces Release of StableNet® 7.5

Infosim®, the technology leader in automated Service Fulfillment and Service Assurance solutions, today announced the release of version 7.5 of its award-winning software suite StableNet® for Telco and Enterprise customers.

StableNet® 7.5 provides a significant number of powerful new features, including:

  • Dynamic Rule Generation (DRG), a new and revolutionary Fault Management concept
  • REST interface supporting the new StableNet® iPhone (and upcoming Android) app
  • Highly customizable dashboard in both the GUI and Web Portal
  • Enabling integration with SDN/NFV element managers
  • NCCM structurer enabling creation of optimized and well-formatted device configurations
  • New High-Availability (HA) infrastructure based on Linux HA technology
  • Syslog & Trap Forwarding enabling integration of legacy systems that rely on their original trap & syslog data
  • Open Alarms GeoMap enabling geographical representation of open alarms

StableNet® version 7.5 is available for purchase now. Customers with current maintenance contracts may upgrade free of charge as per the terms and conditions of their contract.

Supporting Quotes:

Jim Duster, CEO, Infosim®, Inc.

“We are happy that our newest release is again full of innovative features like DRG. Our customers are stating that this new DRG feature will help them achieve a faster ROI by improving automation in their fault management area and dramatically increasing the speed of Root-Cause Analysis.”

Download the release notes here

Thanks to Infosim for the article.

Infosim® Global Webinar Day October 29th, 2015 StableNet® 7.5 – What’s New?


Join Harald Hoehn, Senior Developer and Consultant with Infosim®, for a Webinar and Live Demo on the latest information regarding StableNet® 7.5

This Webinar will provide insight into:

StableNet® 7.5 New Features such as:

  • New Web Portal [Live Demo]
  • New Alarm Dashboard [Live Demo]
  • New NCCM Structurer [Live Demo]
  • DRG (Dynamic Rule Generation) as a new and revolutionary Fault Management concept

StableNet® 7.5 Enhancements such as:

  • Enhanced Weather Maps [Live Demo]
  • Improved Trap- and Syslog-Forwarding [Live Demo]
  • Advanced Netflow Features [Live Demo]
  • Enhanced Support for SDN

But wait – there is more! We are giving away three Amazon Gift Cards (value $50) on this Global Webinar Day. To join the draw, simply answer the trivia question that will be part of the questionnaire at the end of the Webinar. Good Luck!

(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

How Can We Monitor Traffic Associated with Remote Sites?

Many IT teams are now tasked with managing remote sites without the luxury of local IT support. Business owners expect everything to be done remotely; we do live in the connected age, don’t we? Is it possible to see what is happening in these networks without installing client or agent software everywhere?

You can gather some network performance information using SNMP or WMI, but you will be limited to alerts or high-level information. What you need is some form of deeper traffic analysis. Software applications that do traffic analysis are ideal for troubleshooting LAN and link problems associated with remote sites.

There are two main technologies available to analyze network traffic associated with remote sites: those that do flow analysis and those that capture network packets. Flow statistics are typically available from devices that can route data between two networks; most Cisco routers support NetFlow, for example. If your remote networks are flat (single subnet), or you don’t have flow options on your network switches, then packet capture is a viable option.

You can implement packet capture by connecting a traffic analysis system to a SPAN or mirror port on a network switch at your remote site. You can then log onto your traffic analysis system remotely to see what is happening within these networks.
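
To make that concrete, here is a minimal “top talkers” sketch of what such a system does with the mirrored packets, again assuming the scapy library and an interface (eth1 here) cabled to the SPAN port:

  from collections import Counter
  from scapy.all import sniff, IP

  talkers = Counter()

  def tally(pkt):
      # Credit each packet's size to its source address.
      if pkt.haslayer(IP):
          talkers[pkt[IP].src] += len(pkt)

  sniff(iface="eth1", prn=tally, store=False, timeout=60)  # one-minute sample

  for src, nbytes in talkers.most_common(10):
      print(f"{src}: {nbytes} bytes")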


NetFort LANGuardian has multiple means of capturing data associated with remote sites. The most popular option is to install an instance of the LANGuardian software at your HQ. Sensors can then be deployed on physical or virtual platforms at important remote sites. Data from these is stored centrally, so you get a single reference point for all traffic and security information across local and remote networks.

LANGuardian can also capture flow-based statistics such as NetFlow, IPFIX, and sFlow; routers and switches at the remote sites can be configured to send flow records to LANGuardian. Watch out for issues associated with NetFlow, as it has limitations when it comes to monitoring cloud computing applications.
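
For the flow option, the collector side is just a UDP listener. The sketch below decodes only the 24-byte NetFlow v5 header to show exports arriving; a real collector such as LANGuardian also parses the flow records that follow.

  import socket
  import struct

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("0.0.0.0", 2055))  # 2055 is a conventional NetFlow export port

  while True:
      data, (exporter, _) = sock.recvfrom(8192)
      if len(data) >= 24:  # NetFlow v5 header is 24 bytes
          version, count = struct.unpack("!HH", data[:4])
          print(f"{exporter}: NetFlow v{version} datagram, {count} flow records")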

Download White Paper

How to monitor WAN connections with NetFort LANGuardian

Download this whitepaper which explains in detail how you can monitor WAN connections with NetFort LANGuardian


How To Find Bandwidth Hogs

Thanks to NetFort for the article.

Webinar- Best Practices for NCCM


Most networks today have a “traditional” IT monitoring solution in place which provides alarming for devices, servers, and applications. But as the network evolves, so do its complexity and security risks, and it now makes sense to formalize the processes, procedures, and policies that govern access and changes to these devices. Vulnerability and lifecycle management also play an important role in maintaining the security and integrity of network infrastructure.

Network Configuration and Change Management (NCCM) is the “third leg” of IT management, with traditional Performance and Fault Management (PM and FM) being one and two. The focus of NCCM is to ensure that as the network grows, there are policies and procedures in place to ensure proper governance and eliminate preventable outages.

Eliminating misapplied configurations can cut network performance and security issues from 90% down to 10%.

Learn about the best practices for Network Configuration and Change Management to both protect and report on your critical network device configurations:

  1. Enabling of Real-Time Configuration Change Detection
  2. Service Design Rules Policy
  3. Auto-Discovery Configuration Backup
  4. Regulatory Compliance Policy
  5. Vendor Default and Security Access Policies
  6. Vulnerability Optimization and Lifecycle Announcements

Date: October 28th
Time: 2:00pm Eastern


Register for webinar NOW: http://hubs.ly/H01gB720

5 Reasons Why You Should Include LAN Switches in Your NCCM Scope

We’ve been doing a lot of blogging around here lately about NCCM and the importance of having an automated configuration and change management system. We’ve even published a Best practices guide for NCCM. One of the main points in any NCCM system is having consistent and accurate configuration backups of all of your “key” devices.

When I ask network managers to name their key devices, they generally start with WAN / Internet routers and firewalls. This makes sense, of course, because in a modern large-scale network, connectivity (WAN / Internet routers) and security (firewalls) tend to get most of the attention. However, we think it’s important not to overlook the core and access switching layers. After all, without that “front line” connectivity, the internal user cannot get out to the WAN/Internet in the first place.

With that in mind, today’s blog offers up 5 reasons why you should include LAN switches in your NCCM scope.


1. Switch Failure

LAN switches tend to be some of the most utilized devices in a network. They also don’t generally come with the top-quality hardware and redundant power supplies that core devices have. In many cases, they may also be located in less than pristine locations: dirty manufacturing floors, dormitory closets, remote office kitchens; I have seen access switches in all of these places. When you combine a heavy workload with tough conditions and less expensive parts, you have a recipe for devices that will fail at a higher rate.

So, when the time comes to replace or upgrade a switch, having its configuration backed up, and a system which can automate the provisioning of the new device, can be a real time and workload saver. Just put the IP address and some basic management information on the new device, and the NCCM tool should be able to take care of the rest in mere minutes.

2. User Tracking

As the front line connectivity device for the majority of LAN users, the switch is the best place to track down user connections. You may want to know where a particular user is located, or maybe you are trying to troubleshoot an application performance issue; whatever the cause, it’s important to have that connectivity data available to the IT department. NCCM systems may use layer 2 management data from CDP/LLDP, as well as other techniques, to gather this information. A good system will allow you to search for a particular IP/MAC/DNS name and return connectivity information such as which device/port it is connected to, as well as when it was first and last seen on that port. This data can also be used to draw live topology maps, which offer a great visualization of the network.
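
A crude version of that lookup can be had by parsing the switch’s own forwarding table. The sketch below works on Cisco-style “show mac address-table” output; column layouts differ between vendors, so the sample text and regex are illustrative:

  import re

  SAMPLE_OUTPUT = """
  Vlan    Mac Address       Type        Ports
  ----    -----------       --------    -----
    10    0011.2233.4455    DYNAMIC     Gi1/0/12
    10    0011.2233.4466    DYNAMIC     Gi1/0/15
  """

  ENTRY = re.compile(r"^\s*(\d+)\s+([0-9a-f.]+)\s+\S+\s+(\S+)",
                     re.IGNORECASE | re.MULTILINE)

  # Build MAC -> (VLAN, port) from the table text.
  mac_to_port = {mac: (vlan, port)
                 for vlan, mac, port in ENTRY.findall(SAMPLE_OUTPUT)}
  print(mac_to_port.get("0011.2233.4455"))  # -> ('10', 'Gi1/0/12')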

3. Policy Checking

Another area where the focus tends to be on “gateway” devices such as WAN routers and firewalls is policy checking. While those devices certainly should have lots of attention paid to them, especially in the area of security policies, we believe it is equally important not to neglect the access layer when it comes to compliance. In general terms, there are two aspects of policy checking which need to be addressed on these devices: QoS policies and regulatory compliance policies.

The vast majority of VoIP and Video systems will connect to the network via a traditional LAN switch. These switches, therefore, must have the correct VLAN and QoS configurations in order to accurately forward the traffic in the appropriate manner so that Quality of Service is maintained.

If your organization is subject to regulatory compliance standards such as PCI or HIPAA, then these regulations are applicable to all devices and systems that are connected to, or pass, sensitive data.

In both of these cases, it is incredibly important to ensure policy compliance on all of your devices, even the ones on the “edge” of your network.
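
Automating those checks is straightforward once configurations are backed up centrally. A sketch, with example policy lines (your real required and forbidden lines come from your QoS design and whichever compliance standard applies):

  # Example policy lines only; substitute your own QoS/compliance rules.
  REQUIRED = ["mls qos", "service password-encryption"]
  FORBIDDEN = ["transport input telnet"]

  def check_policy(config_text, name):
      violations = [f"missing: {line}" for line in REQUIRED
                    if line not in config_text]
      violations += [f"forbidden: {line}" for line in FORBIDDEN
                     if line in config_text]
      print(f"{name}: {'PASS' if not violations else 'FAIL'} {violations}")

  # Assumes the NCCM backups are on disk as plain text.
  with open("switch01.cfg") as cfg:
      check_policy(cfg.read(), "switch01")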

4. Asset Lifecycle Management

Especially in larger and more spread-out organizations, just understanding what you have can be a challenge. At some point (and always when you are least prepared for it) you will get the “What do we have?” question from a manager. An NCCM system is exactly the right tool to answer this question. Even though NCCM is generally considered to be the tool for change, it is equally the tool for information. Only devices that are well documented can be managed, and that documentation is best supplied through the use of an automated inventory discovery system. Likewise, when it is time for a technology refresh, or even the build-out of a new location or network, understanding the current state of the existing network is the first step towards building an effective plan for the future.

5. New Service Initiatives

Whether you are a large IT shop or a service provider, new applications and services are always coming. In many cases, they will require widespread changes to the infrastructure. The change may be small or large, but if it needs to be implemented on a number of systems at the same time, it will require coordination and automation to get it done efficiently and successfully. In some instances, this will only require changes to the core, but in many cases it will also require changes to the switch infrastructure. This is what NCCM tools were designed to do, and there is no reason you should be handcuffed in your efforts to implement change just because you haven’t added all of your devices into the NCCM platform.

Networks are complicated systems of many individual components spread throughout various locations, with interdependencies that can be hard to comprehend without the help of network management tools. While the temptation may be to focus on the core systems, we think it is critical to view all parts, even the underappreciated LAN switch, as equal pieces of the puzzle that should not be overlooked when implementing an NCCM system.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.

5 Perks of Network Performance Management

Network performance management is something that virtually every business needs, but not something that every business is actively doing, or is even aware of. And why should they be?

While understanding the technical side of things is best left to the IT department, understanding the benefits of a properly managed network is something that will help get the business managers on board, especially when good performance management solutions might be a cost that hadn’t been considered. So what are the benefits?
1. Avoiding downtime – Downtime across an entire network is going to be rare, but downtime in small areas of the network is possible if it gets overloaded. Downtime of any kind is just not something that businesses can tolerate, for a few reasons:

  • it leaves that area of the network unmonitored, which is a serious security issue
  • shared files won’t be accessible, nor will they be updated as users save them. This will lead to multiple versions of the same file, and quite a few headaches when the network is accessible again
  • downtime that affects customers is even worse, and can result in lost revenue or negative customer experiences

2. Network speed – This is one of the most important and most easily quantified aspects of managing network flows. It affects every user on the network constantly, and anything that slows down users means either more work hours or delays. Obviously, neither of these is a good problem to have. Whether it’s uploading a file, sending a file to a coworker, or sending a file to a client, speed is of paramount importance.

3. Scalability – Almost every business wants to grow, and nowhere is that more true than in the tech sector. As the business grows, the network will have to grow with it to support more employees and clients. By managing the performance of the network, it is very easy to see when or where it is being stretched too thin or overwhelmed. As performance degrades, it’s very easy to set thresholds that show when the network needs to be upgraded or expanded.
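
A threshold check of that kind needs nothing more than recent utilization samples per link. A toy sketch with invented numbers:

  # link -> recent utilization samples (%), e.g. from SNMP interface counters
  samples = {
      "wan-uplink": [62, 71, 68, 75, 80],
      "lan-core":   [21, 19, 25, 22, 20],
  }
  THRESHOLD = 70  # average % that should trigger a capacity review

  for link, history in samples.items():
      avg = sum(history) / len(history)
      if avg > THRESHOLD:
          print(f"{link}: average {avg:.0f}% > {THRESHOLD}%, plan an upgrade")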

4. Security – Arguably the most important aspect of network management, even though it might not be thought of as a performance aspect. An unsecured network is worse than a useless network, and data breaches can ruin a company. So how does this play into performance management?

By monitoring netflow performance, it’s easy to see where the most resources are being used. Many security attacks drain resources, so resource spikes in unusual areas can point to a security flaw. With the proper software, these issues can be not only monitored, but also recorded and corrected.

5. Usability – Unfortunately, not all employees have a working knowledge of how networks operate. In fact, as many in IT support will attest, most employees aren’t tech savvy. However, most employees will need to use the network as part of their daily work. This conflict is why usability is so important. The easiest way to minimize training costs with any network management program is to ensure that it is as user-friendly as possible.

The fanciest, most impressive network performance management system isn’t worth anything if no one knows how to use and optimize it properly. Even if the IT department has no issues with it, the reports and general information should be as easy to decipher as possible.

Is your network as optimized as it could be? Are you able to monitor the network’s performance and flows, or run network forensics to determine where issues are? Don’t try to tackle all of this on your own; contact us and let us help you support your business with the best network monitoring for your specific needs.


Thanks to NetFlow Auditor for the article.

Key Factors in NCCM and CMDB Integration – Part 1 Discovery

Part I Discovery

“I am a rock, I am an island…” These lyrics by Simon and Garfunkel pretty appropriately summarize what most IT companies would like you to believe about their products: they are islands that stand alone and don’t need any other products to be useful. Well, despite what they want, the truth is closer to the lyrics by the Rolling Stones: “We all need someone we can lean on”. Music history aside, the fact is that interoperability and integration are among the most important keys to a successful IT Operations Management system. Why? Because no product truly does it all; and, when done correctly, the whole can be greater than the sum of the individual parts. Let’s take a look at the most common IT asset management structure and investigate the key factors in NCCM and CMDB integration.

Step 1. Discovery. The heart of any IT operations management system is a database of the assets being managed. This database is commonly referred to as the Configuration Management Database, or CMDB. The CMDB contains all of the important details about the components of an IT system and the relationships between them. This includes information about the components of an asset, such as physical parts and operating systems, as well as upstream and downstream dependencies. A typical item in a CMDB may have hundreds of individual pieces of information stored about it. A fully populated and up-to-date CMDB is an extremely useful data warehouse. But that begs the question: how does a CMDB get to be fully populated in the first place?

That’s where discovery software comes in. Inventory discovery systems can automatically gather these critical pieces of asset information directly from the devices themselves. Most hardware and software vendors have built-in ways of “pulling” that data from the device. Network systems mainly use SNMP. Windows servers can use SNMP as well as the Microsoft proprietary WMI protocol. Other vendors, like VMware, also have an API that can be accessed to gather this data. Once the data has been gathered, the discovery system should be able to transfer it to the CMDB. It may be a “push” from the discovery system to the CMDB, or a “pull” going the other way, but there should always be a means of transfer; especially when the primary “alternative” ways of populating the CMDB are manually entering the data (sounds like fun) or uploading spreadsheet CSV files (but how do they get populated?).
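
As a small taste of the “pull” side, the sketch below reads two standard MIB-II objects over SNMP and shapes them into a record a CMDB could import. It assumes the pysnmp library and an SNMP v2c community string; production discovery walks far more of the MIB.

  from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                            ContextData, ObjectType, ObjectIdentity, getCmd)

  def discover(host, community="public"):
      # sysName.0 and sysDescr.0 from the standard MIB-II system group.
      oids = {"sysName": "1.3.6.1.2.1.1.5.0", "sysDescr": "1.3.6.1.2.1.1.1.0"}
      record = {"ip": host}
      for field, oid in oids.items():
          err, status, _, var_binds = next(getCmd(
              SnmpEngine(), CommunityData(community),
              UdpTransportTarget((host, 161), timeout=2, retries=1),
              ContextData(), ObjectType(ObjectIdentity(oid))))
          if not err and not status:
              record[field] = str(var_binds[0][1])
      return record

  print(discover("10.0.0.1"))  # hypothetical device address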

Step 2. Updating. Once the CMDB is populated and running, you are done with the discovery software, right? Um, wrong. Unless your network never changes (please email me if that is the case, because I’d love to talk to you), you need to constantly update the CMDB. In fact, in many organizations the CMDB has a place in it for pre-deployment, meaning that new systems which are to come online soon are entered into the CMDB. The good news is that the discovery system should be able to get that information out of the CMDB and then use it as the basis for a future discovery run, which in turn adds details about the device back to the CMDB, and so on. When implemented properly and working well, this cyclical operation really can save enormous amounts of time and effort.

In the next post in this series, I’ll explore how having an up to date asset system makes other aspects of NCCM like Backup, Configuration, and Policy Checking much easier.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.