What is Driving Demand for Deeper Traffic Analysis?


During a customer review call last week, we got a very interesting quote from a US-based user who offers marketing services to the retail sector: ‘We need greater insight into what is taking place on our internal network, systems, services, and external web farm, seen through a single portal. We need to keep downtime to a minimum, both internally and on our external customer-facing web farm. We chose LANGuardian because of its integration with SolarWinds and its deep packet inspection capabilities.’

Before discussing this in more detail: given all the hype these days, we always ask about the cloud now. When we asked this contact about hosting these critical services in the cloud, he countered with three reasons for keeping them in house:

  1. Security
  2. Control
  3. Cost

When pressed on ‘cost’, he mentioned that they were shipping huge amounts of data; if they hosted and stored it in the cloud, the bandwidth and storage charges would be huge and would not make economic sense.

Back to deeper traffic analysis: it turns out this customer had already purchased and installed a NetFlow-based product to try to get more visibility into his critical server farm, his external, public-facing environment. His business requires him to be proactive, to keep downtime to a minimum and keep his customers happy. But, as they also mentioned to us: ‘With NetFlow we almost get to the answer, and then sometimes we have to break out another tool like Wireshark. Now with NetFort DPI (Deep Packet Inspection) we get the detail NetFlow does NOT provide: true endpoint visibility.’

What detail? What detail did this team use to justify the purchase of another monitoring product to management? I bet it was not as simple as ‘I need more detail and visibility into traffic, please sign this’! We know that with tools like Wireshark one can get down to a very low level of detail, down to the ‘bits and bytes’. But sometimes that is too low: far too much detail, overly complex for some people, and it becomes very difficult to see the wood for the trees and get the big picture.

One critical capability we at NetFort sometimes take for granted is the level of insight our DPI can provide into web or external traffic. It does not matter whether the traffic goes via a CDN or a proxy; with deep packet inspection one can look deeper to get the detail required. Users can capture and keep every domain name, even URI and IP address AND, critically, the amount of data transferred, tying the IP address and URI to bandwidth. As a result, this particular customer is now able to monitor usage of every single resource or service they offer: who is accessing that URI, service or piece of data, when, how often, and how much bandwidth the customer accessing that resource is consuming.
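To make that concrete, here is a minimal sketch of tying each URI and client IP to bandwidth consumed. The records and field names are invented for illustration, not LANGuardian's actual export format:

```python
from collections import defaultdict

# Hypothetical DPI-extracted web records: each entry ties a client IP and
# the requested host/URI to the bytes transferred.
records = [
    {"client": "10.0.0.5", "host": "cdn.example.com", "uri": "/assets/app.js", "bytes": 120_000},
    {"client": "10.0.0.5", "host": "cdn.example.com", "uri": "/assets/app.js", "bytes": 80_000},
    {"client": "10.0.0.9", "host": "api.example.com", "uri": "/v1/orders", "bytes": 15_000},
]

def bandwidth_by_resource(records):
    """Sum bytes transferred per (host, uri, client) combination."""
    totals = defaultdict(int)
    for r in records:
        totals[(r["host"], r["uri"], r["client"])] += r["bytes"]
    return dict(totals)

usage = bandwidth_by_resource(records)
for (host, uri, client), nbytes in sorted(usage.items(), key=lambda kv: -kv[1]):
    print(f"{host}{uri} <- {client}: {nbytes} bytes")
```

With the URI and client IP kept alongside the byte counts, "who is pulling how much from which resource" becomes a simple aggregation rather than a packet-by-packet hunt.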

Users can also trend this information to help detect unusual activity or assist with capacity planning. This customer also mentioned that with deeper traffic analysis they were able to take a group of servers each week and really analyze usage: find the busiest server, the least busy, the top users, who was using up their bandwidth and what they were accessing. They get to the right level of detail, the evidence required to make informed decisions and plan.
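The weekly server analysis described above boils down to simple aggregation over flow summaries. A small sketch, with invented data:

```python
from collections import Counter

# Illustrative (server, user, bytes) summaries for one week.
flows = [
    ("web01", "alice", 500_000),
    ("web01", "bob", 250_000),
    ("web02", "carol", 120_000),
    ("web02", "alice", 40_000),
]

per_server = Counter()
per_user = Counter()
for server, user, nbytes in flows:
    per_server[server] += nbytes
    per_user[user] += nbytes

busiest, busiest_bytes = per_server.most_common(1)[0]
least_busy = min(per_server, key=per_server.get)
top_user, top_user_bytes = per_user.most_common(1)[0]
print(f"busiest: {busiest}, least busy: {least_busy}, top user: {top_user}")
```

Run weekly and stored, those few numbers per server become the trend line that flags unusual activity and feeds capacity planning.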

CDN (Content Delivery Network) usage has increased dramatically in recent years, making life very difficult for network administrators trying to keep tabs on bandwidth usage and generate meaningful reports. We had a customer recently who powered up a bunch of servers and saw a huge peak in bandwidth consumption. With NetFlow, the domain reported was an obscure CDN and meant nothing. LANGuardian reported huge downloads of data from windowsupdate.com from a particular IP address, and also reported the user name.

What was that about justification? How about simply greater insight to reduce downtime, maximise utilisation, increase performance and reduce costs? All this means happier customers, less stress for the network team and more money for everybody!

Thanks to NetFort for the article.


ThreatARMOR Reduces Your Network’s Attack Surface


2014 saw the creation of more than 317 million new pieces of malware. That means an average of nearly one million new threats were released each day.

Here at Ixia we’ve been collecting and organizing threat intelligence data for years to help test the industry’s top network security products. Our Application and Threat Intelligence (ATI) research center maintains one of the most comprehensive lists of malware, botnets, and network incursions for exactly this purpose. We’ve had many requests to leverage that data in support of enterprise security, and this week you are seeing the first product that uses ATI to boost the performance of existing security systems. Ixia’s ThreatARMOR continuously taps into the ATI research center’s list of bad IP sources around the world and blocks them.

Ixia’s ThreatARMOR represents another innovation and an extension of the company’s Visibility Architecture, reducing the ever-increasing size of customers’ global network attack surface.

A network attack surface is the sum of every access avenue an individual can use to gain access to an enterprise network. The expanding enterprise security perimeter must address new classes of attack, advancing breeds of hackers, and an evolving regulatory landscape.

“What’s killing security is not technology, it’s operations,” stated Jon Oltsik, ESG senior principal analyst and the founder of the firm’s cybersecurity service. “Companies are looking for ways to reduce their overall operations requirements and need easy to use, high performance solutions, like ThreatARMOR, to help them do that.”

Spending on IT security is poised to grow tenfold in ten years. Enterprise security tools inspect all traffic, including traffic that shouldn’t be on the network in the first place: traffic from known malicious IPs, hijacked IPs, and unassigned or unused IP space. These devices, while needed, create more work than a security team can possibly handle. False positives consume an inordinate amount of time and resources: enterprises spend approximately 21,000 hours per year on average dealing with false-positive cyber security alerts, per a Ponemon Institute report published in January 2015. You need to reduce the attack surface in order to focus only on the traffic that actually needs to be inspected.
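The idea of dropping known-bad traffic before it ever reaches the inspection tools can be sketched in a few lines. The feed and packet records below are made-up stand-ins, not Ixia's actual data or API:

```python
# Hypothetical threat-intelligence feed of known-bad source IPs.
known_bad = {"203.0.113.7", "198.51.100.23"}

# Simplified inbound traffic: one dict per packet.
packets = [
    {"src": "203.0.113.7", "payload": "..."},
    {"src": "192.0.2.10", "payload": "..."},
    {"src": "198.51.100.23", "payload": "..."},
]

def prefilter(packets, blocklist):
    """Drop traffic from blocklisted IPs so downstream tools never see it."""
    allowed = [p for p in packets if p["src"] not in blocklist]
    dropped = len(packets) - len(allowed)
    return allowed, dropped

allowed, dropped = prefilter(packets, known_bad)
print(f"inspected {len(allowed)}, blocked {dropped} before inspection")
```

Every packet dropped at this stage is a security event that is never generated, which is exactly how pre-filtering shrinks the alert queue.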

“ThreatARMOR delivers a new level of visibility and security by blocking unwanted traffic before many of these unnecessary security events are ever generated. And its protection is always up to date thanks to our Application and Threat Intelligence (ATI) program,” said Dennis Cox, Chief Product Officer at Ixia.

“The ATI program develops the threat intelligence for ThreatARMOR and a detailed ‘Rap Sheet’ that provides proof of malicious activity for all blocked IP addresses, supported with on-screen evidence of the activity such as malware distribution or phishing, including date of the most recent confirmation and screen shots.”

ThreatARMOR: your new front line of defense!

Additional Resources:

ThreatARMOR

Thanks to Ixia for the article.

Don’t Be Lulled to Sleep with a Security Fable. . .

Once upon a time, all you needed was a firewall to call yourself “secure.” But then, things changed. More networks are created every day, every network is visible to the others, and they connect with each other all the time—no matter how far away or how unrelated.

And malicious threats have taken notice . . .

As the Internet got bigger, anonymity got smaller. It’s impossible to go “unnoticed” on the Internet now. Everybody is a target.

In today’s network landscape, every network is under threat of attack all the time. The network “security perimeter” has expanded in reaction to new attacks, new breeds of hackers, more regions coming online, and emerging regulations.

Security innovation tracks threat innovation by creating more protection—but this comes with more complexity, more maintenance, and more to manage. Security investment rises with expanding requirements. Just a firewall doesn’t nearly cut it anymore.

Next-generation firewalls, IPS/IDS, antivirus software, SIEM, sandboxing, DPI: all of these tools have become part of the security perimeter in an effort to stop traffic from getting in (and out) of your network. And they are overloaded, and overloading your security teams.

In 2014, there were close to 42.8 million cyberattacks (roughly 117,339 attacks each day) in the United States alone. These days, the average North American enterprise fields around 10,000 alerts each day from its security systems—far more than its IT team can possibly process—according to a Damballa analysis of traffic.

Your network’s current attack surface is huge. It is the sum of every access avenue an attacker could use to enter your network (or take data out of your network). Basically, every connection to and/or from anywhere.

There are two types of traffic that hit every network: The traffic worth analyzing for threats, and the traffic not worth analyzing for threats that should be blocked immediately before any security resource is wasted inspecting or following up on it.

Any way to filter out traffic that is either known to be good or known to be bad, and doesn’t need to go through the security system screening, reduces the load on your security staff. With a reduced attack surface, your security resources can focus on a much tighter band of information, and not get distracted by non-threatening (or obviously threatening) noise.

Thanks to Ixia for the article.

5 Reasons Why You Should Include LAN Switches in Your NCCM Scope

We’ve been doing a lot of blogging around here lately about NCCM and the importance of having an automated configuration and change management system. We’ve even published a Best practices guide for NCCM. One of the main points in any NCCM system is having consistent and accurate configuration backups of all of your “key” devices.

When I ask Network Managers to name their key devices, they generally start with WAN / Internet routers and Firewalls. This makes sense of course because, in a modern large-scale network, connectivity (WAN / Internet routers) & security (Firewalls) tend to get most of the attention. However, we think that it’s important not to overlook core and access switching layers. After all, without that “front line” connectivity – the internal user cannot get out to the WAN/Internet in the first place.

With that in mind, today’s blog offers up 5 Reasons Why You Should Include LAN Switches in Your NCCM Scope


1. Switch Failure

LAN switches tend to be some of the most heavily utilized devices in a network. They also don’t generally come with the top-quality hardware and redundant power supplies that core devices have. In many cases, they may also be located in less than pristine locations. Dirty manufacturing floors, dormitory closets, remote office kitchens – I have seen access switches in all of these places. When you combine a heavy workload with tough conditions and less expensive parts, you have a recipe for devices that will fail at a higher rate.

So, when the time comes to replace or upgrade a switch, having its configuration backed up, and a system which can automate the provisioning of the new device, can be a real time and workload saver. Just put the IP address and some basic management information on the new device and the NCCM tool should be able to take care of the rest in mere minutes.
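A sketch of the restore step, assuming a hypothetical backup store keyed by device name (a real NCCM tool would then push the selected config to the device over SSH or SNMP; here we only pick the most recent backup to stage):

```python
import datetime as dt

# Hypothetical backup store: device name -> list of (timestamp, config text).
backups = {
    "access-sw-12": [
        (dt.datetime(2015, 9, 1), "hostname access-sw-12\nvlan 10\n..."),
        (dt.datetime(2015, 10, 3), "hostname access-sw-12\nvlan 10,20\n..."),
    ],
}

def latest_backup(device):
    """Return the newest (timestamp, config) pair for a device."""
    ts, config = max(backups[device], key=lambda b: b[0])
    return ts, config

ts, config = latest_backup("access-sw-12")
print(f"restoring {ts:%Y-%m-%d} config:\n{config}")
```

Because the backups are timestamped, the replacement switch always gets the last-known-good configuration rather than a stale one.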

2. User Tracking

As the front line connectivity device for the majority of LAN users, the switch is the best place to track down user connections. You may want to know where a particular user is located, or maybe you are trying to troubleshoot an application performance issue; no matter what the cause, it’s important to have that connectivity data available to the IT department. NCCM systems may use layer 2 management data from CDP/LLDP as well as other techniques to gather this information. A good system will allow you to search for a particular IP/MAC/DNS and return connectivity information like which device/port it is connected to as well as when it was first and last seen on that port. This data can also be used to draw live topology maps which offer a great visualization of the network.
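A search like that amounts to a lookup over the layer-2 tracking table the NCCM system builds from CDP/LLDP and MAC-table polling. A minimal sketch, with invented field names:

```python
# Illustrative endpoint-tracking table; a real system would populate this
# by polling switches for their MAC and neighbor tables.
entries = [
    {"mac": "00:1a:2b:3c:4d:5e", "ip": "10.1.4.17", "dns": "alice-pc",
     "switch": "access-sw-12", "port": "Gi0/14",
     "first_seen": "2015-08-02", "last_seen": "2015-11-05"},
]

def find_endpoint(query):
    """Return entries matching an IP, MAC, or DNS name."""
    return [e for e in entries if query in (e["mac"], e["ip"], e["dns"])]

for e in find_endpoint("10.1.4.17"):
    print(f'{e["dns"]} is on {e["switch"]} {e["port"]} '
          f'(seen {e["first_seen"]} to {e["last_seen"]})')
```

The same table, joined across switches, is what drives the live topology maps mentioned above.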

3. Policy Checking

Another area where the focus tends to be on “gateway” devices such as WAN routers and firewalls is policy checking. While those devices certainly should have lots of attention paid to them, especially in the area of security policies, we believe that it’s equally as important not to neglect the access layer when it comes to compliance. In general terms, there are two aspects of policy checking which need to be addressed on these devices: QoS policies and regulatory compliance policies.

The vast majority of VoIP and Video systems will connect to the network via a traditional LAN switch. These switches, therefore, must have the correct VLAN and QoS configurations in order to accurately forward the traffic in the appropriate manner so that Quality of Service is maintained.

If your organization is subject to regulatory compliance standards such as PCI or HIPAA, then these regulations are applicable to all devices and systems that connect to, or pass, sensitive data.

In both of these cases, it is incredibly important to ensure policy compliance on all of your devices, even the ones on the “edge” of your network.
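A basic configuration-compliance check can be sketched as a scan for required lines. The required lines below are illustrative placeholders, not a real PCI or HIPAA rule set:

```python
# Policy: every access switch must carry these config lines (illustrative).
required_lines = ["mls qos", "vlan 100", "vlan 200"]

# Config backups to audit (abbreviated sample text).
configs = {
    "access-sw-01": "hostname access-sw-01\nmls qos\nvlan 100\nvlan 200\n",
    "access-sw-02": "hostname access-sw-02\nvlan 100\n",
}

def check_compliance(configs, required):
    """Map each device to the list of required lines it is missing."""
    report = {}
    for device, text in configs.items():
        report[device] = [line for line in required if line not in text]
    return report

for device, missing in check_compliance(configs, required_lines).items():
    status = "OK" if not missing else f"missing: {', '.join(missing)}"
    print(f"{device}: {status}")
```

Run against every backed-up config on a schedule, a check like this surfaces edge-device policy drift long before an audit does.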

4. Asset Lifecycle Management

Especially in larger and more spread out organizations, just understanding what you have can be a challenge. At some point (and always when you are least prepared for it) you will get the “What do we have?” question from a manager. An NCCM system is exactly the right tool to use to answer this question. Even though NCCM is generally considered to be the tool for change – it is equally the tool for information. Only devices that are well documented can be managed and that documentation is best supplied through the use of an automated inventory discovery system. Likewise, when it is time for a technology refresh, or even the build out of a new location or network, understanding the current state of the existing network is the first step towards building an effective plan for the future.

5. New Service Initiatives

Whether you are a large IT shop or a service provider – new applications and services are always coming. In many cases, that will require widespread changes to the infrastructure. The change may be small or large, but if it needs to be implemented on a number of systems at the same time, it will require coordination and automation to get it done efficiently and successfully. In some instances, this will only require changes to the core, but in many cases it will require changes to the switch infrastructure as well. This is what NCCM tools were designed to do, and there is no reason you should be handcuffed in your efforts to implement change just because you haven’t added all of your devices into the NCCM platform.

Networks are complicated systems of many individual components spread throughout various locations, with interdependencies that can be hard to comprehend without the help of network management tools. While the temptation may be to focus on the core systems, we think it’s critical to view all parts, even the underappreciated LAN switch, as equal pieces of the puzzle; none should be overlooked when implementing an NCCM system.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.

5 Perks of Network Performance Management

Network performance management is something that virtually every business needs, but not something that every business is actively doing, or even aware of. And why should they?

While understanding the technical side of things is best left to the IT department, understanding the benefits of a properly managed network is something that will help get the business managers on board, especially when good performance management solutions might be a cost that hadn’t been considered. So what are the benefits?
1. Avoiding downtime – Downtime across an entire network is going to be rare, but downtime in small areas of the network is possible if they get overloaded. Downtime of any kind is just not something that a business can tolerate, for a few reasons:

  • it leaves that area of the network unmonitored, which is a serious security issue
  • shared files won’t be accessible, nor will they update as users save them. This will lead to multiple versions of the same file, and quite a few headaches when the network is accessible again
  • downtime that affects customers is even worse, and can result in lost revenue or negative customer experiences

2. Network speed – This is one of the most important and most easily quantified aspects of network management. It affects every user on the network constantly, and anything that slows users down means either more work hours or delays. Obviously, neither is a good problem to have. Whether it’s uploading a file, sending a file to a coworker, or sending a file to a client, speed is of paramount importance.

3. Scalability – Almost every business wants to grow, and nowhere is that more true than the tech sector. As the business grows, the network will have to grow with it to support more employees and clients. By managing the performance of the network, it is very easy to see when or where it is being stretched too thin or overwhelmed. As performance degrades, it’s very easy to set thresholds that show when the network needs to be upgraded or enlarged.
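The threshold idea can be sketched simply: flag a link when its utilization stays above a trigger for several consecutive samples. The sample data and the 80% trigger below are made up for illustration:

```python
THRESHOLD = 0.80  # flag links sustained above 80% of capacity (illustrative)

# Hypothetical daily peak-utilization samples per uplink.
daily_utilization = {
    "uplink-core": [0.62, 0.71, 0.83, 0.85, 0.88],
    "uplink-dmz": [0.30, 0.28, 0.35, 0.33, 0.31],
}

def needs_upgrade(samples, threshold=THRESHOLD, sustain=3):
    """True if the last `sustain` samples all exceed the threshold."""
    return len(samples) >= sustain and all(s > threshold for s in samples[-sustain:])

for link, samples in daily_utilization.items():
    if needs_upgrade(samples):
        print(f"{link}: sustained above {THRESHOLD:.0%}, plan an upgrade")
```

Requiring several consecutive breaches, rather than a single spike, keeps one busy afternoon from triggering a capacity-planning alarm.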

4. Security – Arguably the most important aspect of network management, even though it might not be thought of as a performance aspect. An unsecured network is worse than a useless network, and data breaches can ruin a company. So how does this play into performance management?

By monitoring NetFlow performance, it’s easy to see where the most resources are being used. Many security attacks drain resources, so resource spikes in unusual areas can point to a security flaw. With proper software, these issues can be not only monitored, but also recorded and corrected.
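One common way to flag such spikes is a simple statistical baseline: compare the current value against the mean plus a few standard deviations of recent history. A sketch, with hypothetical per-hour byte counts:

```python
import statistics

# Hypothetical per-hour flow byte counts for one host during normal hours.
history = [1200, 1100, 1300, 1250, 1150, 1220, 1180]
current = 9800  # the hour being checked

def is_spike(history, value, k=3):
    """Flag values more than k standard deviations above the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return value > mean + k * stdev

if is_spike(history, current):
    print("unusual traffic volume -- worth a closer look")
```

Production anomaly detection is more sophisticated (seasonality, per-application baselines), but the core idea is the same: deviation from a recorded baseline is the signal.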

5. Usability – Unfortunately, not all employees have a working knowledge of how networks operate. In fact, as many in IT support will attest, most employees aren’t tech savvy. However, most employees will need to use the network as part of their daily work. This conflict is why usability is so important. The easiest way to minimize training costs with any network management program is to ensure that it is as user-friendly as possible.

The fanciest, most impressive network performance management system isn’t worth anything if no one knows how to use and optimize it properly. Even if the IT department has no issues with it, the reports and general information should be as easy to decipher as possible.

Is your network as optimized as it could be? Are you able to monitor the network’s performance and flow, or perform network forensics to determine where issues are? Don’t try to tackle all of this on your own; contact us and let us help you support your business with the best network monitoring for your specific needs.


Thanks to NetFlow Auditor for the article.

Cyber Attacks – Businesses Held for Ransom in 2015

Really nice crisp clear morning here in Galway, a bit chilly though. Before I dropped my 14-year-old son at school, I tuned into an Irish station, Newstalk, and caught most of a very interesting conversation between the presenter and a member of a large Irish law firm, William Fry.

They were discussing the increasing threat of cyber-attacks for Irish businesses. They spoke about the importance of detection, as 43% of businesses are not even aware that they are being attacked, and hackers can have access for weeks or months before they are detected.

They also indicated that 4 out of 5 businesses have been impacted. Hard to believe, but if this also includes recent ransomware attacks, for example, then based on feedback from NetFort customers I would believe it. Maybe, as large enterprises spend more on security and have ‘tightened up’, the hacker has moved on and redefined the ‘low-hanging fruit’: it is now the small to medium enterprise (SME)?

It reminds me of a discussion I had last year with a network admin at a college in Chicago: ‘John, we are entering an era where continuous monitoring and visibility are becoming more and more critical, because there is no way all the inline active systems can protect us internally and externally these days.’

I am biased, but I think he is absolutely correct. Visibility, actionable intelligence, data that normal users can read, interpret and act on: that is critical.

Visibility not just at the edge, though, but also at the core, the internal network, because it is critical to be able to see and detect suspicious activity or network misuse there as well. It is also important to track this, to keep a record of it, to help troubleshoot and to provide proof for management, auditors and even users.

I was discussing some recent LANGuardian use cases with an adviser in the US this week and mentioned that we are hearing the term ‘network misuse’ a lot more these days and I was not sure why. Maybe organizations are becoming more concerned about data theft?

His explanation makes sense; for him it was all about the attack surface. If users are misusing the network, accessing sites and applications that are non-critical, inappropriate or infected, that increases the attack surface and the security risk, and will result in pain for everybody.

In defence of Irish businesses though, a lot of the systems out there in this space are only suitable for large enterprises: too expensive and complex to manage, tune and get real actionable intelligence from. SMEs all over the world, not just in Ireland, cannot afford them in terms of time, people and money.

Thanks to NetFort for the article.

Benefits of Network Security Forensics

The networks that your business operates on are often open and complex.

Your IT department is responsible for mitigating network risks, managing performance and auditing data to ensure functionality.

Using NetFlow forensics can help your IT team maintain the competitiveness and reliability of the systems required to run your business.

In IT, network security forensics involves the monitoring and analysis of your network’s traffic to gather information, obtain legal evidence and detect network intrusions.

These activities help your company perform the following actions:

  • Adjust to increased data and NetFlow volumes
  • Identify heightened security vulnerabilities and threats
  • Align with corporate and legislative compliance requirements
  • Contain network costs
  • Analyze network performance demands
  • Recommend budget-friendly implementations and system upgrades

NetFlow forensics helps your company maintain accountability and trace usage; these functions become increasingly difficult as your network grows more intricate and relies on more systems.

While your company likely has standard security measures in place (e.g. firewalls, intrusion detection systems and sniffers), they lack the capability to record all network activity.

Tracking all your network activity in real-time at granular levels is critical to the success of your organization.

Until recently, the ability to perform this type of network forensics has been limited due to a lack of scalability.

Now, there are web-based solutions that can collect and store this data to assist your IT department with this daunting task.

Solution capabilities include:

  • Record NetFlow data at a micro level
  • Discover security breaches and alert system administrators in real time
  • Identify trends and establish performance baselines
  • React to irregular traffic movements and applications
  • Provision network services more effectively

The ability to capture all of this activity will empower your IT department to provide more thorough analysis and take faster action to resolve system issues.

But, before your company can realize the full value of NetFlow forensics, your team needs to have a clear understanding of how to use this intelligence to take full advantage of these detailed investigative activities.

Gathering the data through automation is a relatively simple process once the required automation tools have been implemented.

Understanding how to organize these massive amounts of data into clear, concise and actionable findings is an additional skill set that must be developed within your IT team.

Having a team member, whether internal or via a third-party vendor, that can aggregate your findings and create visual representations that can be understood by non-technical team members is a necessary part of NetFlow forensics. It is important to stress the necessity of visualization; this technique makes it much easier to articulate the importance of findings.
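Visualization does not have to be elaborate to be useful. As a toy illustration (with invented data), even a text bar chart makes top talkers comparable at a glance:

```python
# Hypothetical top-talker totals in megabytes.
top_talkers = {"10.0.0.5": 740, "10.0.0.9": 310, "10.0.0.12": 95}

def bar_chart(data, width=40):
    """Render a dict of name -> value as a sorted text bar chart."""
    peak = max(data.values())
    lines = []
    for name, value in sorted(data.items(), key=lambda kv: -kv[1]):
        bar = "#" * max(1, round(value / peak * width))
        lines.append(f"{name:>10} | {bar} {value} MB")
    return "\n".join(lines)

print(bar_chart(top_talkers))
```

A real deployment would use proper charting or a dashboard, but the principle holds: ranked, proportional visuals communicate findings to non-technical readers far faster than raw tables.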

In order to accurately and succinctly visualize security issues, your IT staff must have a deep understanding of the standard protocols of your network. Without this level of understanding, the ability to analyze and investigate security issues is limited, if not impossible.

Utilizing software to support the audit functions required for NetFlow forensics will help your company support its IT staff in gathering and tracking these standard protocols.

Being able to identify, track and monitor the protocols in an automated manner will enhance your staff’s ability to understand and assess the impact of these protocols on network performance and security. It will also allow you to quickly assess the impact of changes driven by real-time monitoring of your network processes.
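Automated protocol tracking can start as simply as tallying bytes per protocol across flow records. A sketch, with an illustrative port-to-protocol mapping and made-up flows:

```python
from collections import Counter

# Illustrative mapping of well-known destination ports to protocol names.
PORT_PROTOCOLS = {80: "http", 443: "https", 445: "smb", 53: "dns"}

flows = [
    {"dst_port": 443, "bytes": 52_000},
    {"dst_port": 80, "bytes": 18_000},
    {"dst_port": 445, "bytes": 230_000},
    {"dst_port": 4444, "bytes": 9_000},  # unknown port: worth investigating
]

by_protocol = Counter()
for f in flows:
    proto = PORT_PROTOCOLS.get(f["dst_port"], "other")
    by_protocol[proto] += f["bytes"]

for proto, nbytes in by_protocol.most_common():
    print(f"{proto}: {nbytes} bytes")
```

Port-based classification is only a first approximation (DPI classifies far more accurately), but even this crude tally surfaces the "other" bucket that deserves a closer look.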

Sound like a daunting task?

It doesn’t have to be. Choose a partner to support your efforts and help you build the right NetFlow forensics configuration to support your business.

Contact us today and let us help you manage your company’s IT network.

Thanks to NetFlow Auditor for the article.