How Can We Monitor Traffic Associated with Remote Sites?

Many IT teams are now tasked with managing remote sites without the luxury of local IT support. Business owners expect everything to be done remotely; we do live in the connected age, don't we? Is it possible to see what is happening on these networks without installing client or agent software everywhere?

You can gather some network performance information using SNMP or WMI, but you will be limited to alerts or high-level information. What you need is some form of deeper traffic analysis. Traffic analysis applications are ideal for troubleshooting LAN and link problems associated with remote sites.
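To make that concrete, here is a minimal sketch of the kind of high-level polling SNMP gives you: reading an interface octet counter with the Net-SNMP command-line tools. The device address, community string, and interface index are illustrative assumptions, and the sketch assumes the snmpget utility is installed.

```python
# Minimal sketch: poll an interface octet counter over SNMP using the
# Net-SNMP command-line tools. The host, community string and interface
# index are placeholders for illustration.
import subprocess

def poll_in_octets(host: str, community: str, if_index: int) -> int:
    """Return the 64-bit inbound octet counter for one interface."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv",
         host, f"IF-MIB::ifHCInOctets.{if_index}"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    print(poll_in_octets("192.0.2.1", "public", 1))
```

Counters like this tell you how much traffic crossed a link, but not who sent it or which applications were involved, which is exactly the gap deeper traffic analysis fills.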

There are two main technologies available for analyzing network traffic associated with remote sites: flow analysis and packet capture. Flow statistics are typically available from devices that can route data between two networks; most Cisco routers support NetFlow, for example. If your remote networks are flat (a single subnet) or your network switches do not offer flow export, then packet capture is a viable option.

You can implement packet capture by connecting a traffic analysis system to a SPAN or mirror port on a network switch at your remote site. You can then log onto your traffic analysis system remotely to see what is happening within these networks.
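As a rough illustration of what a capture-based analysis system does with the mirrored traffic, the sketch below sniffs frames from a SPAN port and keeps simple per-protocol counters. It assumes the Scapy library and an interface name of eth1, both of which are illustrative assumptions rather than anything specific to a particular product.

```python
# Minimal sketch of packet analysis on a SPAN/mirror port: sniff frames
# and keep per-destination-port counters. Requires capture privileges.
from collections import Counter
from scapy.all import sniff, IP, TCP, UDP

proto_counts = Counter()

def classify(pkt):
    if IP in pkt:
        if TCP in pkt:
            proto_counts[f"tcp/{pkt[TCP].dport}"] += 1
        elif UDP in pkt:
            proto_counts[f"udp/{pkt[UDP].dport}"] += 1
        else:
            proto_counts["other-ip"] += 1

# Capture 1,000 frames from the mirror port, then print the busiest ports.
sniff(iface="eth1", prn=classify, store=False, count=1000)
print(proto_counts.most_common(10))
```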


NetFort LANGuardian has multiple means of capturing data associated with remote sites. The most popular option is to install an instance of the LANGuardian software at your HQ. Sensors can be deployed on physical or virtual platforms at important remote sites. Data from these is stored centrally, so you get a single reference point for all traffic and security information across local and remote networks.

LANGuardian can also capture flow-based statistics such as NetFlow, IPFIX, and sFlow; routers and switches at the remote sites can be configured to send flow records to LANGuardian. Watch out for issues associated with NetFlow, as it has limitations when it comes to monitoring cloud computing applications.
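For a sense of what flow collection involves, here is a minimal sketch of a NetFlow v5 listener that decodes exported flow records. The listening address and UDP port 2055 are assumptions; the header and record layouts follow the standard NetFlow v5 format. A real collector would store and report on these records rather than print them.

```python
# Minimal sketch of a NetFlow v5 collector. Routers at remote sites would
# be configured to export flows to this host on UDP/2055 (port assumed).
import socket
import struct

HEADER = struct.Struct("!HHIIIIBBH")             # 24-byte v5 header
RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")  # 48-byte v5 flow record

def ip(n: int) -> str:
    return socket.inet_ntoa(struct.pack("!I", n))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))

while True:
    data, addr = sock.recvfrom(65535)
    version, count, *_ = HEADER.unpack_from(data, 0)
    if version != 5:
        continue  # this sketch only handles NetFlow v5
    for i in range(count):
        rec = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
        src, dst, _, _, _, pkts, octets = rec[:7]
        print(f"{addr[0]}: {ip(src)} -> {ip(dst)} {octets} bytes, {pkts} pkts")
```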

Download White Paper

How to monitor WAN connections with NetFort LANGuardian

Download this whitepaper, which explains in detail how you can monitor WAN connections with NetFort LANGuardian.



Thanks to NetFort for the article.

Webinar- Best Practices for NCCM


Most networks today have a "traditional" IT monitoring solution in place which provides alarming for devices, servers, and applications. But as the network evolves, so do its complexity and security risks, and it now makes sense to formalize the processes, procedures, and policies that govern access and changes to these devices. Vulnerability and lifecycle management also play an important role in maintaining the security and integrity of network infrastructure.

Network Configuration and Change Management (NCCM) is the "third leg" of IT management, with traditional Performance and Fault Management (PM and FM) being the first two. The focus of NCCM is to ensure that as the network grows, policies and procedures are in place to provide proper governance and eliminate preventable outages.

Eliminating misapplied configurations can reduce network performance and security issues dramatically, from 90% down to 10%.

Learn about the best practices for Network Configuration and Change Management to both protect and report on your critical network device configurations (a short change-detection sketch follows the list):

  1. Enabling of Real-Time Configuration Change Detection
  2. Service Design Rules Policy
  3. Auto-Discovery Configuration Backup
  4. Regulatory Compliance Policy
  5. Vendor Default and Security Access Policies
  6. Vulnerability Optimization and Lifecycle Announcements
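As a concrete illustration of the first item above, the sketch below pulls a device's running configuration and compares it to the last stored backup. The netmiko library, the Cisco IOS device type, and the credentials are all placeholder assumptions for illustration, not anything prescribed by the webinar.

```python
# Illustrative sketch of real-time configuration change detection: pull
# the running config and compare it to the last stored backup.
import hashlib
from pathlib import Path
from netmiko import ConnectHandler

def check_for_change(host: str, backup_dir: Path) -> bool:
    conn = ConnectHandler(device_type="cisco_ios", host=host,
                          username="backup", password="secret")  # placeholders
    running = conn.send_command("show running-config")
    conn.disconnect()

    last = backup_dir / f"{host}.cfg"
    changed = (not last.exists()
               or hashlib.sha256(last.read_bytes()).hexdigest()
               != hashlib.sha256(running.encode()).hexdigest())
    if changed:
        # Store the new revision; a real NCCM tool would also record who
        # made the change and raise an alert.
        last.write_text(running)
    return changed
```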

Date: October 28th
Time: 2:00 pm Eastern


Register for the webinar now: http://hubs.ly/H01gB720

5 Perks of Network Performance Management

Network performance management is something that virtually every business needs, but not something that every business is actively doing, or is even aware of. And why should they be?

While understanding the technical side of things is best left to the IT department, understanding the benefits of a properly managed network is something that will help get the business managers on board, especially when good performance management solutions might be a cost that hadn’t been considered. So what are the benefits?
1. Avoiding downtime – Downtime across an entire network is going to be rare, but downtime in small areas of the network is possible if they get overloaded. Downtime of any kind is just not something that a business can tolerate, for a few reasons:

  • it leaves that area of the network unmonitored, which is a serious security issue
  • shared files won’t be accessible, nor will they be updating as users save the files. This will lead to multiple versions of the same file, and quite a few headaches when the network is accessible again
  • downtime that affects customers is even worse, and can result in lost revenue or negative customer experiences

2. Network speed – This is one of the most important and most easily quantified aspects of managing network flow. It affects every user on the network constantly, and anything that slows down users means either more work hours or delays. Obviously, neither of these is a good problem to have. Whether it's uploading a file, sending a file to a coworker, or sending a file to a client, speed is of paramount importance.

3. Scalability – Almost every business wants to grow, and nowhere is that more true than in the tech sector. As the business grows, the network will have to grow with it to support more employees and clients. By managing the performance of the network, it is very easy to see when or where it is being stretched too thin or overwhelmed. As performance degrades, it's very easy to set thresholds that show when the network needs to be upgraded or expanded.
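As a hedged example of the threshold idea, the snippet below turns two samples of an interface octet counter into a utilization percentage and flags it against a capacity-planning threshold. The 70% threshold, the 1 Gb/s link speed, and the sample values are illustrative.

```python
# Example threshold check: compute link utilization between two counter
# samples and warn when a capacity-planning threshold is crossed.
def utilization_pct(octets_t0: int, octets_t1: int,
                    interval_s: float, link_bps: float) -> float:
    """Percentage of link capacity used between two counter samples."""
    bits = (octets_t1 - octets_t0) * 8
    return 100.0 * bits / (interval_s * link_bps)

# Illustrative numbers: 5-minute samples on a 1 Gb/s link.
u = utilization_pct(1_200_000_000, 29_000_000_000, 300, 1e9)
print(f"Utilization: {u:.1f}%")
if u > 70.0:   # example capacity-planning threshold
    print("Threshold crossed - plan an upgrade for this link")
```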

4. Security – Arguably the most important aspect of network management, even though it might not be thought of as a performance aspect. An unsecured network is worse than a useless network, and data breaches can ruin a company. So how does this play into performance management?

By monitoring netflow performance, it’s easy to see where the most resources are being used. Many security attacks drain resources, so if there are resource spikes in unusual areas it can point to a security flaw. With proper software, these issues can be not only monitored, but also recorded and corrected.
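A minimal sketch of that spike-spotting idea follows: given per-host byte counts taken from flow records, it flags hosts sending far more than the norm. The simple "three times the median" rule is a stand-in assumption for whatever baseline a real monitoring tool would use.

```python
# Flag hosts whose traffic volume is far above the norm, a crude stand-in
# for the baselining a real flow-monitoring tool performs.
from statistics import median

def find_spikes(bytes_per_host: dict[str, int], factor: float = 3.0) -> list[str]:
    baseline = median(bytes_per_host.values())
    return [h for h, b in bytes_per_host.items() if b > factor * baseline]

traffic = {"10.0.0.5": 120_000, "10.0.0.9": 95_000, "10.0.0.23": 4_800_000}
print(find_spikes(traffic))   # ['10.0.0.23'] - worth a closer look
```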

5. Usability – Unfortunately, not all employees have a working knowledge of how networks operate. In fact, as many in IT support will attest, most employees aren’t tech savvy. However, most employees will need to use the network as part of their daily work. This conflict is why usability is so important. The easiest way to minimize training costs with any network management program is to ensure that it is as user-friendly as possible.

The fanciest, most impressive network performance management system isn’t worth anything if no one knows how to use and optimize it properly. Even if the IT department has no issues with it, the reports and general information should be as easy to decipher as possible.

Is your network as optimized as it could be? Are you able to monitor the network's performance and flow, or perform network forensics to determine where issues are? Don't try to tackle all of this on your own; contact us and let us help you support your business with the best network monitoring for your specific needs.


Thanks to NetFlow Auditor for the article.

The State of Enterprise Security Resilience – An Ixia Research Report

Ixia, an international leader in application performance and security resilience technology, conducted a survey to better understand how network security resilience solutions and techniques are used within the modern enterprise. While information exists on security products and threats, very little is available on how they are actually used, or on the techniques and technology that ensure security is completely integrated into the corporate network structure. This report presents what the research uncovered.

The survey explored three areas of emphasis within security and visibility architectures. One portion of the survey focused on understanding the product types and their use. The second area of emphasis was on understanding the processes in use. The final area of emphasis was on understanding the people component of typical architectures.

This report features several key findings that include the following:

  • Many enterprises and carriers are still highly vulnerable to the effects of a security breach. This is due to failure to follow best practices, process issues, lack of awareness, and lack of proper technology.
  • Lack of knowledge, not cost, is the primary barrier to security improvements. However, typical annual spend on network security is less than $100K worldwide.
  • Security resilience approaches are growing in worldwide adoption. A primary contributor is the convergence of visibility and security architectures. Additional data shows that life-cycle security methodologies and security resilience testing are also positive contributors.
  • The top two main security concerns for IT are data loss and malware attacks.

These four key findings confirm that while there are still clear dangers to network security in the enterprise, there is some hope for improvement. The severity of the risk has not gone away, but it appears that some are managing it with the right combination of investment in technology, training, and processes.

To read more, download the report here.


Thanks to Ixia for the article.

CIO Review – Infosim Unified Solution for Automated Network Management

CIO Review

20 Most Promising Networking Solution Providers

Virtualization has become the lifeblood of the networking industry today. With the advent of technologies such as software-defined networking and network function virtualization, the black-box paradigm of the legacy networking model has been shattered. In the past, the industry witnessed networking technology such as Fiber Distributed Data Interface (FDDI), which eventually gave way to Ethernet, the predominant network of choice. This provided opportunities to refresh infrastructures and create new networking paradigms. Today, we see a myriad of proprietary technologies competing to define next-generation networking models that are no longer static, opaque, or rigid.

Ushering in a new way of thinking and unlocking new possibilities, customers are increasingly demanding automation from network solution providers. The key requirement is an agile network controlled from a single source. Visibility into the network has also become a must-have in the networking spectrum, providing real-time information about events occurring inside the network.

In order to enhance enterprise agility, improve network efficiency and maintain high standards of security, several innovative players in the industry are delivering cutting-edge solutions that ensure visibility, cost savings and automation in the networks. In the last few months we have looked at hundreds of solution providers who primarily serve the networking industry, and shortlisted the ones that are at the forefront of tackling challenges faced by this industry.

In our selection, we looked at the vendor’s capability to fulfill the burning needs of the sector through the supply of a variety of cost effective and flexible solutions that add value to the networking industry. We present to you CIO Review’s 20 Most Promising Networking Solution Providers 2015.

Infosim Unified Solution for Automated Network Management

Today's networking technology, though very advanced, faces a major roadblock: the lack of automation in network management products. "These products are incapable of delivering a truly unified management approach as they are not an integrated solution but merely a collection of different programs bound together under one GUI to give them the appearance of an integrated solution," notes Jim Duster, CEO, Infosim. Moreover, the need to continuously update new device information, configuration changes, and alerts and actions across these different toolsets contributes to an ongoing financial burden for enterprises. Addressing these concerns with a unique network management solution is Infosim, a manufacturer of Automated Service Fulfillment and Service Assurance solutions.

Infosim offers StableNet, a unified solution developed and designed to cover performance management, fault management, and configuration management with software that is engineered on a single code base and a consistent underlying data model. "StableNet is the only "suite" within the network performance management software industry," claims Duster. The solution addresses the existing operational and technical challenges of managing distributed, virtualized, and mission-critical IT infrastructures. "With this approach, we are able to create work flows in every unique customer business and industry to cover many processes efficiently," he adds. For instance, StableNet monitors the production equipment of a manufacturing company. In case of an equipment failure, the error is reported and StableNet delivers the root cause of the problem while notifying an external service provider. The service provider's technician can open an inspection window with StableNet, exchange the defective device and, after repair, provide feedback to the customer's operations center.

To support flexible deployment of StableNet, the company offers the Infosim StableNet appliance, a high-performance, preconfigured, security-hardened hardware platform. "Appliances related to StableNet series reduce Total Cost of Ownership (TCO) by simplifying deployment, consolidating network infrastructure, and providing an extensible platform that can scale with your organization," states Duster. StableNet also provides a low-cost agent platform called the StableNet Embedded Agent (SNEA) that enables highly distributed installations to support End-to-End (E2E) Visibility, Cloud Monitoring, and the Internet of Things. Deployment of SNEA is economical; agents are auto-discovered at tactical collection points in networks, resulting in a low TCO for collecting and processing network performance actions and alerts.

Infosim StableNet is deployed across the networks of major players in the Telco and Enterprise markets, including a German auto manufacturer. Acting as the client's centralized system, StableNet reduced their toolset from over 10 disparate software and hardware offerings from multiple suppliers to fewer than four. This significantly reduced TCO while increasing service levels. "Siloed IT personnel who used to hide behind non-consolidated results from their individual solutions were all synchronized into one solution, speeding productivity, collaboration and communication," states Duster.

Infosim is currently participating in advanced research projects on Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) with several universities and leading industry device manufacturers. “The company applies 50 percent of its new software development resource to customer needs which assists customers in achieving milestones in vendor agnostic device support additions, industry specific capabilities, and features that were envisioned by real users,” asserts Duster.

For the years ahead, Infosim plans to build upon its product capability by automating the processes and activities that produce guaranteed service levels and reduce the consumption of human resources in the Network Operations Center (NOC). “Our vision is to enable the Dark NOC, which means a large percent of the non-value adding activities of network engineers can be automated in our product, freeing network engineers to work on proactive new innovations and concepts,” Duster concludes.

Thanks to Infosim for the article.

External Availability Monitoring – Why it Matters

Remember the "good old days" when everyone that worked got in their car and drove to a big office building every day? And any application that a user needed was housed completely within the walls of the corporate datacenter? And partners and customers had to dial a phone to get a price or place an order? Well, if you are as old as I am, you may remember those days, but for the vast majority of you reading this, what I just described is about as common as a black-and-white TV.

The simple fact is that as the availability and ubiquity of the Internet has transformed the lives of people, it has equally (if not more dramatically) transformed IT departments. In some ways this has been an incredible boon; for example, I can now download and install new software in a fraction of the time it used to take to purchase and receive that same software on CDs (look it up, kids).

Users can now log in to almost any critical business application from anywhere there is a Wi-Fi connection. They can probably perform nearly 100% of their job function from their phone, in a Starbucks, or on an airplane. But of course, with all of the good comes (some of) the bad, or at least difficult challenges for the IT staff whose job it is to keep all of those applications available to everyone, everywhere, all of the time. The (relatively) simple "rules" of IT monitoring need to be rethought and extended for the modern workplace. This is where External Availability Monitoring comes in.

We define External Availability Monitoring (EAM) as the process through which your critical network services and the applications that run over them are continuously tested from multiple test points that simulate real-world geo-diversity and connectivity options. Simply put, you need to constantly monitor the availability and performance of any public-facing services. These could be your corporate website, VPN termination servers, public cloud-based applications, and more.

This type of testing matters because the most likely source of a service issue today is not a call from Bob on the 3rd floor, but rather from Jane, who is currently in a hotel in South America and is having trouble downloading the latest presentation from the corporate intranet, which she needs to deliver tomorrow morning.

Without a proactive approach to continuous service monitoring, you are flying blind to issues that impact the global availability, and therefore the operations, of your business.

So, how is this type of monitoring delivered? We think the best approach is to set up multiple types of tests, such as the ones below (a minimal sketch of a few of them follows the list):

  • ICMP Availability
  • TCP Connects
  • DNS Tests
  • URL Downloads
  • Multimedia (VoIP and Video) tests (from external agent to internal agent)
  • Customized application tests
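Below is a minimal sketch, using only the Python standard library, of three of the tests listed above: a TCP connect, a DNS lookup, and a URL download. The target hostnames are placeholders; ICMP is omitted because raw sockets normally require elevated privileges.

```python
# Minimal availability probes: TCP connect time, DNS resolution, URL fetch.
# Targets are placeholders for illustration.
import socket
import time
import urllib.request

def tcp_connect(host: str, port: int, timeout: float = 3.0) -> float:
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.monotonic() - start) * 1000  # milliseconds

def dns_lookup(name: str) -> list[str]:
    return sorted({info[4][0] for info in socket.getaddrinfo(name, None)})

def url_download(url: str, timeout: float = 5.0) -> int:
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return len(resp.read())   # bytes fetched, proves the page is up

print("TCP  :", tcp_connect("www.example.com", 443), "ms")
print("DNS  :", dns_lookup("www.example.com"))
print("HTTP :", url_download("https://www.example.com/"), "bytes")
```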

These tests should be performed from multiple global locations (especially from anywhere your users commonly travel). This could even include work from home locations. At a base level, even a few test points can alert you quickly to availability issues.

More test points can increase the accuracy with which you can pinpoint some problems. It may be that the incident seems to be isolated to users in the Midwest or is only being seen on apps that reside on a particular cloud provider. The more diverse data you collect, the swifter and more intelligent your response can be.

Alerts should be enabled so that you can be notified immediately if there is an application degradation or a "service down" situation. The last piece of the puzzle is being able to quickly correlate these issues with underlying internal network or external service provider problems.

We see this trend of an “any application, anywhere, anytime” service model becoming the standard for IT departments large and small. With this shift comes an even greater need for continuous & proactive External Availability Monitoring.


Thanks to NMSaaS for the article.

The First 3 Steps To Take When Your Network Goes Down

Whether it is the middle of the day or the middle of the night, nobody who is in charge of a network wants to get "that call": there is a major problem and the network is down. It usually starts with one or two complaints ("hey, I can't open my email" or "something is wrong with my web browser"), but those few complaints suddenly turn into many, and you know there is a real problem. What you may not know is what to do next.

In this blog post, I will examine some basic troubleshooting steps that every network manager should take when investigating an issue. Whether you have a staff of 2 or 200, these common sense steps still apply. Of course, depending on what you discover as you perform your investigation, you may need to take some additional steps to fully determine the root cause of the problem and how to fix it.

Step 1. Determine the extent of the problem.

You will need to pinpoint the scope of the issue as quickly as possible. Is it related to a single physical location, like just one office, or is it network-wide, including WANs and remote users? This can provide valuable insight into where to go next. If the problem is contained within a single location, then you can be fairly sure that the cause of the issue is also within that location (or at the very least, that location plus any uplink connections to other locations).

It may not seem intuitive, but if the issue is network-wide with multiple affected locations, this can sometimes really narrow down the problem. It probably resides in the "core" of your network, because this is usually the only place where an issue can affect such a large portion of your network. That may not make it easier to fix, but it generally does help with identification.

If you’re lucky you might even be able to narrow this issue down even further into a clear segment like “only wireless users” or “everything on VLAN 100” etc. In this case, you need to jump straight into deep dive troubleshooting on just those areas.

Step 2. Try to determine if it is server/application related or network related.

This starts with the common "ping test". The big question you need to answer is: do my users have connectivity to the servers they are trying to access but (for some reason) cannot access the applications, which means the problem is in the servers or apps, or do they not have any connectivity at all, which means a network issue?

This simple step can go a long way toward troubleshooting the issue. If there is no network connectivity, then the issue resides in the infrastructure, most commonly in L2/L3 devices and firewalls. I've seen many cases where the application of a single firewall rule was the cause of an entire network outage.

If there is connectivity, then you need to investigate the servers and applications themselves. Common network management platforms should be able to inform you of server availability, including tests for service port availability, the status of services and processes, and so on. A widespread issue that appears all at once is usually indicative of a problem stemming from a patch or other update/install that was performed on multiple systems simultaneously.
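The sketch below shows one way that ping-test triage could be automated: if the host does not answer ping, treat it as a network-side problem; if it answers but the application port will not connect, look at the server or application. The hostname and port are placeholders, and the ping flags shown are Linux-style, so output handling is platform dependent.

```python
# Quick triage: is it the network path, or the server/application?
import socket
import subprocess

def triage(host: str, app_port: int) -> str:
    reachable = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],   # Linux-style ping flags
        stdout=subprocess.DEVNULL).returncode == 0
    if not reachable:
        return "no connectivity - investigate the network path (L2/L3, firewalls)"
    try:
        socket.create_connection((host, app_port), timeout=3).close()
        return "host and application port respond - look higher in the stack"
    except OSError:
        return "host is up but the application port is closed - check the server/app"

print(triage("intranet.example.com", 443))
```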

Step 3. Use your network management system to pinpoint, rollback, and/or restart.

Good management systems today should be able to identify when the problem first occurred and potentially even the root cause of the issue (especially for network issues). You also should have backup / restore capabilities for all systems. That way, in a complete failure scenario, you can always fall back to a known good configuration or state. Lastly, you should be able to then restart your services or devices and restore service.

In some cases there may have been a hardware failure that needs to be addressed first before a device can come back online. Having spare parts or emergency maintenance contracts will certainly help in that case. If the issue is more complex like overloading of a circuit or system, then steps may need to be put in place to restrict usage until additional capacity can be added. With most datacenters running on virtualized platforms today, in many cases additional capacity for compute, and storage can be added in less than 60 minutes.

Network issues happen to every organization. Those that know how to effectively respond and take a step by step approach to troubleshooting will be able to restore service quickly.

I hope these three steps to take when your network goes down were useful; don't forget to subscribe to our weekly blog.


Thanks to NMSaaS for the article.

Ixia Exposes Hidden Threats in Encrypted Mission-Critical Enterprise Applications

Delivers industry’s first visibility solution that includes stateful SSL decryption to improve application performance and security forensics

Ixia (Nasdaq: XXIA), a leading provider of application performance and security resilience solutions, announced it has extended its Application and Threat Intelligence (ATI) Processor™ to include stateful, bi-directional SSL decryption capability for application monitoring and security analytics tools. Stateful SSL decryption provides complete session information to better understand the transaction as opposed to stateless decryption that only provides the data packets. As the sole visibility company providing stateful SSL decryption for these tools, Ixia’s Visibility Architecture™ solution is more critical than ever for enterprise organizations looking to improve their application performance and security forensics.

“Together, FireEye and Ixia offer a powerful solution that provides stateful SSL inspection capabilities to help protect and secure our customer’s networks,” said Ed Barry, Vice President of Cyber Security Coalition for FireEye.

As malware and other indicators of compromise are increasingly hidden by SSL, decryption of SSL traffic for monitoring and security purposes is now more important for enterprises. According to Gartner research, for most organizations, SSL traffic is already a significant portion of all outbound Web traffic and is increasing. It represents on average 15 percent to 25 percent of total Web traffic, with strong variations based on the vertical market.1 Additionally, compliance regulations such as the PCI-DSS and HIPAA increasingly require businesses to encrypt all sensitive data in transit. Finally, business applications like Microsoft Exchange, Salesforce.com and Dropbox run over SSL, making application monitoring and security analytics much more difficult for IT organizations.

Enabling visibility without borders – a view into SSL

In June, Ixia enabled seamless visibility across physical, virtual and hybrid cloud data centers. Ixia’s suite of virtual visibility products allows insight into east-west traffic running across the modern data center. The newest update, which includes stateful SSL decryption, extends security teams’ ability to look into encrypted applications revealing anomalies and intrusions.

Visibility for better performance – improve what you can measure

While it may enhance security of transferred data, encryption also limits network teams’ ability to inspect, tune and optimize the performance of applications. Ixia eliminates this blind spot by providing enterprises with full visibility into mission critical applications.

The ATI Processor works with Ixia’s Net Tool Optimizer® (NTO™) solution and brings a new level of intelligence to network packet brokers. It is supported by the Ixia Application & Threat Intelligence research team, which provides fast and accurate updates to application and threat signatures and application identification code. Additionally, the new capabilities will be available to all customers with an ATI Processor and an active subscription.

To learn more about Ixia’s latest innovations read:

ATI processor

Encryption – The Next Big Security Threat

Thanks to Ixia for the article.

Why Just Backing Up Your Router Config is the Wrong Thing To Do

One of the most fundamental best practices of any IT organization is to have a backup strategy and system in place for critical systems and devices. This is clearly needed for any disaster recovery situation and most IT departments have definitive plans and even practiced methodologies set in place for such an occurrence.

However, what many IT pros don't always consider is how useful it is to have backups for reasons other than DR, and the fact that for most network devices (and especially routers), it is not just the running configuration that should be saved. In fact, there are potentially hundreds of smaller pieces of information that, when properly backed up, can be used to help with ongoing operational issues.

First, let’s take a look at the traditional device backup landscape, and then let’s explore how this structure should be enhanced to provide additional services and benefits.

Unlike server hard drives, network devices like routers do not usually fall within the umbrella backup systems used for mass data storage. In most cases a specialized system must be put in place for these devices. Each network vendor has special commands that must be used in order to access the device and request / download the configurations.

When looking at these systems, it is important to find out where the resulting configurations will be stored. If the system is simply storing the data on an on-site appliance, then it is also critical to determine whether that appliance itself is backed up to an offsite, recoverable system; otherwise the backups are not useful in a DR situation where the backup appliance may also be offline.

It is also important to understand how many backups your system can hold. Can you only store the last 10 backups, or only everything from the last 30 days? Are these configurable options that you can adjust based on your retention requirements? This can be a critical component for audit reporting, as well as when rollback is needed to a previous state (which may not just be the most recent state).

Lastly, does the system offer a change report showing what differences exist between selected configurations? Can you see who made the changes and when?
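For illustration, a change report of the kind described above can be as simple as a unified diff between two stored backups, as in the sketch below. The filenames are placeholders, and details such as who made the change and when would have to come from the backup system's own metadata.

```python
# Produce a change report between two stored configuration backups.
import difflib
from pathlib import Path

def config_diff(old_file: str, new_file: str) -> str:
    old = Path(old_file).read_text().splitlines(keepends=True)
    new = Path(new_file).read_text().splitlines(keepends=True)
    return "".join(difflib.unified_diff(old, new,
                                        fromfile=old_file, tofile=new_file))

print(config_diff("router1_2015-10-01.cfg", "router1_2015-10-02.cfg"))
```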

In addition to the “must haves” explored above, I also think there are some advanced features that really can dramatically improve the operational value of a device / router backup system. Let’s look at these below:

  • Routers and other devices are more than just their config files. Very often they can provide output that describes additional aspects of their operation. To use the common (Cisco-centric) terminology, you can also get and store the output of a "show" command. This may contain critical information about the device's hardware, software, services, neighbors, and more that could not be seen from the configuration alone. It can be hugely beneficial to store this output as well, as it can be used to help understand how the device is being used, what other devices are connected to it, and more.
  • Any device in a network, especially a core component such as a router, should conform to company-specific policies for things like access and security. Both the main configuration file and the output from the special "show" commands can be used to check the device against any compliance policy your organization has in place (see the compliance-check sketch after this list).
  • All backups need to run both on a schedule (we generally see 1x per day as the most common schedule) as well as on an ad-hoc basis when a change is made. This second option is vital to maintaining an up to date backup system. Most changes to devices happen at some point during the normal work day. It is critical that your backup system can be notified (usually via log message) that a change was made and then immediately launch a backup of the device – and potentially a policy compliance check as well.
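Here is a minimal compliance-check sketch in the spirit of the policy checks above. The two rules shown (password encryption must be enabled, the embedded HTTP server must be disabled) are common Cisco examples chosen for illustration, not a statement of your organization's actual policy, and the config filename is a placeholder.

```python
# Check a saved configuration against a couple of illustrative policy rules.
import re

RULES = [
    # (config line, must be present?, violation message)
    ("service password-encryption", True,  "password encryption disabled"),
    ("ip http server",              False, "embedded HTTP server enabled"),
]

def check_compliance(config_text: str) -> list[str]:
    violations = []
    for pattern, must_exist, message in RULES:
        found = re.search(rf"^{re.escape(pattern)}\s*$", config_text, re.MULTILINE)
        if bool(found) != must_exist:
            violations.append(message)
    return violations

with open("router1.cfg") as fh:           # placeholder backup file
    print(check_compliance(fh.read()) or "compliant")
```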

Long gone are the days when simply logging into a router, getting the running configuration, and storing it in a text file was considered a "backup plan". Critical network devices need the same attention paid to them as servers and other IT systems. Now is a good time to revisit your router backup systems and strategies and determine whether you are implementing a modern backup approach; as you can see, it's not just about backing up your router config.

Thanks to NMSaaS for the article.

 

Encryption: The Next Big Security Threat

As is common in the high-tech industry, fixing one problem often creates another. The example I’m looking at today is network data encryption. Encryption capability, like secure sockets layer (SSL), was devised to protect data packets from being read or corrupted by non-authorized users. It’s used on both internal and external communications between servers, as well as server to clients. Many companies (e.g. Google, Yahoo, WebEx, Exchange, SharePoint, Wikipedia, E*TRADE, Fidelity, etc.) have turned this on by default over the last couple of years.

Unfortunately, encryption is predicted to become the preferred choice of hackers who are creating malware and then using encrypted communications to propagate and update the malware. One current example is the Zeus botnet, which uses SSL communications to upgrade itself. Gartner Research stated in their report “Security Leaders Must Address Threats From Rising SSL Traffic” that by 2017, 50% of malware threats will come from using SSL encrypted traffic. This will create a serious blind spot for enterprises. Gartner also went on to state that less than 20% of firewalls, UTM, and IPS deployments support decryption. Both of these statistics should be alarming to anyone involved in network security.

And it’s not just Zeus you need to look out for. There are several types of growing encrypted malware threats. The Gartner report went on to mention two more instances (one being a Boston Marathon newsflash) of encryption being misused by malware threats. Other examples exist as well: the Gameover Trojan, Dyre, and a new Upatre variant just found in April.

Another key point to understand is that “just turning on encryption” isn’t a simple, low-cost fix, especially when using 2048-bit RSA keys that have been mandated since January 1, 2014. NSS labs ran a study and found that the decryption capability in typical firewalls reduced the throughput of the firewalls by up to 74%. The study also found an average performance loss of 81% across all eight vendors that they evaluated. Turning on encryption/decryption capabilities will cost you—both in performance and in higher network costs.

And it gets worse! Firewalls, IPSs, and other devices are usually only deployed at the edge of enterprise networks. Internal network communications, server to server and server to client, often go unexamined within many enterprises. These internal communications can be up to 80% of your encrypted traffic. Once the malware gets into your network, it uses SSL to camouflage its activities. You'll never know about it: data can be exfiltrated, viruses and worms can be released, or malicious code can be installed. This is why you need to look at internally encrypted traffic as well. Constant vigilance is now the order of the day.

One way to implement constant vigilance is for IT teams to spot-check their network data to see if there are hidden threats. Network packet brokers (NPBs) that support application intelligence with SSL decryption are a good solution. Application intelligence is the ability to monitor packets based on application type and usage. It can be used to decrypt network packets and dynamically identify the applications (along with malware) running on a network. And since the decryption is performed on out-of-band monitoring data, there is no performance impact.
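As a rough example of that kind of spot check, the sketch below estimates how much of a packet capture is SSL/TLS by counting traffic on port 443. It assumes Scapy and a placeholder pcap filename, and counting by port is only an approximation, since TLS can run on other ports; it is not a substitute for the stateful decryption described below.

```python
# Rough estimate of the SSL/TLS share of a capture, by port 443 traffic.
from scapy.all import rdpcap, TCP

packets = rdpcap("office_core_sample.pcap")   # placeholder capture file
total = len(packets) or 1
tls = sum(1 for p in packets
          if TCP in p and 443 in (p[TCP].sport, p[TCP].dport))
print(f"{tls}/{total} packets ({100.0 * tls / total:.1f}%) look like SSL/TLS")
```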

An easy answer to gain visibility, especially for internally encrypted traffic, is to deploy the Ixia ATI Processor. The Ixia ATI Processor uses bi-directional, stateful decryption capability, and allows you to look at both encrypted internal and external communications. Once the monitoring data is decrypted, application filtering can be applied and the information can be sent to dedicated, purpose-built monitoring tools (like an IPS, IDS, SIEMs, network analyzers, etc.).

Ixia’s Application and Threat Intelligence (ATI) Processor, built for the NTO 7300 and also the NTO 6212 standalone model, brings intelligent functionality to the network packet broker landscape with its patent pending technology that dynamically identifies all applications running on a network. This product gives IT organizations the insights needed to ensure the network works—every time and everywhere. This visibility product extends past Layer 4 through to Layer 7 and provides rich data regarding the behavior and locations of users and applications in the network.

As new network security threats emerge, the ATI Processor helps IT improve their overall security with better intelligence for their existing security tools. The ATI Processor correlates applications with geography and can identify compromised devices and malicious activities such as Command and Control (CNC) communications from malicious botnet activities. IT organizations can now dynamically identify unknown applications, identify security threats from suspicious applications and locations, and even audit for security policy infractions, including the use of prohibited applications on the network or devices.

To learn more, please visit the ATI Processor product page or contact us to see a demo!

Additional Resources:

Application and Threat Intelligence (ATI) Processor

NTO 7300

NTO 6212


Thanks to Ixia for the article.