The State of Enterprise Security Resilience – An Ixia Research Report

Ixia, an international leader in application performance and security resilience technology, conducted a survey to better understand how network security resilience solutions and techniques are used within the modern enterprise. While plenty of information exists on security products and threats, very little is available on how those products are actually used, or on the techniques and technology that ensure security is fully integrated into the corporate network architecture. This report presents the findings of that research.

The survey explored security and visibility architectures through three areas of emphasis. The first focused on understanding the types of products in use and how they are used. The second examined the processes in place. The third looked at the people component of typical architectures.

This report features several key findings that include the following:

  • Many enterprises and carriers are still highly vulnerable to the effects of a security breach, due to failures to follow best practices, process issues, lack of awareness, and lack of proper technology.
  • Lack of knowledge, not cost, is the primary barrier to security improvements. Even so, typical annual spend on network security worldwide is less than $100K.
  • Security resilience approaches are seeing growing worldwide adoption. A primary contributor is the merging of visibility and security architectures. Additional data shows that life-cycle security methodologies and security resilience testing are also positive contributors.
  • The top two main security concerns for IT are data loss and malware attacks.

These four key findings confirm that while there are still clear dangers to network security in the enterprise, there is some hope for improvement. The severity of the risk has not gone away, but it appears that some are managing it with the right combination of investment in technology, training, and processes.

To read more, download the report here.


Thanks to Ixia for the article.

5 Reasons Why You Must Back Up Your Routers and Switches

I’ve been working in the Network Management business for over 20 years, and in that time I have certainly seen my share of networks. Big and small, centralized and distributed, brand name vendor devices in shiny datacenters, and no-name brands in basements and bunkers. The one consistent surprise I continue to encounter is how many of these organizations (even the shiny datacenter ones) lack a backup system for their network device configurations.

I find that a little amazing, since I also can’t tell you the last time I talked to a company that didn’t have a backup system for their servers and storage systems. I mean, who doesn’t back up their critical data? It seems so obvious that hard drives need to be backed up in case of problems – and yet many of these same organizations, many of whom spend huge amounts of money on server backup, do not even think of backing up the critical configurations of the devices that actually move the traffic around.

So, with that in mind, I present 5 reasons why you must back up your routers and switches (and firewalls, WLAN controllers, load balancers, etc.).

1. Upgrades and new rollouts.

Network devices get swapped out all of the time. In many cases, these rollouts are planned and scheduled. At some point (if you’re lucky) an engineer will think about backing up the configuration of the old device before the replacement occurs. However, I have seen more than one case where this didn’t happen. In those cases, the old device is gone, and the new device needs to be reconfigured from scratch – hopefully with all of the correct configs. A scheduled backup solution makes these situations a lot less painful.
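For teams without a commercial tool, even a small script run from cron can take the sting out of this scenario. Below is a minimal sketch, assuming Python with the Netmiko library and SSH-reachable Cisco IOS devices; the hostnames and credentials are placeholders, and a full NCCM product would add scheduling, versioning, and diffing on top of this.

```python
# Minimal nightly config backup sketch (illustrative; hostnames and
# credentials are placeholders).
from datetime import datetime
from pathlib import Path

from netmiko import ConnectHandler  # assumes Netmiko is installed

DEVICES = [
    {"device_type": "cisco_ios", "host": "core-sw1.example.com",
     "username": "backup", "password": "changeme"},
]
BACKUP_DIR = Path("config-backups")


def backup_device(device: dict) -> Path:
    """Pull the running config over SSH and save it with a timestamp."""
    with ConnectHandler(**device) as conn:
        running_config = conn.send_command("show running-config")
    BACKUP_DIR.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    outfile = BACKUP_DIR / f"{device['host']}_{stamp}.cfg"
    outfile.write_text(running_config)
    return outfile


if __name__ == "__main__":
    for dev in DEVICES:
        print(f"Saved {backup_device(dev)}")
```

Run nightly from cron or a task scheduler, something like this leaves you with a dated copy of every config before the next planned swap-out.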

2. Disaster Recovery.

This is the opposite of the simple upgrade scenario. The truth is that many times a device is not replaced until it fails – especially those “forgotten” devices at the edge of the network, in ceilings, basements, and far-flung places. These systems rarely get much “love” until there is a problem. Then, suddenly, there is an outage – and in the scramble to get back up and running, a central repository of the device configuration can be a time (and life) saver.

3. Compliance

We certainly see this more in larger organizations, but it also becomes a real driving force in smaller companies that operate in highly regulated industries like banking and healthcare. If your company falls into one of those categories, then chances are you actually have a duty to back up your devices in order to stay within regulatory compliance. The downside of being non-compliant can be harsh. We have worked with companies that were being financially penalized for every day they were out of compliance with a number of policies, including failure to have a simple router/switch/firewall backup system in place.

4. Quick Restores.

Ask most network engineers and they will tell you – we’ve all had that “oops” moment when we were making a configuration change on the fly and realized just a second after hitting “enter” that we just broke something. Hopefully, we just took down a single device. Sometimes it’s worse than that and we go into real panic mode. I can tell you, it is that exact moment when we realize how important configuration backups can be. The restoration process can be simple and (relatively) painless, or it can be really, really painful; and it all comes down to whether or not you have good backups.

5. Policy Checking.

One of the often-overlooked benefits of backing up your device configurations is that it allows an NCCM system to automatically scan those backups and compare them to known good configurations in order to ensure compliance with company standards. Normally, this is a very tedious (and therefore ignored) process – especially in large organizations with many devices and many changes taking place. Sophisticated systems can quickly identify when a device configuration has changed, immediately back up the new config, and then scan that config to make sure it’s not violating any company rules. Regular scans can be rolled up into scheduled reports which provide management with a simple but important audit of all devices that are out of compliance.
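As a rough illustration of what such a scan does under the hood, the sketch below checks a saved backup against a handful of hypothetical company rules; the required and forbidden lines (and the file path) are invented examples, not NMSaaS policy content.

```python
# Illustrative policy check against a saved config backup (the rules are
# invented examples of "known good" standards).
REQUIRED_LINES = [
    "service password-encryption",
    "logging host 10.0.0.50",          # hypothetical central syslog server
]
FORBIDDEN_LINES = [
    "ip http server",                  # e.g. plain-HTTP management banned
    "snmp-server community public",
]


def check_policy(config_text: str) -> list:
    """Return human-readable violations found in one device configuration."""
    lines = config_text.splitlines()
    violations = []
    for required in REQUIRED_LINES:
        if not any(required in line for line in lines):
            violations.append(f"missing required line: {required}")
    for forbidden in FORBIDDEN_LINES:
        if any(forbidden in line for line in lines):
            violations.append(f"forbidden line present: {forbidden}")
    return violations


if __name__ == "__main__":
    with open("config-backups/core-sw1.example.com_latest.cfg") as f:  # hypothetical path
        for violation in check_policy(f.read()):
            print("VIOLATION:", violation)
```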

Bottom Line:

Routers, switches, and firewalls really are the heart of a network. Unless they are running smoothly, everything suffers. One of the simplest yet most effective practices for helping ensure the smooth operation of a network is to implement an automatic device configuration backup system.


Thanks to NMSaaS for the article. 

NetFlow Auditor – Best Practices for Network Traffic Analysis Webinar


Please join John Olson of NetFlow Auditor on Thursday, October 1st at 2pm EDT for the latest NetFlow Auditor webinar broadcast.

In this webinar, we will discuss our Top 10 Best Practices for using NetFlow data to analyze, report, and alert on multiple aspects of network traffic.

These best practices include:

  • Understanding the best places in your network to collect NetFlow data
  • How long you should retain your flow data
  • Why you should archive more than just the top 100 flows
  • When static thresholds should be replaced by adaptive thresholds (see the sketch after this list)
  • More…
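To illustrate the static-versus-adaptive point above, here is a generic sketch (not NetFlow Auditor's algorithm) of an adaptive threshold that learns a baseline from recent flow volumes and alerts only on statistically unusual spikes; the window size and sensitivity are arbitrary assumptions.

```python
# Generic adaptive-threshold illustration: alert when the current interval's
# traffic exceeds the recent mean by K standard deviations, rather than a
# fixed static limit.
from collections import deque
from statistics import mean, stdev

WINDOW = 60          # number of past intervals to learn from (assumption)
K = 3.0              # sensitivity in standard deviations (assumption)
history = deque(maxlen=WINDOW)


def check_sample(bytes_per_interval: float) -> bool:
    """Return True if this sample breaches the adaptive threshold."""
    breach = False
    if len(history) >= 10:                      # wait for a minimal baseline
        baseline, spread = mean(history), stdev(history)
        breach = bytes_per_interval > baseline + K * spread
    history.append(bytes_per_interval)
    return breach
```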


Thanks to NetFlow Auditor for the article.

CIO Review – Infosim Unified Solution for Automated Network Management

CIO Review

20 Most Promising Networking Solution Providers

Virtualization has become the lifeblood of the networking industry today. With the advent of technologies such as software-defined networking and network function virtualization, the black-box paradigm of the legacy networking model has been shattered. In the past, the industry witnessed networking technologies such as Fiber Distributed Data Interface (FDDI) eventually give way to Ethernet, the predominant network of choice; this provided opportunities to refresh infrastructures and create new networking paradigms. Today, we see a myriad of proprietary technologies competing to define next-generation networking models that are no longer static, opaque, or rigid.

Ushering in a new way of thinking and unlocking new possibilities, customers are increasingly demanding automation from network solution providers. The key requirement is an agile network controlled from a single source. Visibility into the network has also become a must-have, providing real-time information about the events occurring inside the network.

In order to enhance enterprise agility, improve network efficiency and maintain high standards of security, several innovative players in the industry are delivering cutting-edge solutions that ensure visibility, cost savings and automation in the networks. In the last few months we have looked at hundreds of solution providers who primarily serve the networking industry, and shortlisted the ones that are at the forefront of tackling challenges faced by this industry.

In our selection, we looked at each vendor’s capability to fulfill the burning needs of the sector through the supply of a variety of cost-effective and flexible solutions that add value to the networking industry. We present to you CIO Review’s 20 Most Promising Networking Solution Providers 2015.

Infosim Unified Solution for Automated Network Management

Today’s networking technology, though very advanced, faces a major roadblock—the lack of automation in network management products. “These products are incapable of delivering a truly unified management approach as they are not an integrated solution but merely a collection of different programs bound together under one GUI to give them the appearance of an integrated solution,” notes Jim Duster, CEO, Infosim. Moreover, the need to continuously update new device information, configuration changes, and alerts and actions across these different toolsets contributes to an ongoing financial burden for enterprises. Addressing these concerns with a unique network management solution is Infosim, a manufacturer of Automated Service Fulfillment and Service Assurance solutions.

Infosim offers StableNet, a unified solution developed and designed to cover performance management, fault management, and configuration management, with software engineered on a single code base and a consistent data model underneath. “StableNet is the only “suite” within the network performance management software industry,” claims Duster. The solution addresses the existing operational and technical challenges of managing distributed, virtualized, and mission-critical IT infrastructures. “With this approach, we are able to create work flows in every unique customer business and industry to cover many processes efficiently,” he adds. For instance, StableNet monitors the production equipment of a manufacturing company. In case of an equipment failure, the error is reported and StableNet delivers the root cause of the problem while notifying an external service provider. The service provider’s technician can open an inspection window in StableNet, exchange the defective device, and after repair can provide feedback to the customer’s operations center.

To support flexible deployment of StableNet, the company offers the Infosim StableNet appliance, a high-performance, preconfigured, security-hardened hardware platform. “Appliances related to StableNet series reduce Total Cost of Ownership (TCO) by simplifying deployment, consolidating network infrastructure, and providing an extensible platform that can scale with your organization,” states Duster. StableNet also provides a low-cost agent platform called the StableNet Embedded Agent (SNEA) that enables highly distributed installations to support End-to-End (E2E) Visibility, Cloud Monitoring, and the Internet of Things. SNEA deployment is economical, and agents are auto-discovered at tactical collection points in networks, resulting in a low TCO for collecting and processing network performance actions and alerts.

Infosim StableNet is deployed across the networks of major players in the Telco and Enterprise markets, including that of a German auto manufacturer. Acting as the client’s centralized system, StableNet reduced their toolset from over 10 disparate software and hardware offerings from multiple suppliers to fewer than four. This significantly reduced TCO while increasing service levels. “Siloed IT personnel who used to hide behind non-consolidated results from their individual solutions were all synchronized into one solution, speeding productivity, collaboration and communication,” states Duster.

Infosim is currently participating in advanced research projects on Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) with several universities and leading industry device manufacturers. “The company applies 50 percent of its new software development resources to customer needs, which assists customers in achieving milestones in vendor-agnostic device support additions, industry-specific capabilities, and features that were envisioned by real users,” asserts Duster.

For the years ahead, Infosim plans to build upon its product capability by automating the processes and activities that produce guaranteed service levels and reduce the consumption of human resources in the Network Operations Center (NOC). “Our vision is to enable the Dark NOC, which means a large percent of the non-value adding activities of network engineers can be automated in our product, freeing network engineers to work on proactive new innovations and concepts,” Duster concludes.

Thanks to Infosim for the article.

Sapling’s Master Clock – Leader of the Pack – Part 2

In continuing with our post explaining how Sapling’s master clocks can receive time, the second option a user has for receiving time is through a GPS receiver.

Receiving time from a GPS satellite is not only extremely accurate, it is also very secure. With the GPS receiver, the facility does not have to go outside its established firewall to use a time source on the Internet.

A GPS receiver is built into the master clock and uses signals from GPS satellites around the world to determine the master clock’s exact location. From those satellite signals, the master clock obtains accurate UTC (Coordinated Universal Time), then corrects to local time based on the user’s location.

The GPS receiver option can be used in conjunction with the NTP Server option, which you can learn about here. A user has the ability to choose which option will be the main time source and which will be the backup time source. By utilizing both options, a user will have redundancy within their system, ensuring accuracy. Receiving time via GPS is optional with a Sapling master clock.

Stay tuned next week for the third way Sapling’s master clocks can receive time! If you are interested in learning more about our master clocks in the meantime, visit our website for more info or check out our YouTube video explaining our master clocks more in depth.

Thanks to Sapling for the article.

NetFort 12.4 – Network Traffic and Security Monitoring


New Version of NetFort LANGuardian Provides Customers with a Single Point of Reference for Network Traffic and Security Monitoring.

NetFort, a leading provider of network traffic and security monitoring (NTSM) solutions, today unveiled version 12.4 of the LANGuardian application. The new version ensures that network teams have the visibility required to collaborate with their security colleagues and manage the daily security issues prevalent in today’s world.

Version 12.4 includes a number of significant changes:

  • SMTP Email Decoder Enhancements
  • HTTPS Website Use Reporting
  • Updated BitTorrent Decoder
  • Snort 2.9
  • SYSLOG Forwarding Feature

SMTP Email Decoder Enhancements

The SMTP decoder is a great feature from a network security monitoring point of view. It is a powerful tool if you want to monitor email for phishing-type network attacks. Malicious attachments have made a comeback as a top attack vector; an interesting post on this is available here. The SMTP decoder has been upgraded to record the following information (a minimal parsing sketch follows the list):

  • Attachments to SMTP emails, including attachment name, MIME type, and description. A sample report is shown below; some information is blurred as it came from a live network.
  • Embedded hyperlink detection in emails. This is a beta release for evaluation. Where an SMTP email contains a hyperlink but the link target doesn’t seem to match the description, LANGuardian will log the link target and the description.
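As a rough idea of the kind of metadata such a decoder records, the sketch below pulls attachment names and MIME types out of a raw SMTP message using Python's standard email library; the sample file name is hypothetical and this is not LANGuardian code.

```python
# Sketch of extracting attachment metadata from a raw SMTP message with the
# Python standard library (illustrative only; "sample.eml" is hypothetical).
from email import policy
from email.parser import BytesParser


def attachment_metadata(raw_message: bytes) -> list:
    """Return the file name and MIME type of each attachment."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    return [
        {"filename": part.get_filename(), "mime_type": part.get_content_type()}
        for part in msg.iter_attachments()
    ]


if __name__ == "__main__":
    with open("sample.eml", "rb") as f:
        for attachment in attachment_metadata(f.read()):
            print(attachment)
```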

[Screenshot: sample report of SMTP email attachments]

HTTPS Website Use Reporting

The website monitoring module has been upgraded to report on HTTPS domains. Domain information (such as https://facebook.com) and traffic volumes are recorded. As packet payloads are encrypted, individual URIs cannot be reported.


Updated BitTorrent Decoder

BitTorrent continues to be a popular protocol for downloading and uploading media over the Internet. LANGuardian has the ability to detect BitTorrent use and record metadata such as infohash values and IP addresses. In 12.4 the BitTorrent decoder has been upgraded to record Peer Exchange (PEX) messages. This increases the detection rate for BitTorrent activity and will record media titles, if included in the PEX message.


Snort 2.9

Snort is a network-based intrusion detection system (NIDS) with the ability to perform real-time traffic analysis and packet logging; it performs protocol analysis, content searching, and matching. LANGuardian 12.4 now includes Snort version 2.9.7. This allows LANGuardian to take advantage of new keywords supported in IDS signatures for Snort 2.9, distributed by the ET Open project.

SYSLOG Forwarding Feature

Many customers choose LANGuardian as it can integrate with existing tools like SolarWinds, McAfee or WhatsUp. Version 12.4 extends this functionality with the addition of a new configuration page to manage the forwarding of events to external syslog collector (SIEM) systems.
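For context, forwarding an event to a syslog collector is conceptually as simple as the sketch below, which uses the Python standard library; LANGuardian handles this through its configuration page rather than code, and the collector address shown is a placeholder.

```python
# Generic syslog-forwarding sketch; the SIEM collector address is a placeholder.
import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("netmon")
logger.setLevel(logging.INFO)
logger.addHandler(SysLogHandler(address=("siem.example.com", 514)))  # UDP by default

# Example event in a simple key=value style that most SIEMs can parse.
logger.info("event=bittorrent_detected src=10.1.2.3 dst=203.0.113.7")
```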


This means you end up with a centralized dashboard for all network activity – or, as one customer described it, a “single point of reference for network and user activity monitoring and first stop in troubleshooting any issues.”


Version 12.4 is available from our download page and it can be deployed on physical or virtual platforms.

Download a free trial of NetFort 12.4 or view the web demo.

Thanks to Netfort for the article.

 

3 Reasons for Real Time Configuration Change Detection

So far, we have explored what NCCM is and taken a deep dive into device policy checking – in this post we are going to explore Real Time Configuration Change Detection (or just Change Detection, as I will call it in this blog). Change Detection is the process by which your NCCM system is notified – either directly by the device or by a 3rd-party system – that a configuration change has been made on that device. Why is this important? Let’s identify 3 main reasons that Change Detection is a critical component of a well-deployed NCCM solution.


1. Unauthorized change recording.

As anyone who works in an enterprise IT department knows, changes need to be made in order to keep systems updated for new services, users, and so on. Most of the time, changes are (and should be) scheduled in advance, so that everyone knows what is happening, why the change is being made, when it is scheduled, and what the impact will be on running services.

However, the fact remains that anyone with the correct passwords and privilege level can usually log into a device and make a change at any time. Engineers that know the network and feel comfortable working on the devices will often just login and make “on-the-fly” adjustments that they think won’t hurt anything. Unfortunately as we all know, those “best intentions” can lead to disaster.

That is where Change Detection can really help. Once a change has been made, it will be recorded by the device and a log can be transmitted either directly to the NCCM system or to a 3rd-party logging server, which then forwards the message to the NCCM system. At the most basic level this means that if something does go wrong, there is an audit trail which can be investigated to determine what happened and when. It can also potentially be used to roll back the changes to a known good state.

2. Automated actions.

Once a change has been made (scheduled or unauthorized) many IT departments will wish to perform some automated actions immediately at the time of change without waiting for a daily or weekly schedule to kick in. Some of the common automated activities are:

  • Immediate configuration backup, so that all new changes are recorded in the backup system.
  • Launch of a new discovery. If the change involved any hardware or OS changes, like a version upgrade, then the NCCM system should also re-discover the device so that the asset system has up-to-date information about it.

These automation actions can ensure that the NCCM and other network management applications are kept up to date as changes are being made without having to wait for the next scheduled job to start. This ensures that any other systems are not acting “blindly” when they try to perform an action with/on the changed device.

3. Policy Checking.

New configuration changes should also prompt an immediate policy check of the system to ensure that the change did not inadvertently breach a compliance or security rule. If a policy has been broken, then a manager can be notified immediately. Optionally, if the NCCM system is capable of remediation, then a rollback or similar operation can happen to bring the system back into compliance immediately.

Almost all network devices are capable of logging hardware/software/configuration changes. Most of the time these can easily be exported in the form of an SNMP trap or syslog message. A good NCCM system can receive these messages, parse them to understand what has happened, and – if the log signifies that a change has taken place – take action(s) as described above. This real-time configuration change detection mechanism is a staple of an enterprise NCCM solution and should be implemented in all organizations where network changes are commonplace.
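As a conceptual illustration (not NMSaaS code), the sketch below listens for syslog messages over UDP and flags Cisco-style “%SYS-5-CONFIG_I” configuration-change events; in a real NCCM deployment this trigger would kick off the backup, re-discovery, and policy-check actions described above.

```python
# Conceptual change-detection listener (illustrative only): receive syslog
# over UDP/514 and flag Cisco-style configuration-change messages.
import re
import socketserver

CONFIG_CHANGE = re.compile(r"%SYS-5-CONFIG_I")


class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        message = self.request[0].decode(errors="replace")   # UDP payload
        source_ip = self.client_address[0]
        if CONFIG_CHANGE.search(message):
            # A real NCCM system would now trigger an immediate backup,
            # re-discovery, and policy check of the changed device.
            print(f"Config change detected on {source_ip}: {message.strip()}")


if __name__ == "__main__":
    # Binding to port 514 typically requires elevated privileges.
    with socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler) as server:
        server.serve_forever()
```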


Thanks to NMSaaS for the article.

Xplornet Explores IPO, Sources Say

Bloomberg reports that Canadian rural broadband specialist Xplornet Communications is working with Bank of Montreal and Royal Bank of Canada on an initial public offering (IPO) planned for this year, according to anonymous sources with knowledge of the matter who added that the New Brunswick-based fixed-wireless ISP has not yet decided on the stake size or price range. In May Xplornet’s management expressed the desire to examine new ways to continue growing, including a potential IPO, after revealing existing investors had put roughly CAD1 billion (USD758 million) into the company over the past decade.

Thanks to TeleGeography for the article.

The Importance of State

Ixia recently added passive SSL decryption to the ATI Processor (ATIP). ATIP is an optional module in several of our Net Tool Optimizer (NTO) packet brokers that delivers application-level insight into your network with details such as application ID, user location, and handset and browser type. ATIP gives you this information via an intuitive real-time dashboard, filtered application forwarding, and rich NetFlow/IPFIX.

Adding SSL decryption to ATIP was a logical enhancement, given the increasing use of SSL for both enterprise applications and malware transfer – both things that you need to see in order to monitor and understand what’s going on. For security, especially, it made a lot of sense for us to decrypt traffic so that a security tool can focus on what it does best (such as malware detection).

When we were starting our work on this feature, we looked around at existing solutions in the market to understand how we could deliver something better. After working with both customers and our security partners, we realized we could offer added value by making our decrypted output easier to use.

Many of our security partners can either deploy their systems inline (traffic must pass through the security device, which can selectively drop packets) or out-of-band (the security device monitors a copy of the traffic and sends alerts on suspicious traffic). Their flexible ability to deploy in either topology means they’re built to handle fully stateful TCP connections, with full TCP handshake, sequence numbers, and checksums. In fact, many will flag an error if they see something that looks wrong. It turns out that many passive SSL solutions out there produce output that isn’t fully stateful and can flag errors or require disabling of certain checks.

What exactly does this mean? Well, a secure web connection starts with a 3-way TCP handshake (see this Wikipedia article for more details), typically on port 443, and both sides choose a random starting sequence (SEQ) number. This is followed by an additional TLS handshake that kicks off encryption for the application, exchanging encryption parameters. After the encryption is nailed up, the actual application starts and the client and server exchange application data.

When decrypting and forwarding the connection, some of the information from the original encrypted connection either doesn’t make sense or must be modified. Some information, of course, must be retained. For example, if the security device is expecting a full TCP connection, then it expects a full TCP handshake at the beginning of the connection – otherwise packets are just appearing out of nowhere, which is typically seen as a bad thing by security devices.

Next, in the original encrypted connection, there’s a TLS handshake that won’t make any sense at all if you’re reading a cleartext connection (note that ATIP does forward metadata about the original encryption, such as key length and cipher, in its NetFlow/IPFIX reporting). So when you forward the cleartext stream, the TLS handshake should be omitted. However, if you simply drop the TLS handshake packets from the stream, then the SEQ numbers (which keep count of transmitted bytes from each side) must be adjusted to compensate for their omission. And every TCP packet includes a checksum that must also be recalculated around the new decrypted packet contents.
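To make that bookkeeping concrete, here is a conceptual sketch using Scapy – not Ixia's implementation – that drops TLS handshake packets from a capture, shifts later sequence numbers by the number of omitted payload bytes, and forces checksum recalculation. The handshake classifier is a crude heuristic, the file names are hypothetical, and a real implementation would track each direction separately and fix ACK numbers as well.

```python
# Conceptual Scapy sketch: omit TLS handshake packets, shift later sequence
# numbers by the omitted payload bytes, and force checksum recalculation.
# One direction only, for brevity.
from scapy.all import IP, TCP, rdpcap, wrpcap


def is_tls_handshake(payload: bytes) -> bool:
    """Crude heuristic: TLS handshake records start with content type 0x16."""
    return len(payload) > 0 and payload[0] == 0x16


packets = rdpcap("decrypted_flow.pcap")       # hypothetical input capture
omitted_bytes = 0
kept = []

for pkt in packets:
    if TCP not in pkt:
        kept.append(pkt)
        continue
    payload = bytes(pkt[TCP].payload)
    if is_tls_handshake(payload):
        omitted_bytes += len(payload)         # remember how much was removed
        continue                              # drop the handshake packet
    pkt[TCP].seq -= omitted_bytes             # close the gap left behind
    del pkt[IP].chksum                        # deleting forces recalculation
    del pkt[TCP].chksum
    kept.append(pkt)

wrpcap("stateful_output.pcap", kept)
```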

If you open up the decrypted output from ATIP, you can see all of this adjustment has taken place. Here’s a PCAP of an encrypted Netflix connection that has been decrypted by ATIP:

[Screenshot: decrypted Netflix connection opened in Wireshark]

You’ll see there are no out-of-sequence packets, and no indication of any dropped packets (from the TLS handshake) or invalid checksums. Also note that even though the encrypted connection was on port 443, this flow analysis shows a connection on port 80. Why? Because many analysis tools will expect encrypted traffic on port 443 and cleartext traffic on port 80. To make interoperability with these tools easier, ATIP lets you remap the cleartext output to the port of your choice (and a different output port for every encrypted input port). You might also note that Wireshark shows SEQ=0. That’s not the actual sequence number; Wireshark just displays a 0 for the first packet of any connection so you can use the displayed SEQ number to count packets.

The following ladder diagram might also help to make this clear:

[Diagram: ladder diagram of the original encrypted connection and the forwarded cleartext stream]

To make Ixia’s SSL decryption even more useful, we’ve also added a few other new features. In the 1.2.1 release, we added support for Diffie-Hellman keys (previously, we only supported RSA keys), as well as Elliptic Curve ciphers. We’ve also added reporting of key encryption metadata in our NetFlow/IPFIX reporting:

[Screenshot: encryption metadata reported via NetFlow/IPFIX]

As you can see, we’ve been busy working on our SSL solution, making sure we make it as useful, fast, and easy-to-use as possible. And there’s more great stuff on the way. So if you want to see new features, or want more information about our current products or features, just let us know and we’ll get on it.

More Information

ATI Processor Web Portal

Wikipedia Article: Transmission Control Protocol (TCP)

Wikipedia Article: Transport Layer Security (TLS)

Thanks to Ixia for the article.

Virtualization Visibility


See Virtual with the Clarity of Physical

The cost-saving shift to virtualization has challenged network teams to maintain accurate views. While application performance is often the first casualty when visibility is reduced, the right solution can match and in some cases even exceed the capabilities of traditional monitoring strategies.

Virtual Eyes

Network teams are the de facto “first responders” when application performance degrades. For this reason, it’s critical to maintain visibility into and around all virtual constructs for effective troubleshooting and optimal service delivery. Otherwise, much of the value of server virtualization and consolidation efforts may be offset by sub-par application performance.

Fundamentally, achieving comprehensive visibility of a virtualized server environment requires an understanding of the health of the underlying resources, including the host, hypervisor, and virtual switch (vSwitch), along with perimeter, client, and application traffic.

In addition, unique communication technologies like VXLAN and Cisco FabricPath must be supported for full visibility into the traffic in these environments. Without this support, network analyzers cannot gain comprehensive views into virtual data center (VDC) traffic.

Step One: Get Status of Host and Virtualization Components

The host, hypervisor, and vSwitch are the foundation of the entire virtualization effort so their health is crucial. Polling technologies such as SNMP, WSD, and WMI can provide performance insight by interrogating the host and various virtualized elements. A fully-integrated performance management platform can not only provide these views, but also display relevant operating metrics in a single, user-friendly dashboard.

Metrics like CPU utilization, memory usage, and virtualized variables like individual VM instance status are examples of accessible data. Often, these parameters can point to the root cause of service issues that may otherwise manifest themselves indirectly.
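As a small illustration of this kind of polling, the sketch below reads a host's CPU load over SNMP using the pysnmp library; the hostname and community string are placeholders, and an integrated performance management platform performs this collection, correlation, and dashboarding for you.

```python
# Illustrative SNMP poll of a host's CPU load with pysnmp; the target host
# and community string are placeholders.
from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType, SnmpEngine,
    UdpTransportTarget, getCmd,
)

HOST = "hypervisor01.example.com"                 # hypothetical host
CPU_LOAD_OID = "1.3.6.1.2.1.25.3.3.1.2.1"         # HOST-RESOURCES-MIB hrProcessorLoad.1

error_indication, error_status, _, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),       # SNMPv2c read community
        UdpTransportTarget((HOST, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(CPU_LOAD_OID)),
    )
)

if error_indication or error_status:
    print("Poll failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}% CPU load")
```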


For example, poor response time of an application hosted on a virtualized server may have nothing to do with the service or the network, but may instead be tied to excessively high CPU utilization. Without this monitoring perspective, troubleshooting will be more difficult and time consuming.

Next Steps

Virtualization and consolidation offer significant upside for today’s dynamic data center model and for achieving optimal IT business service delivery. However, monitoring visibility must be maintained so potential application degradation issues can be detected and resolved before impacting the end user.

To learn more about how your team can achieve the same visibility in virtualized environments as you do in physical environments, download the complete 3 Steps to Server Virtualization Visibility White Paper now.

Thanks to Viavi Solutions for the article.