5 Reasons Why You Should Include LAN Switches in Your NCCM Scope

We’ve been doing a lot of blogging around here lately about NCCM and the importance of having an automated configuration and change management system. We’ve even published a Best practices guide for NCCM. One of the main points in any NCCM system is having consistent and accurate configuration backups of all of your “key” devices.

When I ask Network Managers to name their key devices, they generally start with WAN / Internet routers and Firewalls. This makes sense of course because, in a modern large-scale network, connectivity (WAN / Internet routers) & security (Firewalls) tend to get most of the attention. However, we think that it’s important not to overlook core and access switching layers. After all, without that “front line” connectivity – the internal user cannot get out to the WAN/Internet in the first place.

With that in mind, today’s blog offers up 5 Reasons Why You Should Include LAN Switches in Your NCCM Scope.

1. Switch Failure

LAN switches tend to be some of the most utilized devices in a network. They also don’t generally come with the top-quality hardware and redundant power supplies that core devices have. In many cases, they may also be located in less-than-pristine locations. Dirty manufacturing floors, dormitory closets, remote office kitchens – I have seen access switches in all of these places. When you combine a heavy workload with tough conditions and less expensive parts, you have a recipe for devices that will fail at a higher rate.

So, when the time comes to replace or upgrade a switch, having its configuration backed up – along with a system which can automate the provisioning of the new hardware – can be a real time and workload saver. Just put the IP address and some basic management information on the new device, and the NCCM tool should be able to take care of the rest in mere minutes.

2. User Tracking

As the front line connectivity device for the majority of LAN users, the switch is the best place to track down user connections. You may want to know where a particular user is located, or maybe you are trying to troubleshoot an application performance issue; no matter what the cause, it’s important to have that connectivity data available to the IT department. NCCM systems may use layer 2 management data from CDP/LLDP as well as other techniques to gather this information. A good system will allow you to search for a particular IP/MAC/DNS and return connectivity information like which device/port it is connected to as well as when it was first and last seen on that port. This data can also be used to draw live topology maps which offer a great visualization of the network.
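To make the lookup described above concrete, here is a minimal sketch in Python of searching an NCCM-style layer 2 inventory by IP or MAC. The record fields, device names, and values are hypothetical illustrations, not taken from any particular NCCM product:

```python
from dataclasses import dataclass

@dataclass
class PortRecord:
    """One row of a CDP/LLDP- and MAC-table-derived connectivity inventory."""
    switch: str
    port: str
    mac: str
    ip: str
    first_seen: str
    last_seen: str

# Hypothetical inventory built by periodic polling of access switches
inventory = [
    PortRecord("sw-access-01", "Gi1/0/12", "00:1a:2b:3c:4d:5e",
               "10.1.20.15", "2015-06-01", "2015-08-10"),
    PortRecord("sw-access-02", "Gi1/0/03", "00:1a:2b:aa:bb:cc",
               "10.1.20.44", "2015-07-12", "2015-08-11"),
]

def locate(key: str):
    """Return every switch/port where a given MAC or IP has been seen."""
    return [r for r in inventory if key in (r.mac, r.ip)]

for r in locate("10.1.20.15"):
    print(f"{r.ip} -> {r.switch} {r.port} "
          f"(first seen {r.first_seen}, last seen {r.last_seen})")
```

A real system would populate the inventory automatically and index it for fast search, but the user-tracking question it answers is exactly this one: which device and port, and when.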

3. Policy Checking

Another area where the focus tends to be on “gateway” devices such as WAN routers and firewalls is policy checking. While those devices certainly should have lots of attention paid to them, especially in the area of security policies, we believe that it’s equally as important not to neglect the access layer when it comes to compliance. In general terms, there are two aspects of policy checking which need to be addressed on these devices: QoS policies and regulatory compliance policies.

The vast majority of VoIP and Video systems will connect to the network via a traditional LAN switch. These switches, therefore, must have the correct VLAN and QoS configurations in order to accurately forward the traffic in the appropriate manner so that Quality of Service is maintained.

If your organization is subject to regulatory compliance standards such as PCI or HIPAA, then these regulations are applicable to all devices and systems that connect to, or pass, sensitive data.

In both of these cases, it is incredibly important to ensure policy compliance on all of your devices, even the ones on the “edge” of your network.

4. Asset Lifecycle Management

Especially in larger and more spread out organizations, just understanding what you have can be a challenge. At some point (and always when you are least prepared for it) you will get the “What do we have?” question from a manager. An NCCM system is exactly the right tool to use to answer this question. Even though NCCM is generally considered to be the tool for change – it is equally the tool for information. Only devices that are well documented can be managed and that documentation is best supplied through the use of an automated inventory discovery system. Likewise, when it is time for a technology refresh, or even the build out of a new location or network, understanding the current state of the existing network is the first step towards building an effective plan for the future.

5. New Service Initiatives

Whether you are a large IT shop or a service provider – new applications and services are always coming. In many cases, they will require widespread changes to the infrastructure. The change may be small or large, but if it needs to be implemented on a number of systems at the same time, it will require coordination and automation to get it done efficiently and successfully. In some instances, this will only require changes to the core, but in many cases it will require changes to the switch infrastructure as well. This is what NCCM tools were designed to do, and there is no reason that you should be handcuffed in your efforts to implement change just because you haven’t added all of your devices into the NCCM platform.

Networks are complicated systems of many individual components spread throughout various locations with interdependencies that can be hard to comprehend without the help of network management tools. While the temptation may be to focus on the core systems, we think that it’s critical to view all parts, even the underappreciated LAN switch, as equal pieces to the puzzle and, therefore, should not be overlooked when implementing an NCCM system.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.


The Importance of State

Ixia recently added passive SSL decryption to the ATI Processor (ATIP). ATIP is an optional module in several of our Net Tool Optimizer (NTO) packet brokers that delivers application-level insight into your network with details such as application ID, user location, and handset and browser type. ATIP gives you this information via an intuitive real-time dashboard, filtered application forwarding, and rich NetFlow/IPFIX.

Adding SSL decryption to ATIP was a logical enhancement, given the increasing use of SSL for both enterprise applications and malware transfer – both things that you need to see in order to monitor and understand what’s going on. For security, especially, it made a lot of sense for us to decrypt traffic so that a security tool can focus on what it does best (such as malware detection).

When we were starting our work on this feature, we looked around at existing solutions in the market to understand how we could deliver something better. After working with both customers and our security partners, we realized we could offer added value by making our decrypted output easier to use.

Many of our security partners can deploy their systems either inline (traffic must pass through the security device, which can selectively drop packets) or out-of-band (the security device monitors a copy of the traffic and sends alerts on suspicious traffic). The flexibility to deploy in either topology means they’re built to handle fully stateful TCP connections, with a full TCP handshake, sequence numbers, and checksums. In fact, many will flag an error if they see something that looks wrong. It turns out that many passive SSL solutions on the market produce output that isn’t fully stateful, which can trigger those errors or require certain checks to be disabled.

What exactly does this mean? Well, a secure web connection starts with a 3-way TCP handshake (see this Wikipedia article for more details), typically on port 443, and both sides choose a random starting sequence (SEQ) number. This is followed by an additional TLS handshake that kicks off encryption for the application, exchanging encryption parameters. After the encryption is nailed up, the actual application starts and the client and server exchange application data.

When decrypting and forwarding the connection, some of the information from the original encrypted connection either doesn’t make sense or must be modified. Some information, of course, must be retained. For example, if the security device is expecting a full TCP connection, then it expects a full TCP handshake at the beginning of the connection – otherwise packets are just appearing out of nowhere, which is typically seen as a bad thing by security devices.

Next, in the original encrypted connection, there’s a TLS handshake that won’t make any sense at all if you’re reading a cleartext connection (note that ATIP does forward metadata about the original encryption, such as key length and cipher, in its NetFlow/IPFIX reporting). So when you forward the cleartext stream, the TLS handshake should be omitted. However, if you simply drop the TLS handshake packets from the stream, then the SEQ numbers (which keep count of transmitted packets from each side) must be adjusted to compensate for their omission. And every TCP packet includes a checksum that must also be recalculated around the new decrypted packet contents.
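The bookkeeping described above can be sketched in a few lines of Python. This is an illustration of the arithmetic, not ATIP’s actual implementation: sequence numbers wrap modulo 2^32, and the TCP checksum is the standard RFC 1071 ones’-complement sum.

```python
def adjust_seq(seq: int, omitted_bytes: int) -> int:
    """Shift a 32-bit TCP sequence number down to account for dropped bytes
    (e.g. an omitted TLS handshake), wrapping modulo 2**32."""
    return (seq - omitted_bytes) % 2**32

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum, as used in the TCP header.
    Must be recomputed over the new cleartext payload of each segment."""
    if len(data) % 2:
        data += b"\x00"          # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

# Example: a 1200-byte TLS handshake was omitted from the forwarded stream,
# so the next data segment's SEQ must move back by 1200 to leave no gap.
print(adjust_seq(5000, 1200))   # 3800
```

If either step is skipped – the SEQ shift or the checksum recalculation – a stateful security tool watching the cleartext stream will see what looks like lost packets or corruption, which is exactly the problem described above.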

If you open up the decrypted output from ATIP, you can see all of this adjustment has taken place. Here’s a PCAP of an encrypted Netflix connection that has been decrypted by ATIP:


You’ll see there are no out-of-sequence packets, and no indication of any dropped packets (from the TLS handshake) or invalid checksums. Also note that even though the encrypted connection was on port 443, this flow analysis shows a connection on port 80. Why? Because many analysis tools will expect encrypted traffic on port 443 and cleartext traffic on port 80. To make interoperability with these tools easier, ATIP lets you remap the cleartext output to the port of your choice (and a different output port for every encrypted input port). You might also note that Wireshark shows SEQ=0. That’s not the actual sequence number; Wireshark just displays a 0 for the first packet of any connection so you can use the displayed SEQ number to count packets.

The following ladder diagram might also help to make this clear:


To make Ixia’s SSL decryption even more useful, we’ve also added a few other new features. In the 1.2.1 release, we added support for Diffie-Hellman keys (previously, we only supported RSA keys), as well as Elliptic Curve ciphers. We’ve also added key encryption metadata to our NetFlow/IPFIX reporting:


As you can see, we’ve been busy working on our SSL solution, making sure we make it as useful, fast, and easy-to-use as possible. And there’s more great stuff on the way. So if you want to see new features, or want more information about our current products or features, just let us know and we’ll get on it.

More Information

ATI Processor Web Portal

Wikipedia Article: Transmission Control Protocol (TCP)

Wikipedia Article: Transport Layer Security (TLS)

Thanks to Ixia for the article.

Unique Features of Sapling’s IP Clock System


Sapling’s IP Clock System is unlike your typical clock system. This synchronized clock system is powered by Power-over-Ethernet (PoE), receiving both power and data through a single CAT5 cable, so there is no need for an additional outlet. The system allows for easy setup and operation for any user and has a number of unique features that make it stand out among the rest.

One such feature is that each clock has its own built-in web interface. The web interface allows a user to adjust many different settings or enable certain features on the clock. For example, a user can choose to set the time to display in 12- or 24-hour mode (digital clocks only), automatically update for Daylight Saving Time both domestically and internationally, and even give that particular clock a name – for example, its location in a facility.

Features such as the brightness option can help companies and organizations become more energy efficient. For example, a school is not a 24/7 operation. A school’s peak hours are typically between 7 a.m. and 5 p.m. – a ten-hour time frame in which the building has a constant flow of traffic, with dozens of eyeballs gazing at the clocks at one point or another. In those peak hours, the brightness is typically set to the high setting for maximum visibility. After school lets out for the day, the clocks aren’t being viewed as much. To conserve energy and money, Sapling gives a user the ability to adjust the brightness to medium, low or off (digital clocks only) after peak hours. This helps a school or any other type of facility save money over the life of the clock system. A user can also establish a brightness schedule for all the digital clocks within a facility.

Another feature of the IP clock system is the ability to send email alerts on any changes or disruptions that occur. A notification is sent directly to the operator of the clock system in the event of a power failure, major time change, NTP/SNTP server synchronization issue, display fault or mechanical failure.

Sapling prides itself on being at the forefront of technology, and our IP clock system – with a web interface that allows customization of many settings (brightness, automatic Daylight Saving Time updates, email alerts, etc.) – can help propel your company into the future. For more information, do not hesitate to contact us today!

Thanks to Sapling for the article.

Ready, Set, Go! Time Synchronization with Sapling Clocks


For high school students, the short break in between classes is no relaxing matter. They have only a few minutes to get from one classroom to the next. If a student needs to pick up books for the next class and must stop at their locker, the trip becomes even more challenging. Those few minutes in between classes can get hectic, with hundreds of students flooding the hallways sharing the common goal of reaching their next destination on time. Most students are not even aware of how much time they have to reach their next class, due to the discrepancy between the time their watches or cell phones display and the time the school clocks display.

A synchronized timekeeping system can make the trip between classes more efficient for both teachers and students. If the time displayed on the school’s clocks is accurate, then this discrepancy will have less of an impact on the school’s overall schedule. By eliminating this time dissimilarity, the number of students late for class can go down, and the number of students penalized for tardiness can go down as well.

Sapling’s master clock can receive accurate time from any NTP server or GPS satellite. Another feature of Sapling’s master clock is the ability to display a countdown in between classes for the roaming students. While classes are switching, the clocks will display a countdown instead of the time. This lets students know exactly how long they have to get from one class to another.

Punctual students make it easier for the teachers of the high school to get through their entire lesson plan. They can start their lessons without being interrupted and they do not have to punish the student(s) for being late. With the assistance of The Sapling Company and the addition of their synchronized clock systems, both teachers and students will have a less hectic day.

Thanks to Sapling Clocks for the article.

Bell Gigabit Fibe Launched to 1.3m Homes

Bell Canada yesterday announced the official first-phase launch of its 1Gbps-capable direct fibre-based broadband service Gigabit Fibe, available to approximately 1.3 million homes in locations across Quebec and Ontario, enabling access to speed tiers of 15Mbps, 25Mbps, 50Mbps, 150Mbps, 300Mbps and 940Mbps. Bell’s CTO Stephen Howe announced in a press release that Gigabit Fibe will be made available to a further 650,000 premises in the Atlantic provinces in September, and to 250,000 more in Quebec and Ontario during this year; by the beginning of 2016 the telco intends to cover around 2.2 million homes in its Gigabit fibre-to-the-home (FTTH) network footprint. Bell Fibe customers in Ontario and Quebec who subscribe to a multi-service bundle can upgrade to Gigabit Fibe speeds for an additional CAD10 (USD7.62) a month.

In Ontario, Gigabit Fibe is available in parts of Brampton, Kingston, Kitchener-Waterloo, Milton, Ottawa, Peterborough and some neighbourhoods in Toronto. In June, Bell announced a CAD1.14 billion investment to roll out fibre to more than one million homes and businesses across the City of Toronto, creating 2,400 direct jobs. Today, Gigabit Fibe is available to approximately 50,000 homes in the Toronto neighbourhoods of Regent Park, the Distillery District, Harbourfront and Willowdale.

The Gigabit Fibe footprint also covers homes in communities across Quebec, including Bell Canada’s first fully-covered fibre city, Quebec City (where it commercially launched FTTH services in March 2012), as well as locations in Beloeil, Blainville, Chambly, Chateauguay, Gatineau, Joliette, La Prairie, Laval, Levis, Magog, Repentigny, Saint-Constant, Saint-Eustache, Saint-Jean-sur-Richelieu, Saint-Jerome, Saint-Luc, Sherbrooke, Salaberry-de-Valleyfield, Sorel-Tracy, Terrebonne, Vaudreuil-Dorion and more than 85,000 homes in Montreal.

Thanks to TeleGeography for the article.

Advanced Packet Filtering with Ixia’s Advanced Filtering Modules (AFM)

An important factor in improving network visibility is the ability to pass the correct data to monitoring tools. Otherwise, it becomes very expensive and aggravating for most enterprises to sift through the enormous amount of packet data being transmitted. Bandwidth requirements are projected to continue increasing for the foreseeable future – so you may want to prepare now. As your bandwidth needs increase, complexity increases due to more equipment being added to the network, new monitoring applications, and data filtering rule changes driven by additional monitoring ports.

Network monitoring switches are used to counteract complexity with data segmentation. There are several features that are necessary to perform the data segmentation needed and refine the flow of data. The most important features needed for this activity are: packet deduplication, load balancing, and packet filtering. Packet filtering, and advanced packet filtering in particular, is the primary workhorse feature for this segmentation.

While many monitoring switch vendors have filtering, very few can perform the advanced filtering that adds real value for businesses. In addition, filtering rules can become very complex and require a lot of staff time to write initially and then to maintain as the network constantly changes. This is time and money wasted on tool maintenance instead of time spent on quickly resolving network problems and adding new capabilities to the network requested by the business.

Basic Filtering

Basic packet filtering consists of filtering the packets as they either enter or leave the monitoring switch. Filtering at the ingress will restrict the flow of data (and information) from that point on. This is most often the worst place to filter, as tools and functionality downstream from this point will never have access to that deleted data, and it eliminates the ability to share filtered data with multiple tools. However, ingress filtering is commonly used to limit the amount of data that is passed on to your tool farm, and/or for very security-sensitive applications that wish to filter non-trusted information as early as possible.

The following list provides common filter criteria that can be employed:

  • Layer 2
    • MAC address from packet source
    • VLAN
    • Ethernet Type (e.g. IPv4, IPv6, Apple Talk, Novell, etc.)
  • Layer 3
    • DSCP/ECN
    • IP address
    • IP protocol (ICMP, IGMP, GGP, IP, TCP, etc.)
    • Traffic Class
    • Next Header
  • Layer 4
    • L4 port
    • TCP Control flags

Filters can be set to either pass or deny traffic based upon the filter criteria.
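To make the pass/deny model concrete, here is a hypothetical sketch of rule evaluation against the criteria listed above. Real monitoring switches compile such rules into hardware at line rate; the field names here are purely illustrative:

```python
def make_filter(action: str, **criteria):
    """Build a rule that passes or denies packets matching ALL criteria."""
    def rule(packet: dict):
        if all(packet.get(k) == v for k, v in criteria.items()):
            return action
        return None  # rule does not apply to this packet
    return rule

# Hypothetical rule set: deny a quarantine VLAN, pass HTTPS traffic
rules = [
    make_filter("deny", vlan=99),
    make_filter("pass", ip_proto="TCP", l4_port=443),
]

def evaluate(packet: dict, default: str = "deny") -> str:
    """First matching rule wins; unmatched packets get the default action."""
    for rule in rules:
        verdict = rule(packet)
        if verdict:
            return verdict
    return default

print(evaluate({"vlan": 10, "ip_proto": "TCP", "l4_port": 443}))  # pass
```

The ordering matters: a packet on VLAN 99 is denied even if it is HTTPS, because the deny rule is evaluated first – the same first-match semantics most filtering hardware uses.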

Egress filters are primarily meant for fine tuning of data packets sent to the tool farm. If an administrator tries to use these for the primary filtering functionality, they can easily run into an overload situation where the egress port is overloaded and packets are dropped. In this scenario, aggregated data from multiple network ports may be significantly greater than the egress capacity of the tool port.
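A quick back-of-the-envelope check shows how easily this overload happens. The numbers below are a hypothetical example, but the arithmetic is the point: aggregating even a few network ports onto one tool port oversubscribes it at peak.

```python
# Hypothetical example: four 10G network ports feeding one 10G tool port.
network_ports_gbps = [10, 10, 10, 10]
tool_port_gbps = 10

peak_gbps = sum(network_ports_gbps)      # worst-case aggregate traffic
ratio = peak_gbps / tool_port_gbps
print(f"peak {peak_gbps}G into a {tool_port_gbps}G port: "
      f"{ratio:.0f}:1 oversubscribed")
```

Unless upstream filtering trims the aggregate below the egress line rate, anything beyond that capacity is simply dropped at the tool port.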

Advanced Filtering

Network visibility comes from reducing the clutter and focusing on what’s important when you need it. One of the best ways to reduce this clutter is to add a monitoring switch that can remove duplicated packets and perform advanced filtering to direct data packets to the appropriate monitoring tools and application monitoring products that you have deployed on your network. The fundamental factor to achieve visibility is to get the right data to the right tool to make the right conclusions. Basic filtering isn’t enough to deliver the correct insight into what is happening on the network.

But what do we mean by “advanced filtering”? Advanced filtering includes the ability to filter packets anywhere across the network by using very granular criteria. Most monitoring switches just filter on the ingress and egress data streams.

Besides ingress and egress filtering, operators need to perform packet processing functions as well, like VLAN stripping, VNtag stripping, GTP stripping, MPLS stripping, deduplication and packet trimming.

Ixia’s Advanced Feature Modules

The Ixia Advanced Feature Modules (AFM) help network engineers to improve monitoring tool performance by optimizing the monitored network traffic to include only the essential information needed for analysis. In conjunction with the Ixia Net Tool Optimizer (NTO) product line, the AFM module has sophisticated capability that allows it to perform advanced processing of packet data.

Advanced Packet Processing Features

  • Packet De-Duplication – A normally configured SPAN port can generate multiple copies of the same packet, dramatically reducing the effectiveness of monitoring tools. The AFM16 eliminates redundant packets, at full line rate, before they reach your monitoring tools. Doing so increases overall tool performance and accuracy.
  • Packet Trimming – Some monitoring tools only need to analyze packet headers. In other monitoring applications, meeting regulatory compliance requires that tools remove sensitive data from captured network traffic. The AFM16 can remove payload data from the monitored network traffic, which boosts tool performance and keeps sensitive user data secure.
  • Protocol Stripping – Many network monitoring tools have limitations when handling some types of Ethernet protocols. The AFM16 enables monitoring tools to monitor the required data by removing GTP, MPLS, and VNTag header labels from the packet stream.
  • GTP Stripping – Removes the GTP headers from a GTP packet leaving the tunneled L3 and L4 headers exposed. Enables tools that cannot process GTP header information to analyze the tunneled packets.
  • NTP/GPS Time Stamping – Some latency-sensitive monitoring tools need to know when a packet traverses a particular point in the network. The AFM16 provides time stamping with nanosecond resolution and accuracy.
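The de-duplication feature above can be illustrated with a simple software sketch. The AFM16 does this in hardware at full line rate; this hypothetical Python version just shows the idea of suppressing repeats of the same packet seen within a short window:

```python
import hashlib

def dedupe(packets, window=4):
    """Drop packets whose digest matches one of the last `window` packets.
    A digest of the packet bytes stands in for the invariant fields a
    hardware implementation would hash."""
    recent = []   # digests of the most recently seen packets
    unique = []
    for pkt in packets:
        digest = hashlib.sha1(pkt).digest()
        if digest not in recent:
            unique.append(pkt)
        recent.append(digest)
        recent = recent[-window:]   # keep only the sliding window
    return unique

# A SPAN port emitting the same packet more than once:
stream = [b"pkt-A", b"pkt-A", b"pkt-B", b"pkt-A", b"pkt-C"]
print(len(dedupe(stream)))  # 3
```

The windowing matters: a genuine retransmission seconds later should still reach the tools, so only near-simultaneous copies (the SPAN artifacts) are suppressed.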

Additional Resources:

Ixia Advanced Feature Modules

Ixia Visibility Architecture

Thanks to Ixia for the article.

What if Sony Used Ixia’s Application and Threat Intelligence Processor (ATIP)?

Trying to detect intrusions in your network and extracting data from your network is a tricky business. Deep insight requires a deep understanding of the context of your network traffic—where are connections coming from, where are they going, and what are the specific applications in use. Without this breadth of insight, you can’t take action to stop and remediate attacks, especially from Advanced Persistent Threats (APT).

To see how Ixia helps its customers gain this actionable insight into the applications and threats on their network, we invite you to watch this quick demo of Ixia’s Application and Threat Intelligence Processor (ATIP) in action. Chief Product Officer Dennis Cox uses Ixia’s ATIP to help you understand threats in real time, with the actual intrusion techniques employed in the Sony breach.


Additional Resources:

Ixia Application and Threat Intelligence Processor

Thanks to Ixia for the article.