Why Just Backing Up Your Router Config is the Wrong Thing To Do

One of the most fundamental best practices of any IT organization is to have a backup strategy and system in place for critical systems and devices. This is clearly needed for any disaster recovery situation and most IT departments have definitive plans and even practiced methodologies set in place for such an occurrence.

However, what many IT pros don’t always consider is how useful backups are for reasons other than DR, and the fact that for most network devices (and especially routers), it is not just the running configuration that should be saved. In fact, there are potentially hundreds of smaller pieces of information that, when properly backed up, can be used to help with ongoing operational issues.

First, let’s take a look at the traditional device backup landscape, and then let’s explore how this structure should be enhanced to provide additional services and benefits.

Unlike server hard drives, network devices like routers do not usually fall under the umbrella of the backup systems used for mass data storage. In most cases a specialized system must be put in place for these devices. Each network vendor has its own commands that must be used to access the device and request or download its configurations.

When looking at these systems, it is important to find out where the resulting configurations will be stored. If the system simply stores the data on an on-site appliance, then it is also critical to determine whether that appliance is itself backed up to an offsite, recoverable system; otherwise the backups are not useful in a DR situation where the backup appliance may also be offline.

It is also important to understand how many backups your system can hold. Can you only store the last 10 backups, or only everything from the last 30 days? Are these configurable options that you can adjust based on your retention requirements? This can be a critical component for audit reporting, as well as when you need to roll back to a previous state (which may not simply be the most recent one).
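To make the retention question concrete, here is a minimal Python sketch of what a configurable keep-the-last-N policy boils down to; the directory layout, file extension and retention count are illustrative assumptions, not any particular vendor’s implementation:

from pathlib import Path

RETAIN = 10  # hypothetical requirement: keep the last 10 backups per device

def prune_backups(device_dir, retain=RETAIN):
    # Sort saved configs newest-first by modification time
    backups = sorted(Path(device_dir).glob("*.cfg"),
                     key=lambda p: p.stat().st_mtime,
                     reverse=True)
    # Delete everything beyond the retention window
    for old in backups[retain:]:
        old.unlink()

prune_backups("/backups/router-core-01")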

Lastly, does the system offer a change report showing what differences exist between selected configurations? Can you see who made the changes and when?
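Even if your backup system lacks a built-in change report, a basic diff between two saved configurations is straightforward to produce. Here is a minimal Python sketch using the standard difflib module (the file names are hypothetical):

import difflib

with open("router-core-01_2015-07-01.cfg") as f:
    old = f.readlines()
with open("router-core-01_2015-07-02.cfg") as f:
    new = f.readlines()

# Lines prefixed with - were removed, lines prefixed with + were added
diff = difflib.unified_diff(old, new, fromfile="2015-07-01", tofile="2015-07-02")
print("".join(diff))

Note that a diff only shows what changed; answering who made the change and when still requires the system’s own audit trail (for example, AAA accounting logs).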

In addition to the “must haves” explored above, I also think there are some advanced features that really can dramatically improve the operational value of a device / router backup system. Let’s look at these below:

  • Routers and other devices are more than just their config files. Very often they can provide output that describes additional aspects of their operation. To use the common (Cisco-centric) terminology, you can also get and store the output of a “show” command. This may contain critical information about the device’s hardware, software, services, neighbors and more that cannot be seen from the configuration alone. It can be hugely beneficial to store this output as well, since it can be used to understand how the device is being used, what other devices are connected to it and more (see the sketch after this list).
  • Any device in a network, especially a core component such as a router, should conform to company-specific policies for things like access and security. Both the main configuration file and the output from these “show” commands can be used to check the device against any compliance policy your organization has in place.
  • All backups need to run both on a schedule (we generally see once per day as the most common schedule) and on an ad-hoc basis when a change is made. This second option is vital to maintaining an up-to-date backup system. Most changes to devices happen at some point during the normal work day. It is critical that your backup system can be notified (usually via a log message) that a change was made and then immediately launch a backup of the device – and potentially a policy compliance check as well.
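As a rough illustration of the first point above, here is a sketch of grabbing both the running configuration and a few “show” outputs from a Cisco IOS router over SSH, using the third-party netmiko library; the hostname, credentials and command list are placeholders:

from netmiko import ConnectHandler

conn = ConnectHandler(device_type="cisco_ios",
                      host="router-core-01.example.com",
                      username="backup-svc",
                      password="********")

# Capture the config plus operational state the config alone cannot show
outputs = {
    "running-config": conn.send_command("show running-config"),
    "version":        conn.send_command("show version"),
    "inventory":      conn.send_command("show inventory"),
    "neighbors":      conn.send_command("show cdp neighbors detail"),
}
conn.disconnect()

for name, text in outputs.items():
    with open("router-core-01_%s.txt" % name, "w") as f:
        f.write(text)

The same harness could be kicked off by a syslog listener to implement the change-driven backups described in the last point.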

Long gone are the days when simply logging into a router, getting the running configuration, and storing it in a text file was considered a “backup plan”. Critical network devices need the same attention paid to them as servers and other IT systems. Now is a good time to revisit your router backup systems and strategies and determine whether you are implementing a modern backup approach; as you can see, it’s not just about backing up your router config.

Thanks to NMSaaS for the article.

 

Videotron Tests 1Gbps Broadband

Quebec-based quadruple-play operator Videotron has confirmed plans to introduce broadband speeds of 1Gbps-plus on its cable network, up from its current top speed of 200Mbps. A press release said that Videotron is enabling households and businesses in the Greater Montreal area to reach ‘and even exceed’ 1Gbps, with testing of the new service ongoing in the city.

Thanks to TeleGeography for the article.

Encryption: The Next Big Security Threat

As is common in the high-tech industry, fixing one problem often creates another. The example I’m looking at today is network data encryption. Encryption capability, like Secure Sockets Layer (SSL), was devised to protect data packets from being read or corrupted by non-authorized users. It’s used on both internal and external communications, between servers as well as between servers and clients. Many companies (e.g. Google, Yahoo, WebEx, Exchange, SharePoint, Wikipedia, E*TRADE, Fidelity, etc.) have turned this on by default over the last couple of years.

Unfortunately, encryption is predicted to become the preferred vehicle of hackers who create malware and then use encrypted communications to propagate and update it. One current example is the Zeus botnet, which uses SSL communications to upgrade itself. Gartner Research stated in its report “Security Leaders Must Address Threats From Rising SSL Traffic” that by 2017, 50% of malware threats will arrive over SSL-encrypted traffic. This will create a serious blind spot for enterprises. Gartner also stated that less than 20% of firewall, UTM, and IPS deployments support decryption. Both of these statistics should be alarming to anyone involved in network security.

And it’s not just Zeus you need to look out for. There are several types of growing encrypted malware threats. The Gartner report went on to mention two more instances (one being a Boston Marathon newsflash) of encryption being misused by malware threats. Other examples exist as well: the Gameover Trojan, Dyre, and a new Upatre variant just found in April.

Another key point to understand is that “just turning on encryption” isn’t a simple, low-cost fix, especially when using the 2048-bit RSA keys that have been mandated since January 1, 2014. NSS Labs ran a study and found that the decryption capability in typical firewalls reduced their throughput by up to 74%. The study also found an average performance loss of 81% across all eight vendors evaluated. Turning on encryption/decryption capabilities will cost you, both in performance and in higher network costs.

And it gets worse! Firewalls, IPSs and other devices are usually deployed only at the edge of enterprise networks. Internal network communications, between server and server and between server and client, often go unexamined within many enterprises. These internal communications can be up to 80% of your encrypted traffic. Once malware gets into your network, it uses SSL to camouflage its activities. You’ll never know about it: data can be exfiltrated, viruses and worms can be released, or malicious code can be installed. This is why you need to look at internally encrypted traffic as well. Constant vigilance is now the order of the day.

One way to implement constant vigilance is for IT teams to spot check their network data for hidden threats. Network packet brokers (NPBs) that support application intelligence with SSL decryption are a good solution. Application intelligence is the ability to monitor packets based on application type and usage. It can be used to decrypt network packets and dynamically identify the applications running (along with malware) on a network. And since the decryption is performed on out-of-band monitoring data, there is no impact on production performance.
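For teams without an NPB in place, even a crude spot check of internal TLS endpoints can be a useful starting point. The Python sketch below simply fetches the certificate each internal server presents, so unexpected issuers or expired certificates can be flagged for a closer look; the hostnames are placeholders, and this is in no way a substitute for decrypting and inspecting the traffic itself:

import ssl

for host in ["intranet.example.com", "db01.example.com"]:
    try:
        # Returns the server's certificate as a PEM-encoded string
        pem = ssl.get_server_certificate((host, 443))
        print("%s presented a certificate (%d bytes of PEM)" % (host, len(pem)))
        # Hand the PEM to your inventory/monitoring tooling to check
        # issuer, expiry and key size
    except OSError as exc:
        print("%s: no TLS on 443 (%s)" % (host, exc))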

An easy answer to gain visibility, especially for internally encrypted traffic, is to deploy the Ixia ATI Processor. The Ixia ATI Processor uses bi-directional, stateful decryption, allowing you to look at both internal and external encrypted communications. Once the monitoring data is decrypted, application filtering can be applied and the information can be sent to dedicated, purpose-built monitoring tools (such as an IPS, IDS, SIEM, or network analyzer).

Ixia’s Application and Threat Intelligence (ATI) Processor, built for the NTO 7300 and also the NTO 6212 standalone model, brings intelligent functionality to the network packet broker landscape with its patent-pending technology that dynamically identifies all applications running on a network. This product gives IT organizations the insights needed to ensure the network works, every time and everywhere. This visibility extends past Layer 4 through Layer 7 and provides rich data regarding the behavior and locations of users and applications in the network.

As new network security threats emerge, the ATI Processor helps IT improve their overall security with better intelligence for their existing security tools. The ATI Processor correlates applications with geography and can identify compromised devices and malicious activities such as Command and Control (CNC) communications from malicious botnet activities. IT organizations can now dynamically identify unknown applications, identify security threats from suspicious applications and locations, and even audit for security policy infractions, including the use of prohibited applications on the network or devices.

To learn more, please visit the ATI Processor product page or contact us to see a demo!

Additional Resources:

Application and Threat Intelligence (ATI) Processor

NTO 7300

NTO 6212

Solution Focus Category

Network Visibility

Thanks to Ixia for the article.

Sapling Introduces New Slim Line Analog Clocks!


As a part of our commitment to innovation, Sapling is proud to introduce our new Slim Line Analog Clocks featuring a lower profile, ABS case. The Slim Line Analog Clocks are offered with a black case, a brushed aluminum finish, or a wooden case and are available with all four (IP, Wired, Wireless, and TalkBack) Sapling synchronized clock systems!

Highlights of the new Slim Line Analog Clocks include:

  • More case options!
  • Solid Wood Case with Cherry Finish
  • Brushed Aluminum Finish
  • New and easier wall mount installation
  • Available in 9″, 12″ and 16″ diameter
  • Black Slim Line Analog Clock and Brushed Aluminum Slim Line Analog Clock can be used with the new Sapling Time Zone Clock
  • Shatterproof polycarbonate crystal with flatter front design
  • Stylish dial and hands options

Sapling’s Slim Line Analog Clocks are expected to be available by the fourth quarter of 2015.

Thanks to Sapling for the article.

New GigaStor Portable 5x Faster


Set up a Mobile Forensics Unit Anywhere

On June 22, Network Instruments announced the launch of its new GigaStor Portable 10 Gb Wire Speed retrospective network analysis (RNA) appliance. The new portable configuration utilizes solid state drive (SSD) technology to stream traffic to disk at full line rate on full-duplex 10 Gb links without dropping packets.

“For network engineers, remotely troubleshooting high-speed networks used to mean leaving powerful RNA tools behind, and relying on a software sniffer and laptop to capture and diagnose problems,” said Charles Thompson, chief technology officer for Network Instruments. “The new GigaStor Portable enables enterprises and service providers with faster links to accurately and quickly resolve issues by having all the packets available for immediate analysis. Additionally, teams can save time and money by minimizing repeat offsite visits and remotely accessing the appliance.”

Quickly Troubleshoot Remote Problems

Without GigaStor Portable’s insight, engineers and security teams may spend hours replicating a network error or researching a potential attack before they can diagnose its cause. GigaStor Portable can be deployed to any remote location to collect and save weeks of packet-level data, which it can decode, analyze, and display. The appliance quickly sifts through data, isolates incidents, and provides extensive expert analysis to resolve issues.

Part of the powerful Observer Performance Management Platform, the GigaStor Portable 10 Gb Wire Speed with SSD provides 6 TB of raw storage capacity, and includes the cabling and nTAP needed to install the appliance on any 10 Gb network and start recording traffic right away.

Forensic capabilities are an important part of any network management solution. Learn more about GigaStor Portable and how RNA can help protect the integrity of your data.

Thanks to Network Instruments for the article.

NMSaaS Webinar – Stop paying for Network Inventory Software & let NMSaaS do it for FREE.

Please join NMSaaS CTO John Olson for a demonstration of our free Network Discovery, Asset & Inventory Solution.

Wed, Jul 29, 2015 1:00 PM – 1:30 PM CDT

Do any of these problems sound familiar?

  • My network is complex and I don’t really even know exactly what we have and where it all is.
  • I can’t track down interconnected problems
  • I don’t know when something new comes on the network
  • I don’t know when I need upgrades
  • I suspect we are paying too much for maintenance

NMSaaS is here to help.

Sign up for the webinar NOW > > >

In this webinar you will learn how you can receive the following:

  • Highly detailed complimentary Network Discovery, Inventory and Topology Service
  • Quarterly Reports with visibility into 100+ data points including:
    • Device Connectivity Information
    • Installed Software
    • VMs
    • Services / Processes
    • TCP/IP Ports in use
    • More…
  • Deliverables – PDF Report & Excel Inventory List

Thanks to NMSaaS for the article.

 

CRTC Mandates Wholesale Fibre Broadband Access

The Canadian Radio-television and Telecommunications Commission (CRTC) yesterday announced new measures in the wholesale fixed broadband market, including a ruling forcing the country’s largest internet providers to open up their high speed fibre-based access networks to smaller rivals. The regulator said the measures are to foster competition and provide Canadians with more choice and innovative services at reasonable prices.

Following an extensive review, the CRTC found that: ‘the large incumbent companies [including Bell Canada/Bell Aliant, Rogers Communications, Shaw Communications, Videotron and Telus Communications] continue to possess market power in the provision of wholesale high speed access services’ and it is therefore requiring that they make their new-generation services such as fibre-to-the-premises (FTTP) available to competitors. The summary continued: ‘The demand by Canadians for higher speed services will only increase in the coming years … Large incumbent companies will now have to make their fibre facilities available to their competitors. This measure will ensure that Canadians have more choice for high speed Internet services and are able to fully leverage the benefits of the broadband home or business.’

Another aspect of the CRTC’s decision involves scrapping the currently mandated ‘aggregated’ wholesale high speed access (HSA), which has enabled smaller competitors to lease a package of both the access facilities they need to connect to customer locations, and transport facilities, from larger incumbents, without requiring the smaller players to invest substantially in their networks. Under the new policy decision, the CRTC stated that: ‘The large incumbent companies will continue to be required to provide access to wholesale high speed access services throughout their region and transition this access to a disaggregated architecture. The provision of wholesale high speed access services on a disaggregated basis will be implemented in phases across Canada, starting with Ontario and Quebec.’ The regulator indicated that it will take up to three years to phase out aggregated access.

Furthermore, under the policy decision (Telecom Regulatory Policy CRTC 2015-326), copper unbundled local loops (ULLs), which in Canada the CRTC considers ‘a legacy service used primarily to support retail competition for local phone services and lower-speed Internet access’, will no longer be mandated and will be phased out.

The regulator also confirmed that ‘Ethernet and high speed competitor digital network services’, which are primarily used to support retail competition in the business data services market, will remain forborne from price regulation and not mandated.

Currently mandated wholesale services (ahead of the decision):

  • ULLs
  • Incumbent local exchange carriers’ (ILECs’) aggregated wholesale HSA service
  • Cablecos’ aggregated wholesale HSA service (also known as TPIA)
  • Interconnection services
  • ‘Public good’ services

Wholesale services mandated as a result of this decision:

  • ILEC and cableco disaggregated wholesale HSA services
  • FTTP access facilities
  • Interconnection services
  • Public good services

Wholesale services that are no longer mandated or that remain forborne from price regulation:

  • ULLs
  • Ethernet access and transport services
  • High speed CDN access and transport services.

The CRTC noted that its next steps include hammering out the details for implementing the wholesale HSA/transport service disaggregation, and setting the costs for wholesale fibre access.

Thanks to TeleGeography for the article.

NetFlow Auditor – Webinar July 30th – How to Deliver Immediate Cyber-Security Wins through Baseline Anomaly Detection

Managing the complexity of IT infrastructure is a major challenge for organizations, driven by growing compliance and regulatory mandates, the rising sophistication of cybercrime, and the increased virtualization of servers.

NetFlow Auditor’s Intelligent Baseline Diagnostics provides a dynamic approach to delivering qualified intelligence, helping cybersecurity and network professionals quickly mitigate potential impacts to networks and network-connected systems.

NetFlow Auditor is unique in its ability to consume and retain the full depth of network conversation metadata, beyond what other tools capture. Automated diagnostics coupled with detailed forensics deliver actionable, timely detection and intelligence, translating to faster identification of issues, a substantially improved mean time to know, a reduced mean time to repair, and less downtime.

Join John Olson of NetFlow Auditor on July 30th @ 2pm EST to learn:

  • How to de-risk and mitigate negative network impacts using perpetual network behavioral cognition and baselining.
  • How automated diagnostics accelerate problem determination to anticipate network security risks in real time.
  • How to extend NetFlow Auditor’s intelligent diagnostics to other monitoring tasks previously impossible via manual network analytics, and more…

Register for Webinar Here – http://hubs.ly/y0_Kzq0

Thanks to NetFlow Auditor for the article.

Server Log Files Do Not Always Have the Answer

In today’s world, security information and event management (SIEM) systems are hot technology. Some people deploy them because of compliance needs, others because they need data to troubleshoot problems. SIEM systems themselves are useless without sources of data, and most of them connect to server log files and other network devices. The problem is that server log files have real limitations when it comes to this kind of analysis.

A good example of this is ransomware. It is a big issue at the moment and most IT managers want to detect it and get rid of it as soon as possible. That can be challenging when you have hundreds, if not thousands, of users on your network.

Once ransomware gets into a network it starts to encrypt files, and every time it moves from one directory to another it leaves an instruction note in a text file that leads to a website or TOR network site. If an event could be triggered when these files are created, that would be an excellent start. However, as you can see in this sample event, no IP address is shown for the problematic device that is spreading the malware. This makes it difficult to block the device from accessing the network.

Event Type: Success Audit
Event Source:  Security
Event Category:  Object Access
Event ID:     560
Date:  2/24/2015
Time:    12:40:46 PM
User:  WIN2003DATABASE\Administrator
Computer:  WIN2003DATABASE
Description:
Object Open:
Object Server:  Security
Object Type:  File
Object Name: C:\Downloads\test.txt
Handle ID:   5128
Operation ID:  {0,2612512}
Process ID:  4
Image File Name: WIN2003DATABASE$
Primary User Name:
Primary Domain:  WORKGROUP
Primary Logon ID:  (0x0,0x3E7)
Client User Name: Administrator
Client Domain: WIN2003DATABASE
Client Logon ID: (0x0,0x2708B4)
Accesses:   SYNCHRONIZE
ReadAttributes
Privileges: –
Restricted Sid Count: 0
Access Mask: 0x100080
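
Since the event above does not identify the offending device, one crude but practical stopgap is to periodically sweep file shares for well-known ransom-note filenames and alert when any appear. Here is a minimal Python sketch; the note names and share path are illustrative, and a real deployment would need to match many more variants:

from pathlib import Path

NOTE_NAMES = {"DECRYPT_INSTRUCTION.TXT",
              "HELP_DECRYPT.TXT",
              "HOW_TO_DECRYPT_FILES.TXT"}

def find_ransom_notes(share_root):
    # Walk the share and yield any file whose name matches a known note
    for path in Path(share_root).rglob("*"):
        if path.name.upper() in NOTE_NAMES:
            yield path

for note in find_ransom_notes(r"\\fileserver\shared"):
    print("Possible ransomware note:", note)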

Some people suggest setting up SPAN or mirror ports, which are excellent data sources. The problem is that you may need to work through millions of packets to find useful information. Make no mistake about it: packet analysis can reveal crucial detail like IP addresses, as you can see in the image below.

[Image: packet analysis revealing the IP addresses involved]

You could now use log data and information from your Windows logs to build a complete picture. While this might be an option for a small network with a few clients, it does not scale well. The next option to consider is a system like LANGuardian, which does the packet analysis for you. It analyses the packets as they arrive from a SPAN or mirror port and extracts the important metadata: things like IP addresses, filenames and actions.
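To see what that kind of metadata extraction looks like at a small scale, here is a sketch using the third-party scapy library against a saved capture file. A product like LANGuardian does this continuously from the SPAN port; this is only an offline illustration, and the capture filename is a placeholder:

from scapy.all import IP, TCP, rdpcap

packets = rdpcap("span-capture.pcap")

# Pull out conversation-level metadata: who talked to whom, on what port
for pkt in packets:
    if IP in pkt and TCP in pkt:
        print(pkt[IP].src, "->", pkt[IP].dst, "port", pkt[TCP].dport)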


Systems like LANGuardian can then export this information via SYSLOG or other formats to other network management systems, which can then take action.
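Exporting this kind of metadata over SYSLOG is easy to picture; here is a minimal sketch using Python’s standard logging module, with the collector address and the sample record as placeholders:

import logging
import logging.handlers

logger = logging.getLogger("metadata-export")
logger.setLevel(logging.INFO)
# Send records to the SIEM/collector over UDP syslog (port 514)
logger.addHandler(logging.handlers.SysLogHandler(address=("siem.example.com", 514)))

# One record per extracted event, e.g. a file written over SMB
logger.info("smb-write src=10.1.1.25 file=\\\\fs01\\shared\\report.docx")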

Server log files do not always have the answer but there are other sources of data on your network.

Thanks to NetFort for the article.

CVE-2015-5119 and the Value of Security Research and Ethical Disclosure

The Hacking Team’s Adobe Flash zero-day exploit, CVE-2015-5119, was recently disclosed, along with several other exploits.

Hacking Team sells exploit and surveillance software to government and law enforcement agencies around the world. To keep its exploits working as long as possible, Hacking Team does not disclose them, so the underlying vulnerabilities remain open until some other researcher or hacker discovers and discloses them.

This particular exploit is a fairly standard, easily weaponizable use-after-free: a class of vulnerability in which a pointer to memory that has already been freed (and likely reallocated) is dereferenced, allowing for the diversion of program flow and potentially the execution of arbitrary code. At the time of this writing, weaponized versions of the exploit are known to be public.

What makes this particular set of exploits interesting is less how they work and what they are capable of (not that the damage they can do should be downplayed: CVE-2015-5119 is capable of gaining an administrative shell on the target machine) than the nature of their disclosure.

This highlights the importance of both security research and ethical disclosure. In a typical ethical disclosure, the researcher contacts the developer of the vulnerable product, discloses the vulnerability, and may even work with the developer to fix it. Once the product is fixed and the patch enters distribution, the details may be disclosed publicly, where they become useful learning tools for other researchers and developers, as well as inputs to signature development and other security monitoring processes. Ethical disclosure serves to make products and security devices better.

Likewise, security research itself is important. Without security research, ethical disclosure isn’t an option. While there is no guarantee that researchers will find the exact vulnerabilities held secret by the likes of Hacking Team, the probability goes up as the number and quality of researchers increases. Various incentives exist, from credit given by the companies and on vulnerability databases, to bug bounties, some of which are quite substantial (for instance, Facebook has awarded bounties as high as $33,500 at the time of this writing).

However, some researchers, especially independent ones, may be hesitant to disclose vulnerabilities, as there have been past cases where, rather than being rewarded for their efforts, they instead faced legal repercussions. This unfortunately discourages security research, allowing malicious use of exploits to go unchecked in these areas.

Even in events such as the sudden disclosure of Hacking Team’s exploits, security research was again essential. Almost immediately, the vendors affected began patching their software, and various security researchers developed penetration test tools, IDS signatures, and various other pieces of security related software as a response to the newly disclosed vulnerabilities.

Security research and ethical disclosure practices are tremendously beneficial for a more secure Internet. Continued use and encouragement of the practice can help keep our networks safe. Ixia’s ATI subscription program, which is releasing updates that mitigate the damage the Hacking Team’s now-public exploits can do, helps keep network security resilience at its highest level.

Additional Resources:

ATI subscription

Malwarebytes UnPacked: Hacking Team Leak Exposes New Flash Player Zero Day

Thanks to Ixia for the article.