GigaStor Eclipses The Competition

Now capable of storing 96 TB of network and application traffic for analysis in a single chassis, the GigaStor Expandable scales beyond a petabyte with additional units. It is the highest-capacity retrospective network analysis solution on the market today.*


To put the size of a petabyte into perspective: if the average smartphone camera photo is 3 MB and the average printed photo is 8.5 inches wide, then a petabyte of photos placed side by side would stretch over 48,000 miles, almost far enough to wrap around the equator twice.
Source: ComputerWeekly.com
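
The arithmetic holds up. Here is a minimal Python sketch of the calculation, assuming the binary definition of a petabyte (2^50 bytes) and the figures quoted above:

```python
# Back-of-the-envelope check of the petabyte-of-photos comparison above.
# Assumes a binary petabyte (2**50 bytes); the photo size and print width
# are the article's figures.

PETABYTE_BYTES = 2 ** 50        # 1 PiB
PHOTO_BYTES = 3 * 10 ** 6       # 3 MB per smartphone photo
PHOTO_WIDTH_IN = 8.5            # width of a printed photo, in inches
INCHES_PER_MILE = 63_360
EQUATOR_MILES = 24_901          # Earth's equatorial circumference

photos = PETABYTE_BYTES / PHOTO_BYTES
miles = photos * PHOTO_WIDTH_IN / INCHES_PER_MILE

print(f"{photos:,.0f} photos -> {miles:,.0f} miles "
      f"({miles / EQUATOR_MILES:.2f}x the equator)")
# ~375 million photos -> ~50,000 miles, roughly twice around the equator
```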

Form Factor: Expandable
Deployment: Data Center, Server Access Layer, Long-Term Retention
Capacity: 96 TB to over 1 PB
Networks: 1 Gb, 10 Gb & 40 Gb
Rack Size: 5U

GigaStor Expandable capacity starts at 96 TB and grows in 96 TB increments to 288 TB, then in 288 TB increments to 576 TB, 864 TB, and finally over a petabyte of packet capture storage. The Expandable product line offers in-the-field scalability to meet growing performance monitoring needs on gigabit, 10 Gb, and 40 Gb links.
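
Spelled out, that tier progression is easy to enumerate; a small sketch (the 1,200 TB cap is arbitrary, chosen just to stop the loop):

```python
# Enumerate the GigaStor Expandable capacity tiers described above:
# 96 TB steps up to 288 TB, then 288 TB steps from there.

def expandable_tiers(max_tb: int = 1200) -> list[int]:
    tiers = [96, 192, 288]                # 96 TB increments to 288 TB
    while tiers[-1] + 288 <= max_tb:
        tiers.append(tiers[-1] + 288)     # thereafter 288 TB increments
    return tiers

print(expandable_tiers())  # [96, 192, 288, 576, 864, 1152]
# the final value is just over a petabyte of packet capture storage
```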

GigaStor provides network forensics and data retention solutions to network teams for retrospective network analysis and troubleshooting. Using GigaStor’s analytics, users navigate to the exact moment a problem occurred, view packet-level details around the event, and resolve the issue. GigaStor, along with other Observer Platform solutions, saves significant time in troubleshooting and eliminates the need to recreate performance problems.

*Compared to primary vendors Netscout, Riverbed, and Fluke, Network Instruments’ GigaStor provides the largest long-term packet capture and storage capacity in a single appliance.

Thanks to Network Instruments for the article.

Shaw’s Quarterly Revenues Inch Up 1%

Consolidated revenue at Canadian pay-TV, broadband and telephony operator Shaw Communications climbed 1% year-on-year to CAD1.34 billion (USD1.25 billion) in the three months ended 31 May 2014 – the company’s fiscal third quarter. Total operating income before restructuring costs and amortisation of CAD601 million improved 3% over the year-ago quarter, while Shaw posted consolidated quarterly net profit of CAD228 million for the three-month period, compared to CAD250 million a year earlier. Shaw’s cable TV customers reached 1,977,795 at the end of May (down by 12,075 during fiscal Q3), while cable internet customers increased by 12,399 to 1,918,418. Cable digital phone lines also rose, by 4,834 to 1,374,220 at end-May. Satellite (DTH) TV subscribers stood at 887,229 at the end of the period under review (down by 5,608 in three months). Revenue in the cable division in March-May 2014 improved by 2% y-o-y to CAD845 million, and cable-only operating income before restructuring costs and amortisation grew 5% to CAD417 million. Shaw also announced in its quarterly report that it continued to invest in its ‘Shaw Go’ Wi-Fi network and now has over 40,000 Wi-Fi hotspots and over one million devices registered on the network.

Thanks to TeleGeography for the article. 

State Of The Network: Analyst Jim Frey’s View/Q&A

JDSU’s Network Instruments business released its seventh annual “State of the Network Global Study.” After polling 241 network professionals, a thought-provoking theme emerged: 80 percent of respondents viewed software-defined networking (SDN) as unimportant or planned to simply “ride out the hype,” describing SDN as “Like a Road Trip Without a Map.”

A leading authority in the enterprise network management space, with deep experience covering this sector, is Jim Frey, vice president at Enterprise Management Associates (EMA), which conducts IT and data management research, industry analysis, and consulting.

Inspired by the recent study, I reached out to Jim – who briefs regularly with the Network Instruments team – for his take on the state of the network today and his view of where the market is headed.

Q: Jim, what do you feel are the top priorities – opportunities and/or challenges – for the enterprise sector at a time when SDN and Big Data are all the rage?

Looking at where SDN is in its maturation, it is still very early and not yet having a big impact. Big data – the collection, analysis, and sharing of data by network management systems – is rapidly growing in importance as a company asset. Here is what I see as the top priorities and megatrends as SDN and big data emerge within larger enterprises.

First, cloud and virtualization adoption rates are very high, and both are topics to keep an eye on. The cloud is enabled by virtualization, and that dynamic creates added challenges for performance monitoring. The ability to understand how services perform when deployed in the cloud, especially from an end-user perspective, makes visibility a critical – and challenging – aspect of effective resource monitoring and management.

In addition, making sense of the large amounts of network monitoring data, and what can be done with it, is key. In particular, we see network teams taking a more proactive approach to handling performance challenges as they find hidden trends and relationships between application and infrastructure performance using the latest big data analysis techniques.

Lastly, I’d say another big trend taking hold is the ongoing market transition from separate silos or functional areas in IT operations toward more converged operations and IT organizations. This has some direct impact on the choice of network management tools and how they get used over time.

Q: The “State of the Network” study spotlighted that a fair number of network managers and engineers agree that SDN is a “Road Trip Without a Map.” Were you surprised to see that finding?

This is very consistent with the conclusions I have reached in my own research. I’ve found that only 20 percent have taken part in SDN deployments to any degree, and less than 10 percent are in full deployment. The remainder are at the early stages of research, testing, or evaluation, with many network managers still trying to figure out where and how to use it. They are asking: “Is there enough pain in my current network to warrant an alternative like SDN?” Most organizations – most mainstream enterprises – would answer, “No, not enough to warrant embracing these relatively immature technologies at this time.”

Important to note is that there are two main types of SDN. One type is the overlay network, which is purely software/virtual; the other is OpenFlow-based, which we call an underlay SDN. Of the two, the virtual overlay is a more natural extension of existing virtual networks, as it uses tunneling over the existing network, so no new physical infrastructure is required. However, because it is typically managed by system administrators with little input from the network team, there are two areas of potential challenge: the first is network capacity planning, and the second is ensuring visibility into the encapsulated, tunneled traffic for satisfactory service delivery monitoring.
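
To make that second challenge concrete: a monitoring tool that sees only the outer tunnel headers cannot tell tenant conversations apart. Below is a minimal sketch, assuming the overlay uses VXLAN on its standard UDP port (4789), that strips the encapsulation to expose the inner traffic; the capture file name is hypothetical.

```python
# Minimal sketch: look inside VXLAN-encapsulated overlay traffic so the
# inner (tenant) conversations become visible to a monitoring workflow.
# Illustrative only -- assumes a VXLAN overlay; "overlay.pcap" is a
# hypothetical capture file.

from scapy.all import rdpcap, IP
from scapy.layers.vxlan import VXLAN

for pkt in rdpcap("overlay.pcap"):
    if pkt.haslayer(VXLAN):
        vni = pkt[VXLAN].vni            # tenant segment identifier
        inner = pkt[VXLAN].payload      # inner Ethernet frame
        if inner.haslayer(IP):
            print(f"VNI {vni}: {inner[IP].src} -> {inner[IP].dst}")
```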

Q: A big takeaway seems to be that now more than ever, service providers and enterprises must be equipped with technology that provides key visibility and troubleshooting capabilities. How important are these capabilities for enterprises and service providers that must deliver quality services and build reliable networks?

Absolutely – as you can tell from my prior answers, visibility has never been more important. In the current environment there are more layers of technology and more dynamic aspects that come with virtualization, and with that comes a greater need to understand what’s going on and to piece together the full story. For example, how is that app being delivered? From where it is hosted all the way to the consumer, what is happening along the way to make it work? And how are the quality and efficiency? What is the experience of the consumer or end user?

The picture is getting a whole lot more complex – all of the trends we have addressed – virtualization, SDN, big data, cloud…they all drive a heightened need for deep and accurate visibility.

Thanks to Network Instruments for the article.

Infosim Global Webinar Day – The Benefits Of Implementing Cost-Effective Next-Hop-Performance (NHP)

Infosim® Global Webinar Day

The Benefits of Implementing Cost-Effective Next-Hop-Performance (NHP)

Infosim® recently announced a new cost-effective agent utilizing Raspberry Pi and Utilite hardware for use with its StableNet® Telco and Enterprise product range. ISP/MSP/CSPs and enterprise customers now have the ability to deploy Next-Hop-Performance Visualization without the need to configure any networking component management functionality.
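
Infosim has not published the agent’s internals, but as a rough illustration of the idea, a lightweight next-hop probe of the kind such a small device might run could look like the sketch below. The gateway address is an assumption, and this is not Infosim’s implementation.

```python
# Rough sketch of a next-hop performance probe: ping the default gateway
# periodically and record round-trip latency. Not Infosim's code; the
# gateway address is assumed (discover it via `ip route` in practice).

import re
import subprocess
import time

GATEWAY = "192.168.1.1"  # assumed next hop

def probe(host: str) -> float | None:
    """Return the round-trip time in ms for one ping, or None on loss."""
    out = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                         capture_output=True, text=True).stdout
    match = re.search(r"time=([\d.]+) ms", out)
    return float(match.group(1)) if match else None

while True:
    rtt = probe(GATEWAY)
    status = f"{rtt} ms" if rtt is not None else "lost"
    print(f"next hop {GATEWAY}: {status}")
    time.sleep(10)
```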

Join David Poulton, COO of Infosim® UK, for this webinar to see how you can benefit from this new technology from Infosim®.

A recording of this webinar will be available to all who register!

For more information on Next-Hop-Performance (NHP) click here.

Thanks to Infosim for the article. 

 

Videotron Aims To Be Fourth National Cellco

Pierre Dion, CEO of Quebecor, the parent group of Quebec-based cable and cellular network operator Videotron, this week gave a clear message that his company intends to utilise its recently-won 700MHz spectrum to roll out a 4G network in provinces across Canada whilst aiming for consolidation with rival operators to create the fourth national rival to Rogers, Bell and Telus. CBC News quotes Dion’s speech at the Canadian Telecom Summit, in which he declared: ‘Our vision is to provide Canadians with a new high quality, low-cost wireless choice and real wireless competition … Under the right conditions, we are ready, willing and able to become Canada’s fourth wireless competitor.’ The statement of intent follows a speech from rival Wind Mobile’s CEO Anthony Lacavera, which as reported by CommsUpdate, made it clear that Wind is looking for a merger deal, while financially stricken Mobilicity is another target for consolidation.

Thanks to TeleGeography for the article.

Wind Still Has Plenty Of Gas, Puffs Lacavera

Wind Mobile’s CEO Anthony Lacavera gave a speech yesterday at the Canadian Telecom Summit in Toronto laying out his company’s current position, in which he stressed that the crucial factor in enabling the mid-sized Canadian cellco to grow will be access to more mobile spectrum for rolling out 4G LTE services – potentially by acquiring frequencies owned by rivals. As reported by Bloomberg, Lacavera said that a merger with financially stricken Mobilicity or another spectrum holder remains desirable in order to compete better with Canada’s three nationwide mobile operators Rogers, Telus and Bell, after Wind was unable to participate in the country’s latest licence auction due to main shareholder Vimpelcom withholding any new significant investment. Meanwhile, the Russian-backed parent group continues to consider its options for its Canadian subsidiary, where it has been blocked from taking voting control by a government decision.

Combining Wind with Mobilicity or ‘other new entrant spectrum’ would be of a significant benefit to both merged parties, Lacavera’s speech noted, while admitting that ‘the challenge we at Wind face is securing any one of these sources of spectrum at terms and value levels that the business of Wind can support, and within the timeframe that the spectrum is needed to meet LTE demand.’ Although Mobilicity’s assets could potentially be snapped up at a discount in bankruptcy proceedings, the court process presents obstacles, the Wind CEO observed previously. Despite the challenges faced, Lacavera has scotched claims that Wind is in danger of folding, calling such reports ‘categorically false,’ and his speech yesterday forecast that the company will turn EBITDA profitable in 2015 on the back of its continuing upturn in both revenues and subscriber numbers. He added that Wind had 735,000 subscribers at the beginning of June after adding 21,000 net new users (including 14,000 post-paid) in May (up from a total of 702,000 subscribers at end-March 2014 and 676,000 at end-December 2013, TeleGeography adds). In a previous interview Lacavera stated Wind needs to find funding of CAD400 million to CAD500 million (USD370 million to USD460 million) to build an LTE network, which would take around two years to complete – while it needs to do this in the next three to four years to compete effectively with Rogers, Bell and Telus, i.e. it must secure the necessary funding and spectrum within two years. In further comments from the Telecom Summit speech quoted by Reuters, the Wind executive declared: ‘I am confident our operating results and the positive momentum of the business will enable us to access the capital markets.’

Thanks to TeleGeography for the article. 

 

Network ‘Security Cameras’ Offer Insight After Attacks

Security is a top concern for anyone running a network today. The recent Heartbleed bug and other high-profile hacks have shined a new light on the fact that security strategies in the enterprise must constantly evolve to meet the challenges of new and ever-changing attacks.

In the spring of 2014, news of the Heartbleed bug hit the wire, sending the tech world scrambling to patch the now-notorious OpenSSL vulnerability, which allowed attackers to steal private information like certificate keys, passwords, and other content commonly used to breach systems or impersonate users.

“Heartbleed is a programming bug in the 1.0.1 through 1.0.1f versions of OpenSSL,” says David Monahan, EMA Director of Security and Risk Research, of the specific versions affected by the vulnerability. “The only means of identifying if the bug is exploited would be doing network sniffing from a network analyzer perspective while the attack was taking place, or with a retrospective analysis tool.”

Network tools with back-in-time analysis like Network Instruments’ GigaStor are designed to watch the network, analyze conversations, identify issues, and alert administrators to problem scenarios. These features make them an excellent tool to help identify and isolate unauthorized activity. In addition to the regular assortment of firewalls and other defensive security measures, network forensic tools like these can be used after an event to identify both known and unknown attacks, speeding the cleanup process.

“You can get a lot more detail if you have the entire packet stream,” says Monahan. “In a case like Heartbleed, you get the details on what was sent back in the communication between the client and your server. You can say, ‘Yes, we were attacked,’ and review the communications exchange to see what data was returned to the attacker. You can determine whether they got passwords, private keys, or any other confidential information, or whether they just got other information that, in and of itself, is not valuable.”

Solutions like GigaStor provide critical details post-event, allowing the user to reconstruct the entire conversation, view the information compromised, or focus on the content extracted. How did the attacker gain access? What was stolen?

“You need advanced traffic analytics and filtering to identify and correct issues such as Heartbleed,” says Network Instruments Professional Services Manager, Casey Louisiana. “With GigaStor, I was able to rapidly build a filter to detect its signature, set up an alarm to alert on it, and then quickly process terabytes of data to determine if the network had been compromised,” he says, calling attention to the need for tools to contain and solve threats after the security perimeter has been breached.
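
As an illustration of the kind of signature filter Louisiana describes (not GigaStor’s actual filter), the sketch below scans a capture for Heartbleed’s telltale: a TLS heartbeat request whose declared payload length exceeds what the record actually carries. It ignores records split across TCP segments, and the capture file name is hypothetical.

```python
# Illustrative Heartbleed signature check over a packet capture.
# A Heartbleed probe is a TLS heartbeat record (content type 0x18) whose
# heartbeat payload_length field claims more bytes than the record holds.
# Simplified: assumes each TLS record fits in one TCP segment.

import struct
from scapy.all import rdpcap, IP, TCP, Raw

for pkt in rdpcap("capture.pcap"):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        continue
    data = bytes(pkt[Raw].load)
    # TLS record header: type (1 B) | version (2 B) | length (2 B),
    # then heartbeat message: type (1 B) | payload_length (2 B) | ...
    if len(data) >= 8 and data[0] == 0x18:           # 0x18 = heartbeat
        record_len = struct.unpack("!H", data[3:5])[0]
        hb_type = data[5]                            # 1 = request
        claimed = struct.unpack("!H", data[6:8])[0]
        if hb_type == 1 and claimed > record_len:    # claims more than sent
            print(f"possible Heartbleed probe from {pkt[IP].src}: "
                  f"claims {claimed} bytes, record carries {record_len}")
```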

“Beyond the ability to perform layer two through seven analysis, many users don’t appreciate that GigaStor provides sophisticated packet pattern recognition that is ideal for addressing security threats,” Louisiana says. “GigaStor, like a security camera in a retail store, requires a place to store the data that’s coming in. This is often immense amounts of data for extended periods of time. But it’s the ‘secret sauce’ or ‘magic’ of the advanced analytics and pattern recognition that makes sense of the mass of data.”

Not all tools offer this analytical capability, but it has proven essential in the enterprise where breaches are not immediately detected and the ability to “rewind” back to an attack-in-progress can yield more insight.

“With large enterprises there is so much data that’s coming back,” says Monahan. “It’s impossible for humans to actually process.”

Network Instruments’ GigaStor packet capture appliance offers storage of up to a petabyte of data and, used in conjunction with a network analyzer, plays a significant role for transaction-heavy organizations in data mining, security forensics, and data retention compliance.

“The two most critical points for identifying malware intrusions are to know what’s going on in the network and what’s going on at the end point,” says Monahan. “To obtain the detail for the network level, you really need the packets.”

One of the most potentially damaging aspects of these types of breaches is that many network teams have no idea they are being hacked, despite the number of tools on the market and the solutions already installed, but left unused, on their networks.

“We need to do a better job of helping our customers see that there’s value in this type of approach,” says Louisiana on GigaStor’s packet capture function and its use in post-event network forensics. “Heartbleed is a recent example, but what’s the next Heartbleed? How do you look for abnormal traffic patterns? It’s about establishing a baseline and knowing that anything happening outside of that is abnormal. Many of our customers are not using our tools in that way.”
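
To make the baselining idea concrete, here is a simplified sketch: learn a normal range for a traffic metric from known-good samples, then flag anything outside it. The data and the three-sigma rule are illustrative; production systems use richer statistics.

```python
# Sketch of traffic baselining: anything outside mean +/- 3 sigma of a
# known-good sample set is treated as abnormal. Sample data is made up.

from statistics import mean, stdev

def build_baseline(samples, k=3.0):
    """Return (low, high) bounds: mean +/- k standard deviations."""
    mu, sigma = mean(samples), stdev(samples)
    return mu - k * sigma, mu + k * sigma

# bytes/sec on a link during a known-good week (illustrative numbers)
normal = [410.0, 395.2, 402.7, 388.9, 420.3, 407.1, 399.5]
low, high = build_baseline(normal)

for t, value in [(1, 404.0), (2, 780.0)]:   # 780 is an exfil-sized spike
    if not (low <= value <= high):
        print(f"t={t}: {value} bytes/sec outside baseline "
              f"[{low:.1f}, {high:.1f}] -- investigate retrospectively")
```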

“When we need more information, retrospective network packet capture and analysis is one of the ways to do that,” says Monahan. “And it’s a very strong way. There are basically two ways to get something malicious on the network. I can take a USB-type removable media and plug it in, and infect my endpoint or I’m going to get it downloaded across the network. If it just sits on that machine and it never goes anywhere else, is it really a threat? Probably not. At some point it’s going to need to communicate to home base to transfer information or create that bot network. If you have something watching the network, you’re going to catch that. Network analyzers can definitely be a portion of that capability. They will see the communication assuming proper placement. From that point we can see what’s out there, what data it’s trying to move, how often it’s communicating, and determine how malicious it really is.”
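
One way “something watching the network” can catch that phone-home traffic, sketched below under simplifying assumptions, is to look for the low-jitter, periodic check-ins typical of beaconing malware; the flows and thresholds are illustrative.

```python
# Flag destinations contacted at suspiciously regular intervals, the
# classic "phone home" pattern. Timestamps and threshold are made up.

from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """True if inter-connection gaps are nearly constant (<10% jitter)."""
    if len(timestamps) < 5:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps) < max_jitter

flows = {
    "203.0.113.9": [0, 60.1, 120.0, 180.2, 240.1, 300.0],  # every ~60 s
    "198.51.100.7": [0, 4.2, 95.0, 97.3, 310.8, 311.1],    # ordinary browsing
}
for dst, times in flows.items():
    if looks_like_beacon(times):
        print(f"{dst}: regular beaconing -- worth a retrospective look")
```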

An attentive security team, the right network analyzer, and enough storage for effective post-event retrospective analysis are powerful weapons in the constant battle to keep threats off your network. These elements working interdependently can create strong defenses that are higher in value than the sum of their parts. “Having both trained personnel and the right tooling are key,” says Monahan. “You can’t do it with either one alone.”

Thanks to Network Instruments for the article.

Rogers Publishes CAD450m British Columbia Wireless Development Plans

Rogers Communications has announced a planned investment of over CAD450 million (USD414 million) in the province of British Columbia (BC) over the next three years to expand its wireless network in over 70 communities in the northeast, Interior, Lower Mainland, and Vancouver Island regions. As part of this investment, Rogers will also enhance existing LTE coverage with 700MHz spectrum, reaching more rural and urban customers and strengthening the LTE signal including deep coverage in buildings, elevators and basements. When complete, Rogers will have invested CAD2 billion in its network in BC.

Rogers has already rolled out 700MHz spectrum to communities in Vancouver, offering improved signal quality in basements, elevators and in buildings with thick concrete walls, and will continue to expand to other parts of the province. The operator aims to meet consumer demand for data by deploying LTE to over 98% of the province’s population. Rogers quoted statistics that over 64% of BC residents own a smartphone and use it on average 1.7 hours per day to check email (77%), search online (69%) and stream video (33%). It also noted that BC’s growing economy, which includes 385,900 small businesses, will benefit from the LTE network investment, while the northeast, ‘which continues to lead the province in the number of self-employed British Columbians’, was identified as a key region in the network expansion.

Thanks to TeleGeography for the article.

Release Notes for LANforge FIRE & ICE 5.2.12


Release 5.2.12

New Features & Improvements:

  • Support 802.11AC (ath10k) hardware, with customized firmware. This enables basic 802.11AC features with up to 36 stations per NIC. Some instability remains when using this hardware, so reboots may be required to recover.
  • Improve CLI scripts, especially lf_portmod.pl, lf_firemod.pl and the new NFS testing script: lf_nfs_io.pl
  • Add ‘set_flag stream_events’ CLI command to enable CLI to receive events (as seen in GUI Event tab) as they are created.
  • Add Thresholds for Layer-3 and Armageddon endpoints. Allows the user to set min/max tx and rx rate bounds; if the connection throughput falls outside those bounds, an alert and/or event is created. Good for longer-term throughput tests (a minimal sketch of this check appears after this list).
      • Full support for Layer-3 and Armageddon endpoints.
      • Only tx/rx rate and no-rx-since are supported on File-IO endpoints.
  • Add option to verify file-system remains mounted on expected file-system type for File-IO endpoints. See ‘Use FSTATFS’ checkbox in the File-IO modify window.
  • WiFi: Enable support for configuring IEEE80211w (Management Frame Protection) for Virtual Station interfaces.
  • WiFi: Support 802.11r combined with 802.11u (HS20/Passpoint)
  • Support Ubuntu 14.04 with automated lf_kinstall.pl script.
  • Update LANforge Live-CD image to be Lubuntu 14.04 (lightdm desktop Ubuntu) based system.
  • GUI: Add plugin to automate resetting ports (network interfaces).
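
For context, here is a minimal sketch of the threshold check described in the Layer-3/Armageddon bullet above. The names and bounds are hypothetical; this is not LANforge’s CLI or internals.

```python
# Alert when measured endpoint throughput leaves configured min/max
# bounds, as the Thresholds feature above does. Values are hypothetical.

def check_thresholds(name, tx_bps, rx_bps, bounds):
    """Return alerts for any rate outside its (min, max) bounds."""
    alerts = []
    for label, rate in (("tx", tx_bps), ("rx", rx_bps)):
        lo, hi = bounds[label]
        if not (lo <= rate <= hi):
            alerts.append(f"{name}: {label} rate {rate:,.0f} bps "
                          f"outside [{lo:,.0f}, {hi:,.0f}]")
    return alerts

bounds = {"tx": (900_000, 1_100_000), "rx": (900_000, 1_100_000)}
for alert in check_thresholds("l3-endp-01", 1_020_000, 640_000, bounds):
    print(alert)  # rx has sagged below the floor -> raise an alert/event
```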

Bug Fixes:

  • WiFi: Fix setting regulatory domain so that it works independently of the time-zone configured in the operating system.
  • Fix issue with configuring randomized MACs when creating interfaces on the Port-Mgr GUI screen, and similar fix to Port batch-modify logic.
  • Make sure GUI’s global-stats never includes stats from stopped endpoints.
  • Fix memory consumption problem where Layer-3 TCP endpoints would get confused and think they need to buffer up to 16MB before completing the receive of the frame. LANforge now detects the corrupted header and automatically re-starts the connection when this error hits. The root cause of the corruption is not known at this time: it could be kernel and/or ath9k (wifi a/b/g/n) driver related, or could be a bug in how LANforge processes the Layer-3 TCP connection logic. It could also be expected behaviour related to LANforge disabling the ‘time-wait’ state for TCP connections in order to re-use IP ports faster. To date, this has only been reproduced when running stress-tests on Layer-3 connections on ath9k station interfaces.

The only work-around for earlier releases is to manually stop, delete, and re-create connections in this state.

The root cause will be further investigated.
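
For readers unfamiliar with the ‘time-wait’ behaviour the note mentions, the snippet below shows the general mechanism on Linux: an abortive close via SO_LINGER sends a RST rather than a FIN, so the socket skips the TIME_WAIT state and its port can be reused immediately. This is context only; LANforge’s actual implementation may differ.

```python
# Demonstrate skipping TCP TIME_WAIT: SO_LINGER with (on=1, linger=0)
# makes close() send a RST, so this end holds no TIME_WAIT entry and the
# local port is immediately reusable. Context only, not LANforge code.

import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                struct.pack("ii", 1, 0))   # l_onoff=1, l_linger=0
sock.connect(("example.com", 80))
sock.close()  # abortive close: RST sent, no TIME_WAIT on this side
```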

Thanks to Candela for the article.

The Key Components of a Visibility Architecture

More mobile devices are now connecting to more data from more sources. IT challenges are complicated by increasingly high customer expectations for always-on access and immediate application response. This complexity creates network “blind spots” where latent errors germinate and pre-attack activity lurks. Monitoring systems strain to keep up with traffic and to filter data “noise” at rates they were not designed to handle. Network blind spots have become a costly and risk-filled challenge for network operators.

The answer to these challenges is a highly scalable visibility architecture that enables the elimination of your current blind spots, while providing resilience and control without added complexity.

Over the past few months I have had the pleasure to fly around the world and help train IT leaders on network monitoring architectures. During this time I have found that many people have questions about the core building blocks that make up an effective network monitoring solution like Ixia’s Visibility Architecture. In case some of you have similar questions, the video below (5:12) walks through the key components that are common in the industry and how each contributes to improving network monitoring effectiveness.

[Video: The Key Components of a Visibility Architecture (5:12)]

If you are looking for a brief summary, here is how data flows through a Visibility Architecture (a toy sketch of the pipeline follows the list):

  • Network Taps: These are the access devices that replicate network data and send it off to the monitoring infrastructure. While SPAN ports are often used for this as well, and are useful in many situations, here is a great article that walks through some differences between these access technologies.
  • Network Packet Brokers: Gather the data from the network access points above and perform advanced filtering, deduplication, and aggregation of traffic to prepare it for the monitoring tools.
  • Network Monitoring Tools: Monitoring tools take the network traffic from the packet broker and provide analysis on network health, trends, and other types of insight to network operators.
  • Management: Provides many-to-one management of taps, packet brokers, etc. so the network monitoring infrastructure is easy to manage and integrates well with the network operations already in place.
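
To make the flow concrete, here is a toy model of that tap → broker → tool pipeline. It is purely illustrative; real brokers filter and deduplicate in hardware at line rate.

```python
# Toy model of the visibility data flow listed above:
# tap (replicate) -> broker (filter + dedupe) -> tool (analyze).

from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src: str
    dst: str
    proto: str
    payload: bytes

def tap(link_traffic):
    """Tap: replicate every packet toward the monitoring fabric."""
    yield from link_traffic

def broker(packets, proto_filter="HTTP"):
    """Broker: drop duplicates and traffic the tool does not care about."""
    seen = set()
    for p in packets:
        key = (p.src, p.dst, p.proto, p.payload)
        if p.proto == proto_filter and key not in seen:
            seen.add(key)
            yield p

def tool(packets):
    """Tool: trivial 'analysis' -- count packets per source host."""
    counts = {}
    for p in packets:
        counts[p.src] = counts.get(p.src, 0) + 1
    return counts

link = [Packet("10.0.0.1", "10.0.0.9", "HTTP", b"GET /"),
        Packet("10.0.0.1", "10.0.0.9", "HTTP", b"GET /"),  # duplicate
        Packet("10.0.0.2", "10.0.0.9", "DNS", b"query")]
print(tool(broker(tap(link))))  # {'10.0.0.1': 1}
```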

I hope this helps you get a feel for some of the core building blocks in Ixia’s Visibility Architecture. As always, feel free to reach out to us and we’ll be happy to help answer any questions!

Additional Resources:

Ixia’s Visibility Architecture solution

Thanks to Ixia for the article.