JDSU’s Network Instruments Unveils Observer Platform 17 for Simplified Performance Management and Application Troubleshooting


Network Instruments, a business unit of JDSU (NASDAQ: JDSU), has announced the latest edition of its cornerstone Observer® Performance Management Platform. Observer Platform 17 provides network managers and engineers with a comprehensive, intuitive approach to:

  • Proactively pinpoint performance problems and optimize services.
  • Integrate monitoring into the deployment of IT initiatives, including cloud, service orchestration and security.
  • Easily manage access and share performance data with IT teams and business units.
  • Quickly assess and optimize user experience with web services.

This comprehensive performance management solution also features redesigned, easy-to-use interfaces and workflows, expanded web user visibility, increased transactional intelligence, and enhanced integration with third-party tools and applications.

“The Observer 17 release uniquely positions the Platform for the future of IT and the network manager’s evolving role as a primary enabler of technology adoption throughout the enterprise and as a key troubleshooter,” said Charles Thompson, chief technology officer for Network Instruments. “Other IT teams are looking to the network team for greater application troubleshooting and support. Utilizing the newest features in Observer, they are well prepared for their constantly changing role by achieving quicker root-cause analysis, understanding applications in-depth, and easily sharing performance data with non-network teams.”

Key enhancements include:

  • Intuitive, drag-and-drop interface and streamlined workflows transform Network Performance Management from an ad hoc practice to a proactive, collaborative process. IT can now manage, monitor, and assess performance in two clicks.
  • Third-party system integration via Representational State Transfer (RESTful) APIs facilitates easier sharing of performance intelligence with other groups, as well as integration of monitoring technologies for successful service orchestration for cloud, Software-Defined Networking (SDN) and virtual deployments.
  • Enhanced Web Services Analytics provide greater insight into how end users are experiencing web services through expanded client-based and comparative details.
  • Deeper transaction-level intelligence and correlative analysis allow for quicker and more effective application troubleshooting. With access to greater granularity, network teams are able to more easily assess relationships between transaction details and other performance variables for a higher degree of actionable insights.

The new Observer Platform features give network professionals a more productive tool for staying on top of key IT trends and challenges such as:

IT Automation—As businesses increasingly transition to automated, self-service IT models involving the deployment of cloud, SDN and virtualized environments, these dynamic services are often being implemented without adequate monitoring. To minimize the ‘black holes’ created when users roll out or move IT services and resources without the network team’s knowledge, the new Observer Platform ties together the provisioning of IT resources with the automatic deployment of monitoring tools via RESTful API for complete visibility.
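
As a rough illustration of this pattern, an orchestration workflow could register each newly provisioned resource with the monitoring platform in the same step that creates it. The endpoint path, field names, and token below are invented for this sketch and are not Observer’s documented REST API:

    import requests

    # Hypothetical sketch only: the endpoint path, JSON fields, and token
    # header are invented for illustration, not Observer's documented API.
    MONITOR_API = "https://observer.example.com/api/v1"

    def register_with_monitoring(session, vm_name, vm_ip):
        # Called by the orchestration workflow right after a VM is
        # provisioned, so the resource never becomes an unmonitored
        # "black hole".
        resp = session.post(
            f"{MONITOR_API}/monitored-devices",
            json={"name": vm_name, "address": vm_ip, "type": "virtual-machine"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    session = requests.Session()
    session.headers["Authorization"] = "Bearer <api-token>"
    register_with_monitoring(session, "web-42", "10.0.4.17")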

IT Alignment Across the Business—In delivering Unified Communications and Big Data initiatives, application teams now turn to network teams to lead the charge on metrics, intelligence, and application troubleshooting. RESTful APIs and improvements in user management make it easier to integrate Observer 17 into external workflows and processes, helping share network performance data with non-network teams.

Monitoring Mobile Experience—With movements to the cloud and increased access to web services from mobile devices, Observer 17 brings a higher level of visibility and insight into how end users are experiencing web services, regardless of the device. Observer now provides comparative visibility by browser type and operating system, alongside performance metrics, to determine whether the user experience is the same on a desktop or a mobile device.

Observer Platform 17 is currently available and includes Observer Apex™ (previously called Observer Reporting Server), Observer Management Server (formerly called Network Instruments Management Server), Observer GigaStor™, Observer Analyzer, and Observer Infrastructure products.

Thanks to JDSU Network Instruments for the article.

Exposing the Ghost in the Virtual Machine

Virtualization continues to be an important data center success story. This technology has reduced physical plant hardware footprints, power costs, and cooling costs – significantly improving the total cost of ownership for the data center. At the same time, however, virtualization technology has also introduced a set of problems.

These problems all stem from decreased visibility into what is happening within the virtual network, and can be summarized as follows:

  • A lack of visibility that can hide security and performance problems for significant periods of time
  • Additional network and monitoring complexity
  • Inconsistent monitoring policies between the physical and virtual networks

In case you think it’s just me saying this, Forrester Research performed a study for Cisco Systems on this subject and validated these three technology concerns, along with two additional concerns relating to staffing resources for virtualization technology. The Forrester virtualization security study surveyed almost 80 IT decision makers to determine their actual virtualization issues and concerns.


The first concern for data center managers was security, and for good reason. There are malware threats like “Crisis” that are capable of targeting virtual environments. Without proper visibility, these threats can gain a foothold and then flourish within your data center without your knowledge – until it’s too late. Another item related to this, but not mentioned in the Forrester study, is that performance problems can also be hidden by this lack of visibility. Without proper monitoring of virtual equipment, your data center may experience performance problems without you even knowing about them. Common examples include slow traffic and devices, unnecessary bandwidth consumption, and intermittent issues that pop up long enough to be noticed but then disappear quickly. Once the problems become noticeable, it’s often too late – something bad is usually about to happen. Hopefully it’s not an outage.

While virtualization introduced the flexibility to spin up new virtual machines (VMs) almost at will, it has also created complexity. Since VMs can be spun up and down so quickly, it can become difficult to know what VMs exist, whether and where they might have been moved (in the case of VMware vMotion), and what policies, especially security policies, were set up for the VMs when they were created. This rapid pace can end up creating “holes” that can be exploited by malware.

In addition, because most of the traffic (up to 80% according to Gartner) in the virtual data center never reaches the top of the rack, there are few, if any, network monitoring policies in place for the virtual equipment. This creates a huge blind spot where security and performance issues can hide. In short, you can’t monitor what you can’t measure.

Lastly, without access to the proper data, it can become exceedingly difficult to audit data and security policies for various compliance regulations (FISMA, HIPAA, PCI, etc.). So, you may be compliant on the physical side of the network but non-compliant on the virtual side. In the end, this makes you non-compliant, which can carry financial penalties as well as real security risks.

A lack of consistent monitoring policy also often results from the natural division that has occurred between the physical and virtual infrastructure. Physical IT infrastructures have been around long enough that implementation and network monitoring policies have been tried and tested – so much so that best practices are commonly available. This is not the case for virtualized data centers. Monitoring policies are often non-existent, and when they do exist, they are usually not aligned with the policies of the physical infrastructure. This is partially due to the lack of technology to properly expose the virtual data, but it also comes about because the virtual data center is often run by a different department than the person(s) monitoring the physical data center. This means that the monitoring tool engineers often have little or no access to the virtual data center and can’t fold those systems into the overall network monitoring strategy.

These are the ghosts that hide within the virtual machine. While knowing that the ghosts exist is a start to exposing them, real elimination of the problem is accomplished by installing a virtual Tap. Like its physical counterpart, the hardware-based Tap, a virtual Tap is software that is installed into the virtual data center so that data can be exported northbound, out of the data center, to network packet brokers and monitoring tools that can then process and analyze the data.

Virtual Taps (like the Ixia Phantom vTap) can be used to remediate the issues mentioned earlier:

  • Expose potential hidden security issues
  • Provide access to data for trending
  • Enable proper compliance policy tracking

At the same time, if the virtual Tap is integrated into a holistic visibility approach using a Visibility Architecture, you can streamline your monitoring costs: instead of having two separate monitoring architectures with potentially duplicate equipment (and duplicate costs), you have one architecture that maximizes the efficiency of all your current tools, as well as any future investments. When installing the virtual Tap, the key is to make sure that it installs into the Hypervisor without adversely affecting the Hypervisor. Once this is accomplished, the virtual Tap will have the access it needs to inter- and intra-VM traffic, as well as the ability to efficiently export that information. After this, the virtual Tap will need a filtering mechanism so that exported data can be properly limited so as not to overload the LAN/WAN infrastructure; the last thing you want to do is cause performance problems on your own network. Details on these concepts and best practices are available in the whitepapers Illuminating Data Center Blind Spots and Creating A Visibility Architecture.
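
To make the filtering idea concrete, here is a minimal sketch (in Python, with invented packet fields) of the two jobs such a filter performs: matching the traffic you care about and capping the export rate so monitoring traffic cannot overload the LAN/WAN. A real virtual Tap implements this inside the hypervisor rather than in user code:

    import time

    # Minimal sketch of the vTap filtering idea: export only packets that
    # match configured rules, and cap the export rate. All names are
    # illustrative.
    class ExportFilter:
        def __init__(self, allowed_vlans, max_bytes_per_sec):
            self.allowed_vlans = set(allowed_vlans)
            self.max_bytes_per_sec = max_bytes_per_sec
            self.window_start = time.monotonic()
            self.bytes_this_window = 0

        def should_export(self, vlan, length):
            # Rule match: only export traffic from VLANs of interest.
            if vlan not in self.allowed_vlans:
                return False
            # Rate cap: reset the budget each second, then spend from it.
            now = time.monotonic()
            if now - self.window_start >= 1.0:
                self.window_start, self.bytes_this_window = now, 0
            if self.bytes_this_window + length > self.max_bytes_per_sec:
                return False
            self.bytes_this_window += length
            return True

    tap_filter = ExportFilter(allowed_vlans={10, 20}, max_bytes_per_sec=10_000_000)
    print(tap_filter.should_export(vlan=10, length=1500))  # True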

As a side note, some people propose using the VMware vSphere “promiscuous mode” to get access to the data. This approach is fraught with danger as you expose the data to various security risks. In addition, using promiscuous mode might cause delays in transmitting data due to the overhead and summarizing processes. Normally, the best course of action is to use a dedicated Tap that can provide some level of QoS.

In my experience, bad things happen in blind spots and YOU don’t want to be the one to have to explain them to senior management. More information about the Ixia Phantom vTap and how it can help generate the insight needed for your business is available on the Ixia website.

Additional Resources:

Ixia Phantom vTap

White Paper: Illuminating Data Center Blind Spots

White Paper: Creating A Visibility Architecture

Thanks to Ixia for the article. 

Cogeco Would Consider MVNO Launch If Regulator Helps

Canadian triple-play cableco Cogeco says it will consider launching a mobile virtual network operator (MVNO) service if the Canadian Radio-television and Telecommunications Commission (CRTC) implements stricter regulation regarding wholesale access to the infrastructure of national mobile network operators Rogers, Telus and Bell. As reported by Reuters, Cogeco’s CEO Louis Audet said that he hoped for new regulations to encourage MVNOs, in a statement to the press ahead of CRTC hearings on wholesale mobile services scheduled for next week. Audet told reporters that Cogeco would only move forward with an MVNO plan ‘if there is … an enforceable order to give [wholesale mobile network] access, and if the rates at which this access is provided are actually dictated by the regulator.’ However, the federal government has previously focused more heavily on encouraging facilities-based competition – aiming to have four viable network operators in every province – rather than boosting the MVNO sector.

Thanks to TeleGeography for the article.

BCE Concludes Initial Phase Of Aliant Share Offer; 100% Buyout Expected By End-October

Telecoms group Bell Canada Enterprises (BCE) and its subsidiary Bell Aliant yesterday announced in a press release the initial results of BCE’s offer to purchase all of Bell Aliant’s outstanding publicly held common shares and to exchange all outstanding Bell Aliant preferred shares. BCE revealed in July that it would buy out all minority shareholders in Bell Aliant for a total consideration of approximately CAD3.95 billion (USD3.68 billion); sellers will receive cash and BCE common equity. By 19 September, 81.2% of the outstanding publicly held common shares and 72.7% of the preferred shares had been validly tendered to BCE’s offer. BCE expects to pay for these common shares on 24 September, the same day it expects to issue the new BCE preferred shares, which will commence trading on the Toronto Stock Exchange the following day. As all conditions of the common/preferred share offers have been satisfied, and all regulatory approvals have been received, BCE expects to take Bell Aliant private on or around 31 October 2014.

Additionally, BCE has extended the common share offer, in accordance with its terms, to 2 October 2014 in order to enable holders of common shares who have not yet tendered to deposit their shares to the offer prior to the completion of the privatisation (delisting) of Bell Aliant. BCE expects to pay for all such shares tendered in the extension period by 7 October 2014.

If at least 90% of the publicly held common shares of Bell Aliant are tendered to the offer following its extension, BCE intends to acquire the balance of the common shares not tendered through compulsory acquisition on or around 31 October. If less than 90% of the publicly held common shares are tendered by 2 October, BCE intends to use its voting power to force through the acquisition of the remaining common shares at a meeting of Bell Aliant shareholders on 31 October (and the same applies to all remaining preferred shares).

Thanks to TeleGeography for the article. 

Release Notes for LANforge FIRE & ICE 5.2.13


New Features & Improvements:

  • WiFi: Enforce licensing for 802.11AC NICs. LANforge will now only allow WiFi stations to be used if there is an 802.11AC license key for the wiphy device. Customers that have previously purchased 802.11AC should request a new license from their sales contact before upgrading to LANforge 5.2.13. There will be no charge for the new key in this case.
  • Improve Layer-3 UDP performance, especially with smaller packets. Testing shows about double the number of packets-per-second compared to previous LANforge releases. It should now be possible to generate near-Gbps speeds with MTU-sized PDUs on moderately powerful hardware.
  • Add CLI script to stop Layer-4 connections after a number of URL requests or time has elapsed.
  • Add Rate vs Attenuation graphs to 2544 and Hunt scripts.
  • 802.11AC: Allow faster connection rate for 802.11AC stations. It was kept slow to work around some crashes seen in earlier kernels/firmware, but these problems are now fixed.
  • 802.11AC: Ensure we do not pass un-encrypted frames with FCS errors up the stack. This may help with problems where LANforge received corrupted UDP frames and reset the connection, causing throughput glitches.
  • 802.11AC: Support up to 64 vdevs (virtual stations). If sniffing, a separate monitor interface may be preferred, and in that case only 63 stations can be configured.
  • Layer-4: Allow explicit enabling and disabling of FTP PASV and EPSV options. PASV works better through firewalls and NAT than does PORT (which is used if PASV is disabled).
  • Layer-4: Improve rate-control when using a bits-per-second limit. Now, the bits-per-second rate will be adhered to across URLs instead of just when processing a single URL. The effective rate will be the lesser of the urls-per-second and the bits-per-second configuration (see the sketch after this list).
  • WiFi: WiFi capacity plugin now supports cloning a Layer-4 (http, ftp, etc) connection in order to do the capacity testing. This should help test scenarios where it is not convenient to have a LANforge machine on the upstream side of the network: Any web/ftp server can now be the upstream side.
  • Support 4-module Attenuator.
  • Scripts: Add ability to create, start, stop, and delete multicast endpoints using the lf_firemod.pl script. Add lf_mcast.bash to show how this might be used.
  • WiFi: Improve RTS Threshold (RTS/CTS) and Fragmentation Threshold support. Values are set properly now, configured on the wiphyX radio devices. Note that even when disabled, RTS/CTS will be used when the network adapter has to retry sending a packet. This is due to an optimization in the Linux rate control algorithm.
  • WiFi (802.11AC): Add Alert to let user know when the ath10k 802.11AC firmware has crashed and requires reboot to recover. Check the Alerts tab if your 802.11AC system is not able to bring up stations or APs, and reboot the LANforge if the “FW-FAIL” alert is listed.
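
As a rough illustration of the Layer-4 rate-control rule noted above (this is not LANforge code), the effective URL rate is whichever of the two configured limits is more restrictive:

    # Illustration of the "lesser of the two limits" rule: the effective
    # URL rate is capped by both the configured urls-per-second and the
    # bits-per-second budget.
    def effective_urls_per_second(urls_per_sec, bits_per_sec, avg_url_bits):
        rate_allowed_by_bps = bits_per_sec / avg_url_bits
        return min(urls_per_sec, rate_allowed_by_bps)

    # Example: 10 URLs/s configured, a 4 Mbps budget, and 1 Mbit average
    # payload -> the bits-per-second limit wins at ~4 URLs/s.
    print(effective_urls_per_second(10, 4_000_000, 1_000_000))  # 4.0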

Bug Fixes:

  • Fix ‘radvd’ IPv6 router-advertisement daemon usage in Netsmith. This probably doesn’t affect anyone or we would have found the problem sooner!
  • Fix potential kernel corruption related to ARP.
  • Hunt-Script: Report best rate, not just the last successful rate.
  • Fix ToS problem in Armageddon feature: Could not set ToS to values less than 0x10 due to kernel bug.
  • Fix WEP authentication when 802.1x configuration was previously configured for the station. The 802.1x config options are now properly ignored when WEP is enabled.
  • Fix some issues with DHCPv6. Re-order the dhclient program’s socket code a bit to bind to device before binding socket. This appears to fix problem where only the first dhclient process could reliably receive leases from the server.
  • Fix Netsmith problem with saving DHCPv6 client information if DHCPv6 DNS address was not configured.

Thanks to Candela for the update.

 

Network Visibility Solutions – Solution Brief

Optimal Visibility and Control of Your Data Center

Ixia’s portfolio provides complete network visibility into physical and virtual networks, improves network security and optimizes monitoring tool performance. The Anue NTO ensures that each monitoring tool gets exactly the right data needed for analysis. This improves the way you manage your data center and maximizes return on investment.

What Our Customers Say

“The Anue NTO makes my life easier. It makes complex data monitoring simple and allows us to leverage existing network monitoring infrastructure.” Chris Lindner, Director, Tools & Engineering, Fiserv

Download the Solutions Brief here

Ixia Network Visibility Solutions

Additional Resources

7300 A New Perspective on Network Visibility

5260 Flexible Centralized or Distributed Monitoring

5268 Flexible Centralized or Distributed Monitoring

5293 High-Density, Carrier-Grade

5288 High-Density 10GE/40GE/100GE

5273 High-Availability, Carrier-Grade

5236 Enterprise Class

5204 Small Enterprise

 

 

Building a Better IVR: Some Tips for Success

The IVR is an integral part of any call center. Customers can call the center and get what they need without ever talking to a human, allowing agents greater availability to handle the more pressing calls. Yet despite their obvious advantages, most IVR systems aren’t perfect and can benefit from some improvements.

How do you know when your IVR is due for some revamping? One of the best ways is to dial the number yourself. By putting yourself in the shoes of the customer, you’ll get to experience the IVR just as they do. If you find the IVR confusing or feel the desire to hang up in frustration, there’s a good chance your customers feel the same way. And that means it’s probably time to build a better IVR.

So, where do you start? Elaine Cascio, Vice President of Vanguard Communications Corporation and an expert in customer experience, shares some tips for ways to improve your IVR.

Make it Familiar and Easy to Use – Cascio’s first suggestion is to make your IVR more inviting. Again, if you wouldn’t want to stay on the call with your own IVR, there’s little chance your customers are fond of it either. Cascio suggests that you “emulate the processes used by agents wherever possible.” Just as an agent tries to keep the call as brief and efficient as possible, in order to meet service level and move on to the next call, the same principle should apply to the IVR. You don’t want to overload the customer with too much information, make them listen to a long, drawn-out message, or give them too many steps to get the information they need.

Be Conversational – While brevity is important when developing an IVR, you should still make your IVR conversational and easy to understand. Cascio says to focus on what is being said and how it is presented. She states, “Don’t use jargon – use clear, concise and commonly understood language and terminology.” Additionally, she recommends putting the time into creating a script that is not only easy to follow and inviting, but also reflects the character and services of your brand. Furthermore, it is a good idea to read the scripts aloud before putting them into production and conduct focus groups to see how people respond to the content. Cascio also suggests hiring experienced voice talent to record the messaging.

Leverage Data and Technology – Just as customers will be put off if they have to listen to more information than is necessary, they shouldn’t have to provide the same information each time they call. The IVR can be set up to recognize the calling number, or to ask callers for a unique passcode, and use information already on file to speed the process along. Cascio recommends that IVR systems give callers the ability to transfer to a live agent in case they need further assistance. If so, the caller’s data should be easily transferable from the IVR to the agent, so callers don’t have to provide their information all over again.
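
A minimal sketch of this “don’t ask twice” pattern might look like the following, with the customer records, field names, and queue name all invented for illustration:

    # Hedged sketch: identify the caller from the calling number (with a
    # passcode fallback), then attach the collected context when
    # transferring to a live agent. All data and names are illustrative.
    CUSTOMERS_BY_ANI = {"+15551234567": {"id": "C-1001", "name": "Pat Doe"}}
    CUSTOMERS_BY_PASSCODE = {"928311": {"id": "C-2002", "name": "Lee Kim"}}

    def identify_caller(ani, passcode=None):
        customer = CUSTOMERS_BY_ANI.get(ani)
        if customer is None and passcode is not None:
            customer = CUSTOMERS_BY_PASSCODE.get(passcode)
        return customer  # None means the IVR must prompt for identity

    def transfer_to_agent(ivr_session):
        # Hand the agent everything the IVR already gathered, so the
        # caller never repeats information.
        return {"queue": "support", "screen_pop": ivr_session}

    session = {"caller": identify_caller("+15551234567"), "intent": "billing"}
    print(transfer_to_agent(session))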

Manage and Measure – Cascio stresses that it is important to perform quality monitoring on calls handled by the IVR in addition to calls handled by live agents. Doing so will keep you informed about how your customers are handling the IVR, and allow you to make sure that it is working properly and helping the business meet its goals. Cascio adds, “Make sure that your measures of success are strategic, customer centric and make a difference in how your business operates.” Quality monitoring can also provide key insight into how successful the IVR is. If a large percentage of callers are hanging up or spending too much time on the IVR, then you’ll know that it may need some more work.

Test, Test and Test Again – Of course, once you’ve made the necessary improvements to the IVR system, you’ll want to make sure they are right. That’s where testing comes in. Cascio recommends setting up focus groups to see if the average caller will understand the language and processes presented by the IVR. Additionally, usability tests will let you know if the IVR is usable and intuitive, and comprehensive user acceptance tests should be conducted prior to deployment.

A good IVR will make the contact center’s operations more efficient and more cost-effective. On the other hand, a poorly developed or executed IVR can be inefficient and wasteful, and might even scare your customers away. Since this is the first and often only interaction they’ll have with your business, you’ll want them to be greeted by a coherent and easy-to-use IVR system. By using these tips, you’ll be well on your way to creating a better IVR and, in time, enjoying increased operational efficiency and greater customer loyalty.

Thanks to ICMI for the article. 

Vimpelcom Exits Wind Mobile: Sells Out To Consortium Of Canadian Owners/Investment Funds

Russian-backed telecoms group Vimpelcom and its Egyptian-based subsidiary Global Telecom Holding (GTH) jointly announced today that they have agreed to sell all of their debt and equity interest in the Globalive group of companies in Canada, including Globalive Wireless Management Corp (Wind Canada), the operator of the Wind Mobile cellular network. Canada’s Globalive Capital (formerly AAL Holdings), the controlling shareholder of Wind Canada, and a group of investment funds are acquiring the entire Vimpelcom/GTH share for approximately CAD135 million (USD121.9 million), with the proceeds going to Vimpelcom in repayment of part of the debt owed to it, the group’s press release disclosed. At the same time, GTH will be released from all of its obligations under a related Shareholders Agreement and certain debt obligations of Wind Canada. The transaction is expected to close ‘shortly’, the release added. Under the deal, Globalive’s chairman and CEO Anthony Lacavera will continue to lead Wind Mobile, in partnership with investors led by Canadian hedge fund West Face Capital and US-based Tennenbaum Capital, the Globe & Mail reported earlier.

Vimpelcom, which is US-listed and headquartered in the Netherlands but remains majority Russian-owned, entered Wind Mobile’s ownership through its acquisition of Egypt’s Orascom Telecom Holding (later renamed GTH) in 2011. Globalive Capital currently controls a 66.7% voting interest and 34.3% economic stake in Wind Mobile, whilst GTH controls 65.08% of equity and 32.02% of voting shares in the cellco. The Canadian government blocked GTH from increasing its voting share to a majority interest in 2013 for undisclosed reasons, causing Vimpelcom/GTH to withhold additional investment in the venture – meaning that it boycotted Canada’s 700MHz 4G mobile licence auction. It is hoped that the takeover by the Canadian-US investment consortium will enable Wind to participate in further upcoming spectrum auctions.

Thanks to TeleGeography for the article.

Taking a Quantum Leap in Network Visibility

In our area of technology, we often think of our products in terms of how they compare to the rest of the products in the same market segment. Maybe we can highlight one facet of a unique feature and point out how nobody else offers it – or at least not in that way. It is tempting to compare features line-by-line when you have competitors who offer products that are generally similar. But now I have the opportunity to talk about a market where nobody else has gone. Ixia can show something that is truly game-changing for our customers.

Application intelligence (the ability to monitor packets based on application type and usage) is now available to provide the application and user insight that is desperately required. This technology is the next evolution in network visibility.

Application intelligence can be used to dynamically identify all applications running on a network. Distinct signatures for known and unknown applications can be identified and captured to give network managers a complete view of their network. In addition, well designed visibility solutions will generate additional (contextual) information such as geolocation of application usage, network user types, operating systems and browser types that are in use on the network.
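
As a simplified sketch of how signature-based application identification differs from port-based classification, the matcher below checks payload patterns and then attaches contextual details; the signatures and metadata fields are invented for illustration:

    # Simplified sketch of signature-based application identification:
    # match payload patterns rather than relying on TCP ports alone, then
    # enrich with contextual details. Signatures here are illustrative.
    SIGNATURES = {
        b"BitTorrent protocol": "BitTorrent",
        b"SSH-2.0": "SSH",
        b"HTTP/1.1": "HTTP",
    }

    def classify_flow(first_payload, context):
        app = "unknown"
        for pattern, name in SIGNATURES.items():
            if pattern in first_payload:
                app = name
                break
        # Pair the application ID with geolocation, OS, and browser
        # details when they can be derived from the traffic.
        return {"app": app, **context}

    print(classify_flow(b"SSH-2.0-OpenSSH_8.9", {"src_country": "DE", "os": "Linux"}))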

So let’s just get this out of the way. Nobody else has anything like Ixia’s Application and Threat Intelligence (ATI) Processor. I can’t talk about how others stack up, because they just don’t have anything like this at all.

So rather than do a typical competitive analysis line-by-line, I am going to walk through the solution high points that raised eyebrows and engaged customers at the recent Cisco Live and Black Hat tradeshows.

What Is It?

Ixia’s ATI Processor is best described like this:

  • It’s a fully-featured 48x10G NTO blade that populates the 7300 chassis. It enables all the standard visibility features that are so popular on the NTO: best-in-class GUI with drag-and-drop configuration, advanced filter compiler, 48x10G/1RU port density – all the stuff we already know and love. It is not some strange new thing you don’t understand and don’t know how to put into your network. At its core, it’s a completely functional blade for the 7300. Think of it like 3/4 of a 5288 on a blade.
  • Did I mention this is a normal NTO blade? Yes, you are going to use this in the visibility network you are already deploying. And of course it will talk to all the other ports and resources in the 7300 chassis.
  • It has, hidden inside, a whole different kind of product. This is the ATI Processor Resource. The ATI Processor Resource dramatically extends what can be done to monitor the network traffic that is already being passed through the blade.
    • Using the technology we gained from our BreakingPoint acquisition, the ATI Processor can recognize applications based on signatures, which involve much more than just the domain names, TCP port numbers, and other things we have traditionally had to use to try to classify application traffic. The system comes pre-loaded with hundreds of signatures for known applications, and it can learn new ones on the fly as traffic happens in real time.
    • All kinds of details about these applications are revealed in the ATI Processor Dashboard, which runs in a browser window (not in the NTO Java-GUI).
      • IP addresses (source/destination)
      • Geography (city, country, latitude and longitude, both source and destination)
      • Application Identifier
      • You can create filters to watch these things
      • Orthogonal views are available; “why is someone in <insert country where we don’t have an office> accessing our SVN repository? WHO is it?”
      • Many other things. All the stuff you can’t do with ordinary network statistics, we can do with ATI Processor.
    • NetFlow can be generated based on all of this new data detected by the ATI Processor. Not just any NetFlow, but also IxFlow, which extends NetFlow with dozens of new fields, including all of the interesting stuff like geography and application ID (a sketch of collector-side use follows this list).
    • This IxFlow is integrated into more and more of the NetFlow tools you are already using, like Splunk and Plixer.
    • Multiple NetFlow exporters are supported
    • NetFlow can be assigned to any one port on the ATI Processor card
    • All ports on the card share the ATI Processor resource. Traffic sent to the ATI Processor resource goes through a dynamic filter that is attached to the ATI Processor and configured in the NTO GUI. In this filter you choose which traffic you want analyzed; its output is preset to go to the ATI Processor. The ATI Processor is, therefore, effectively “out of band” to the flow of traffic from network to tool ports in the NTO. You just attach the port you want to monitor to the ATI Processor filter and, voila, it gets the ATI Processor treatment without affecting the other uses of that port for traditional visibility.
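
As promised above, here is a sketch of what a collector might do with flow records enriched with application ID and geography. The record layout is invented for illustration and is not the actual IxFlow field specification:

    from collections import Counter

    # Sketch of collector-side use of enriched flow records carrying
    # application ID and geography. The record layout is illustrative.
    def top_apps_by_country(records):
        counts = Counter((r["src_country"], r["app"]) for r in records)
        return counts.most_common()

    records = [
        {"app": "voip", "src_country": "US"},
        {"app": "voip", "src_country": "US"},
        {"app": "svn", "src_country": "DE"},
    ]
    print(top_apps_by_country(records))  # [(('US', 'voip'), 2), (('DE', 'svn'), 1)]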

Counterpoints

“Hey, I thought you said nobody else has this AT ALL. Lots of products do NetFlow!”

OK, good point. But they don’t have IxFlow, which is where all the cool stuff is. And they don’t have our dashboard. AND they don’t do this in the context of the visibility network you have already deployed. They certainly don’t do it within a class-leading visibility tool like the NTO.

“Yeah, well half the switch vendors and IDS and firewalls also do NetFlow, and I already have those things in my network . . .”

Remember, I said not to get hung up on NetFlow. We are talking about IxFlow! And switches and firewalls don’t:

  • Perform the kind of filtering we do, especially hitless with our killer GUI and all the other reasons you can’t use a switch in place of an NTO
  • Handle the rate of traffic flow the ATI Processor can handle and generate IxFlow
  • Integrate with your existing visibility architecture
  • Have access to traffic at all the points where you currently have Taps
  • Integrate traffic flows with other advanced features… like NetFlow plus deduplication plus 1µs-accurate timestamping plus load balancing...
  • Seriously, are you going to put a switch inline with every NTO port just to get it to generate NetFlow on the traffic you are monitoring? That doesn’t make much sense. It’s simpler, less expensive, and much more effective to just deploy the ATI Processor.

“Speaking of the dashboard, doesn’t this make us a competitor with tools like Splunk and Plixer?”

No. While the ATI Processor dashboard is very useful for configuration and some general debugging and visibility, it is not a dedicated and refined reporting tool on the level of our tool partners like Splunk and Plixer. However, our IxFlow greatly enhances what you can get out of a tool like Splunk or Plixer. I like to think of it like this: the ATI Processor supercharges your NetFlow reporting tools that you already have!

OK, You Have My Attention. How Do I Know it Will Work For Me?

The biggest challenge I have experienced regarding the ATI Processor is not in the value it brings or the utility of the solution. Mostly it’s just that customers are not used to looking for something like this from Ixia. Here’s what I learned from customers at Black Hat and Cisco Live.

  1. If you are thinking of Ixia’s NTO products, you are already interested in network visibility. You care about monitoring the traffic on your network. You understand the value of keeping an eye on what your users are doing, being able to debug issues on the fly, and you have invested in tools and resources to make this work. The ATI Processor blade in a 7300 is a natural part of this visibility plan.
  2. In the classic sense, the NTO has always monitored network traffic in terms of bytes and addresses and VLANs and that kind of thing. But when a user on your network has an outage, they experience it in terms of the application. You don’t get a call into the help desk saying, “all VLAN 19 traffic with destination 192.168.4.7 is being dropped at my desk”. You get a call saying, “I can’t complete a VOIP call” or “why can’t I connect to our SVN server”. Users see the network in terms of applications and you should have a way of monitoring it in those same terms. The ATI Processor delivers just that.
  3. There are cases where visibility into application-based traffic indicators, such as those shown by the ATI Processor, can be critical to your business. For example, let’s say you have an internal network with data on it that is shared across many geographies but must be kept secure. That data may be summarized as an application, like Salesforce.com, Perforce or Exchange. You probably would like to know when someone from a geography where you don’t have an office is accessing those applications on your intranet, right? Or what if you see a discovered dynamic app called “Paypal.com” show up on your network. That’s spoofing! You didn’t even know to look for it, which is why spoofing works for the bad guys. Wouldn’t you like to be able to rapidly see all of the users on the network who have used this application, so you can notify them of the breach? (A minimal sketch of this kind of alert follows this list.)
  4. Also, maybe there are application traffic behaviors that indicate changes in your customers’ patterns that you would like to know about. For example, if you are a cable provider who also offers internet service, you would probably like to track the trend of the use of your own VoD service vs. competitors like Vudu and Netflix, right? You’d like to know when a new competitor pops up, right? This gives you visibility not only into packets and network behaviors, but also into the potential future of your business.
  5. Remember, the ATI Processor dynamically learns about new applications when they come up on your network. Nothing is going to catch you off guard.
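
Here is the minimal alert sketch promised in point 3: flag any flow whose application is sensitive and whose source geography is not one where the company has an office. The field names and data are illustrative:

    # Minimal sketch of the alerting logic in point 3 above. All field
    # names and values are invented for illustration.
    OFFICE_COUNTRIES = {"US", "DE", "SG"}
    SENSITIVE_APPS = {"svn", "perforce", "exchange", "salesforce"}

    def audit_flows(flows):
        for flow in flows:
            if (flow["app"] in SENSITIVE_APPS
                    and flow["src_country"] not in OFFICE_COUNTRIES):
                yield flow

    flows = [
        {"user": "jdoe", "app": "svn", "src_country": "XZ"},
        {"user": "asmith", "app": "http", "src_country": "US"},
    ]
    for hit in audit_flows(flows):
        print(f"Investigate: {hit['user']} used {hit['app']} from {hit['src_country']}")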

Now, the key to all of this is, you are already deploying a visibility network. That’s the key. Don’t think about the ATI Processor as a whole new thing you have to deploy which also happens, by the way, to have 48 ports of NTO on it. Think of it as a 48-port NTO blade that does everything you need for traditional visibility, plus a ton of other things you really want but didn’t even know we offered.

In hundreds of conversations with Ixia visibility customers and those interested in our visibility products, not once has anyone told me they were not interested in what the ATI Processor does. On the contrary, most of you are already thinking about this problem and may even be actively working on solving it, but you just didn’t know to ask our sales team about it, because you may have thought you were stuck going to other sources for this kind of thing. Now you know that Ixia offers a tool that does this, as an accessory to its class-leading NTO. This is game-changing!

Not Just For NTO Users

The reality is that IxFlow supercharges what you can do with a tool like Plixer or Splunk. You may very well already be a user of a NetFlow analysis tool, but you might not yet be an Ixia NTO user. You have bought and invested in the kind of thing the ATI Processor does best, but you are not using it to its full potential. The ATI Processor is essential to get all of the value from your existing NetFlow tool. We are not competing with these NetFlow analysis tools, we are enhancing those tools while also offering superior visibility.

As you consider the need for both traditional network visibility offered by NPBs as well as NetFlow analysis, truly the best way to accomplish this is with an integrated solution that delivers superior network visibility as well as superior NetFlow capability. The ATI Processor is not only an upgrade over other NPBs on the market, but it is also an upgrade to your NetFlow tools.

The ATI Processor is a game-changing quantum leap in the network visibility space, and really allows the NTO to go where no other packet broker has gone before.

Additional Resources:

NTO Application and Threat Intelligence Processor

Ixia NTO solutions

Ixia Network Visibility Architecture

Thanks to Ixia for the article. 

Infosim Global Webinar Day- A View Of Infosim® StableNet®

It’s good to be #1!

A view of Infosim® StableNet® based on criteria and results of the EMA Radar™ for Enterprise Network Availability Monitoring System (ENAMS): Q3 2014

Join John Olson, VP Technical Services Americas, and Paul Krochenski, Regional Manager, for a webinar on: “A view of Infosim® StableNet® based on criteria and results of the EMA Radar™ on ENAMS: Q3 2014”. This webinar will provide you with comprehensive insight into the strengths and advantages of Infosim® StableNet® based on the following criteria of the EMA Radar™ on ENAMS:

  • Deployment & Administration
  • Cost Advantage
  • Architecture & Integration
  • Functionality
  • Vendor Strength

A recording of this webinar as well as a free download of the “EMA Radar™ for ENAMS Summary and Infosim® Profile” will be available to all who register!

Register today and reserve your seat in your desired time zone:

AMERICAS, Thursday, September 25th, 3:00pm – 3:45pm EDT (GMT-4)
EUROPE, Thursday, September 25th, 3:00pm – 3:45pm CEST (GMT+2)
APAC, Thursday, September 25th, 4:00pm – 4:45pm SGT (GMT+8)

Thanks to Infosim for the article.