To SPAN or to TAP – That is the question!

Ixia Network Visibility Solutions welcomes a guest blogger today, Tim O’Neill from LoveMyTool.

Network engineers and managers need to think about today’s compliance requirements and the limitations of conventional data access methods. This article is focused on TAPs versus port mirroring/SPAN technology.

SPAN is not all bad, but one must be aware of its limitations. As managed switches are an integral part of the infrastructure, one must be careful not to establish a failure point. Understanding what can be monitored is important for success. SPAN ports are often overused, leading to dropped frames, because LAN switches are designed to groom data (changing timing, adding delay), discard bad frames, and ignore all Layer 1 and Layer 2 information. Furthermore, typical SPAN port implementations cannot handle full-duplex (FDX) monitoring, and analysis of VLAN traffic can also be problematic.

Moreover, when dealing with data security compliance, the fact that SPAN ports limit views and insecurely transport monitored traffic through the production network could prove unacceptable in a court of law.

When used within its limits and properly focused, SPAN is a valuable resource to managers and monitoring systems. However, for 100% guaranteed views of network traffic, passive network TAPs are a necessity for meeting many of today’s access requirements as we approach larger deployments of 10 Gigabit and up. It’s in this realm that SPAN access limitations become more of an issue.

SPANs vs. TAPs

Until the early 1990s, using a TAP or test access port from a switch patch panel was the only way to monitor a communications link. Most links were WAN links, so an adaptor like the V.35 adaptor from Network General, or an access balun for a LAN, was the only way to access a network. In fact, most LAN analyzers had to join the network to really monitor it.

As switches and routers developed, along came a technology we call a SPAN port or mirroring port, and with this, monitoring was off and running. SPAN stands for Switched Port Analyzer and was a great way to effortlessly and non-intrusively acquire data for analysis. By definition, a SPAN port usually indicates the ability to copy traffic from any or all data ports to a single unused port, while usually disallowing bidirectional traffic on that port to protect against backflow of traffic into the network.

Analyzers and monitors no longer had to be connected to the network. Engineers could use the SPAN (mirror) port and direct packets from their switch or router to the test device for analysis.

Is a SPAN port a passive technology? No!

Some call a SPAN port a passive data access solution – but passive means “having no effect,” and spanning (mirroring) has a measurable effect on the data. Let’s look at the facts.

  • Spanning or mirroring changes the timing of the frame interaction (what you see is not what you get)
  • The spanning algorithm is not the primary function of the device; switching or routing is. Spanning is therefore not the first priority, and if replicating a frame becomes a burden, the hardware will temporarily drop the SPAN process.
  • If the speed of the SPAN port becomes overloaded, frames are dropped.
  • Proper spanning requires that a network engineer configure the switches correctly, which takes time away from more important tasks. Configuration can also become a political issue, constantly creating contention between the IT, security and compliance teams.
  • The SPAN port drops all packets that are corrupt or below the minimum size, so not all frames are passed on. All of these events can occur without any notification to the user, so there is no guarantee that one will get all the data required for proper analysis.
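For context, configuring a local SPAN session is itself a manual, per-switch task. On a Cisco Catalyst switch it typically looks something like the following (interface names are placeholders, and exact syntax varies by platform and IOS version):

```
! Mirror both directions of Gi0/1 to the analyzer attached on Gi0/24
monitor session 1 source interface GigabitEthernet0/1 both
monitor session 1 destination interface GigabitEthernet0/24
```

Every link to be monitored needs a session like this, and the switch enforces a small limit on concurrent sessions, which is part of why SPAN configuration so often becomes a point of contention between teams.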

In summary, the fact that SPAN ports are not a truly passive data access technology, or even entirely non-intrusive, can be a problem for data security compliance monitoring or lawful intercept. Since there is no guarantee of absolute fidelity, it is possible or even likely that evidence gathered by this monitoring process will be challenged in a court of law.

Are SPAN ports a scalable technology? No!

When we had only 10Mbps links and a robust switch (like ones from Cisco), one could almost guarantee they could see every packet going through the switch. With 10Mbps fully loaded at around 50% to 60% of the maximum bandwidth, the switch backplane could easily replicate every frame. Even with 100Mbps one could be somewhat successful at acquiring all the frames for analysis and monitoring, and if a frame or two here and there were lost, it was no big problem.

This has all changed with Gigabit and 10 Gigabit technologies, starting with the fact that maximum bandwidth is now twice the base bandwidth – so a Full Duplex (FDX) Gigabit link is now 2 Gigabits of data and a 10 Gigabit FDX link is now 20 Gigabits of potential data.
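The arithmetic behind this is simple. As a rough sketch (ignoring inter-frame gaps and protocol overhead), here is the worst-case oversubscription when a full-duplex link is mirrored to a single SPAN port:

```python
def span_oversubscription(link_gbps: float, span_port_gbps: float) -> float:
    """Worst-case ratio of mirrored traffic to SPAN port capacity.

    A full-duplex link can carry traffic in both directions at line
    rate simultaneously, so the mirror must handle up to twice the
    link's nominal rate.
    """
    full_duplex_gbps = 2 * link_gbps
    return full_duplex_gbps / span_port_gbps

# A 1 Gbps FDX link mirrored to a single 1 Gbps SPAN port: 2x oversubscribed
print(span_oversubscription(1, 1))    # 2.0
# A 10 Gbps FDX link mirrored to a 10 Gbps SPAN port: still 2x
print(span_oversubscription(10, 10))  # 2.0
```

Any ratio above 1.0 means frames must be dropped whenever both directions approach line rate at the same time, no matter how well the switch is provisioned.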

No switch or router can replicate/mirror all this data while also handling its primary job of switching and routing. It is difficult if not impossible to pass all frames (good and bad ones), including FDX traffic, at full rate, in real time, at non-blocking speeds.

Beyond the FDX requirement, we must also consider VLAN complexity and the difficulty of finding the origin of a problem once the frames have been analyzed and a problem detected.

From Cisco’s own white paper, “On SPAN Port Usability and Using the SPAN Port for LAN Analysis,” the company warns “the switch treats SPAN data with a lower priority than regular port-to-port data.” In other words, if any resource under load must choose between passing normal traffic and SPAN data, the SPAN loses and the mirrored frames are arbitrarily discarded. This rule applies to preserving network traffic in any situation. For instance, when transporting remote SPAN (RSPAN) traffic through an Inter Switch Link (ISL), which shares the ISL bandwidth with regular network traffic, the network traffic takes priority. If there is not enough capacity for the remote SPAN traffic, the switch drops it. Knowing that the SPAN port arbitrarily drops traffic under specific load conditions, what strategy should users adopt so as not to miss frames? According to Cisco, “the best strategy is to make decisions based on the traffic levels of the configuration and when in doubt to use the SPAN port only for relatively low-throughput situations.”

Hubs? How about them?

Hubs can be used for 10/100 access but they have several issues that one needs to consider. Hubs are really half duplex devices and only allow one side of the traffic to be seen at a time. This effectively reduces the access to 50% of the data.

The half-duplex issue often leads to collisions when both sides of the network try to talk at the same time. Collision loss is not reported in any way, and the analyzer or monitor does not see the data. The bigger problem is that if a hub goes down or fails, the link it is on is lost. As such, hubs no longer fit as an acceptable, reliable access technology, do not support Gigabit or faster access, and should not be considered.

Today’s “REAL” Data Access Requirements

To add more complexity and challenges to the SPAN port as a data access technology, consider the following:

  • We have entered a much higher utilization environment with many times more frames in the network.
  • We have moved from 10Mbps to 10Gbps Full Duplex – today many have even higher rates of 40 and 100Gbps.
  • We have entered into the era of data security, legal compliance and lawful intercept, which require that we monitor all of the data and not just “sample” the data – with the exception of certain very focused monitoring technologies (e.g., application performance monitoring)

These demands will continue to grow, as we have become a very digitally focused society. With the advent of VoIP and digital video we now have revenue-generating data that is connection-oriented and sensitive to bandwidth, loss and delay. The older methods need reviewing and the aforementioned added complexity requires that we change some of the old habits to allow for “real” 100% Full Duplex real-time access to the critical data.

In summary, being able to provide “real” access is not only important for data compliance audits and lawful intercept events; it is the law. Keeping our bosses out of jail has become very high priority these days; but I guess it depends on how much you like your boss.

When is SPAN port methodology “OK”?

Many monitoring products can and do successfully use SPAN as an access technology. These are effective for low-bandwidth application layer events like conversation analysis, application flows and connection information, and for access to reports from call managers, etc., where time based or frame flow analysis is not needed.

These monitoring requirements utilize a small amount of bandwidth and grooming does not affect the quality of the reports and statistics. The reason for their success is that they keep within the parameters and capability of the SPAN port and do not need every frame for successful reporting and analysis. In other words, a SPAN port is a very usable technology if used correctly and, for the most part, the companies that use mirroring or SPAN are using it in well-managed and tested methodologies.


Spanning (mirroring) technology is still viable for some limited situations, but as one migrates to FDX Gigabit and 10 Gigabit networks, and with the demands of seeing all frames for data security, compliance and lawful intercept, one must use “real” access TAP technology to fulfill the demands of today’s complex analysis and monitoring technologies. With today’s large bandwidths the TAP should feed an advanced and proactive filtering technology for the clearest of view!

If the technology demands are not enough, network engineers can focus their infrastructure equipment on switching and routing and not spend their valuable resources and time setting up SPAN ports or rerouting data access.

In summary, the advantages of TAPs compared to SPAN/mirror ports are:

  • TAPs do not alter the time relationships of frames – spacing and response times are especially important with real-time protocols such as VoIP and in Triple Play analysis, including FDX analysis.
  • TAPs do not introduce any additional jitter or distortion nor do they groom the flow, which is very important in all real-time flows like VoIP/video analysis.
  • TAPs pass VLAN tags, which SPAN ports normally do not – stripped tags can lead to falsely detected issues and difficulty in finding VLAN problems.
  • TAPs do not groom data nor filter out physical layer errored packets.
  • Short or large frames are not filtered/dropped.
  • Bad CRC frames are not filtered.
  • TAPs do not drop packets regardless of the bandwidth.
  • TAPs are not addressable network devices and therefore cannot be hacked.
  • TAPs have no setups or command line issues so getting all the data is assured and saves users time.
  • TAPs are completely passive and do not cause any distortion even on FDX and full bandwidth networks.
  • TAPs do not care whether the traffic is IPv4 or IPv6; they pass all traffic through.

So should you use a TAP to gain access to your network frames? Now you know the differences, and it is up to you to decide based on your goals!

The four main types of TAPs, as described by Garland Technology, are:

Breakout TAPs are the simplest type of TAP. In their most basic form they have four ports. The network traffic travelling in one direction comes in port A and is sent back out port B unimpeded. Traffic coming from the other direction arrives in port B and is sent back out port A, also unimpeded. The network segment does not “see” the TAP. At the same time the TAP sends a copy of all the traffic to monitoring ports C & D of the TAP. Traffic travelling from A to B in the network is sent to one monitoring port and the traffic from B to A is sent out the other, both going to the attached tool.
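The port behavior described above can be sketched as a toy model. A real breakout TAP does this passively in hardware with no addressable logic, so the code below is purely illustrative:

```python
def breakout_tap(frame: bytes, ingress: str) -> dict:
    """Toy model of a four-port breakout TAP.

    Traffic entering network port A exits network port B unimpeded
    (and vice versa), while a copy of each direction goes to its own
    monitor port: A-to-B traffic to port C, B-to-A traffic to port D.
    """
    if ingress == "A":
        return {"network_out": "B", "monitor_out": "C", "frame": frame}
    if ingress == "B":
        return {"network_out": "A", "monitor_out": "D", "frame": frame}
    raise ValueError("ingress must be 'A' or 'B'")

result = breakout_tap(b"\x01\x02\x03", "A")
print(result["network_out"], result["monitor_out"])  # B C
```

Note that the frame is forwarded unchanged in both outputs: nothing is groomed, retimed, or filtered, which is exactly the property SPAN ports cannot guarantee.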

IMPORTANT: Make sure the TAP incorporates a failsafe feature. This will ensure that if the TAP were to lose power or fail, the network will not be brought down as a result.

Aggregating TAPs provide the ability to take network traffic from multiple network segments and aggregate, or link bond, all of the data to one monitoring port. This is important because you can now use just one monitoring tool to see all of your network traffic. With the addition of filtering capability in the TAP, you can further enhance your tool’s efficiency by sending it only the data it needs to see.

Regeneration TAPs facilitate taking traffic from a single network segment and sending it to multiple ports. This allows you to take traffic from just one point in the network and send it to multiple tools. Therefore different teams in your company, such as security, compliance, or network troubleshooting, can see all the data at the same time for their own requirements. This eliminates team contention over available network monitoring points.

Bypass TAPs allow you to place network devices like IPS/IDS, data leakage prevention (DLP), firewall, content filtering and security devices, that need to be installed inline, into the network while removing the risk of introducing a point of failure. With a bypass TAP, failure of the inline device, reboots, upgrades, or even removal and replacement of the device can be accomplished without taking down the network. In applications requiring inline tools, bypass TAPs save time, money and network downtime.

In the next part I will review and compare VACLs, RSPAN and Cloud TAPs.

Want more on TAPs, SPAN ports, even comparative tests and Sharkfest classes? Visit

To read an excellent paper on Full Duplex TAP basics go to:

Here’s a little bit about Tim:

Tim O’Neill – The “Oldcommguy™”
Technology Website –
Committee Chairman for Cyber Law Enforcement training and Cyber Terrorism
For Georgia State Senator John Albers
Please honor and support our Troops, Law Enforcement and First Responders!
All Gave Some – Some Gave All – All deserve our Respect and Support!

Thanks to Ixia for the article. 

SDN and Network Management: Infosim® StableNet® Current State of Affairs and Roadmap

Dr. David Hock, Senior Consultant R&D, Infosim®, discusses how Infosim® StableNet® is integrating SDN and Network Management to increase the benefits for their customers.

Recently, Software Defined Networking (SDN) has become a very popular term in the area of communication networks. The paradigm shift introduced by SDN is a promising enabler for many use cases. However, it also opens up a lot of new challenges for Network Management Software Providers. Infosim® is working together with customers, SDN experts, and academic researchers to extend network management systems to cover SDN.

Software Defined Networking (SDN)

The key idea of SDN is to introduce a separation of the control plane and the data plane of a communication network. The control plane is removed from the normal network elements into typically centralized control components. The normal elements can be replaced by simpler and therefore cheaper off-the-shelf devices that only take care of the data plane, i.e. forwarding traffic according to rules introduced by the control unit. Today’s most popular realization of SDN is OpenFlow, developed at Stanford University around 2008.

The approach of a centralized control plane brings several benefits, including, among others, reduced investment costs due to cheaper network elements, and a better programmability due to a centralized control unit and standardized vendor-independent interfaces, as indicated in Figure 2. In particular, SDN is also one of the key enablers to realize network virtualization approaches which enable companies to provide application-aware networks and simplify cloud network setups.

However, despite the benefits it brings, SDN also opens up new challenges. One of these challenges, particularly in the interest of Infosim® as one of the leading companies in the Network Management area, is how to integrate SDN into a traditional Network Management System (NMS). Typical parts of an NMS, such as configuration and monitoring, need to be revised and adapted to include the technologies of SDN.

Necessary extensions to include SDN in Network Management Systems (NMS)

Two main tasks of a Network Management System are configuration and monitoring. Figure 1 illustrates a state-of-the-art network managed by StableNet®. For monitoring, the NMS asks for configuration and performance information via standardized protocols such as SNMP, WMI, IP SLA, or NetFlow. The configuration of different network entities is usually done in a centralized way using the unified Infosim® StableNet® interface. However, this requires the network management vendor to support and maintain many different proprietary backend protocols.

Infosim StableNet- State-of-the-art network management

In non-SDN environments, the network configuration is separated from the network control. The centralized control approach introduced by SDN, however, enables new possibilities to integrate network management functions into the network control. That way new use cases are possible:

(1) By integrating NMS information into the network control, information (e.g., about the legacy network) can be made available to an SDN controller and then used to support routing decisions.

(2) By integrating SDN information into an NMS, the functionality of the NMS can be extended, including, e.g., the discovery of an SDN topology or new passive monitoring approaches enabled by SDN.

Integrating SDN and StableNet®

SDN is becoming more and more popular, and a steadily rising number of SDN devices is already available. We therefore expect that for many Infosim® customers the importance of SDN will increase significantly in the coming years. With this in mind, Infosim® aims to stay one step ahead of the crowd by integrating SDN and StableNet® now.

New mechanisms that allow the configuration of SDN-based networks are needed and have already been implemented by Infosim®. The challenge here is that SDN networks are still in development, so an agile development approach is needed to cope with the changes. Requirements for these mechanisms include device management, bootstrapping, operational configuration, security, the coverage of mixed environments, and many others.

Infosim- Concept of integrating SDN and StableNet

Figure 2 illustrates how an integration of SDN and Infosim® StableNet® is realized. In Infosim®’s concept, bidirectional communication between Infosim® StableNet® and the SDN controller is possible. This way, both the NMS and the SDN world can profit from each other through access to additional information that would not otherwise be available.

One of the provided extensions integrates the configuration of SDN controllers, e.g. OpenDaylight, including e.g. the definition of flow table rules, directly in StableNet®. Another extension is the inclusion of performance counters to monitor SDN flows. The current standards do not offer these capabilities directly, so an extension provided by Infosim® is needed. Figure 3 shows example screenshots of the described extensions.
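To give a flavor of what a flow table rule pushed through such a controller integration looks like, here is an illustrative OpenDaylight RESTCONF-style JSON body (field names and URL paths vary by controller version, so treat this as a sketch, not the documented Infosim® extension):

```json
{
  "flow": [
    {
      "id": "1",
      "table_id": 0,
      "priority": 100,
      "match": {
        "ethernet-match": { "ethernet-type": { "type": 2048 } },
        "ipv4-destination": "10.0.0.2/32"
      },
      "instructions": {
        "instruction": [
          {
            "order": 0,
            "apply-actions": {
              "action": [
                { "order": 0,
                  "output-action": { "output-node-connector": "2" } }
              ]
            }
          }
        ]
      }
    }
  ]
}
```

A rule like this matches IPv4 traffic destined for 10.0.0.2 and forwards it out switch port 2; per-rule packet and byte counters on such flows are what an OpenFlow-aware monitoring extension would read back.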

Figure 3: Example screenshots of the Infosim® StableNet® SDN module

(a) Integrated configuration approach


(b) Monitoring approach using OpenFlow statistics and a custom controller module to communicate with Infosim® StableNet®


Ongoing SDN extensions to StableNet®

We are continuing our efforts to map the current state of technology into StableNet®. Together with different SDN experts and academic researchers, Infosim® is also taking part in bleeding-edge projects to follow SDN development and future SDN use cases, and to extend Infosim® StableNet® with promising features enabled by SDN.

One key driver to successfully implement SDN networks is the availability of a service catalogue. The idea of a service catalogue is to provide a holistic view on any service in a network including all involved entities, such as network components, servers, and user devices. An example is illustrated in Figure 4.

Figure 4 Infosim® StableNet® Service Catalogue to provide a holistic view on services including all of the involved components and devices


Different colors indicate various services currently running in the network as well as the entities involved in these services. Offering a holistic, aggregated view on these services enables the generation of a service matrix that enormously facilitates different NMS tasks, including, e.g., configuration or SLA monitoring.

In a time when green computing is increasingly discussed, the concept of network virtualization makes it very appealing to progressively replace physical hardware devices with virtual software instances. SDN is a very promising enabler for such an approach. Another popular term frequently used in this context is Network Functions Virtualization (NFV), in which certain network functions are virtualized and provided on commodity hardware. Often, large economies of scale can be achieved by aggregating different services as virtually separated instances on a single physical infrastructure. However, this approach also brings new security implications regarding the isolation of different services in a virtualized network infrastructure. Some of these security issues are currently targeted by Infosim®:

(1) In the age of smartphones and tablets, it is more and more common for employees to bring and use their own devices in the company’s network. If a physical separation of the company’s production network from the network to which the mobile devices are connected is not feasible, network virtualization is a promising alternative for separating the different types of traffic. However, to guarantee the security of sensitive data in the company’s network, the isolation of the virtualized networks has to be guaranteed.

(2) A similar challenge arises when various services with different security requirements run as virtual instances in a single physical network. Again, isolation has to be guaranteed to assure that no security SLAs are violated.

Infosim® StableNet® is already extended with SDN capabilities. We are continuously working to integrate new SDN technology into our product such that a smooth transition to SDN or a mixed operation of SDN and non-SDN networks is possible.

About StableNet®

The Infosim® StableNet® unified management solution provides a complete, all-embracing, multi-functional management wrap around your entire infrastructure, enabling consistent end-to-end management. This results in faster resolution (lower MTTR), smarter operational management through lower operating costs, seamless support, and flexibility in scaling and provisioning a changing environment to meet new business products and developments, providing a great customer experience and service differentiation.

The Infosim® StableNet® solution is a flexible service provisioning and service assurance multifunctional platform that provides customers with a much broader range of capabilities that include:

Thanks to InterComms for the article.

Rogers buys Source Cable for USD142m

Source Cable, the last remaining independent cable TV network operator in Hamilton, Ontario, has been sold to national giant Rogers Communications for CAD160 million (USD142 million) in a deal expected to close this quarter, reports The Hamilton Spectator. Source Cable provides TV, internet and telephony to around 26,000 homes with 43,000 revenue-generating units (RGUs), while Rogers stated that the purchase means the cable giant will have about 60,000 customers in Hamilton. The paper notes that going forward Hamilton’s cable subscribers will be served by Cogeco and Rogers, co-owners of Cable 14.

Thanks to TeleGeography for the article.

Big Data Monitoring

Robust Monitoring to Meet Big Data Challenges

Unstructured data accounts for as much as 80 percent of most companies’ total data volume. This “Big Data” has customarily taken too long and cost too much to process and analyze. Now, however, emerging Big Data initiatives stand to transform these vast, untapped resources from a costly storage challenge into vital business intelligence for marketing, product development, stock trading, genetic research, and more.

Big Data promises quantum gains in productivity and competitive advantage. These gains are achieved by distributing the massive task of preparing data for analysis among large numbers of servers. To make it all work, IT departments need greater visibility into networks and applications in order to prioritize, filter, and synthesize information. Predictably, monitoring the performance of the network, applications, and security becomes more challenging as Big Data projects scale.

Ixia’s network visibility solutions deliver the actionable insight needed to harness the power of Big Data, while reducing the effort required. With the monitoring of applications, networks, and security growing in both importance and complexity, Ixia’s network monitoring switches introduce vital intelligence between data center production networks and monitoring tools, while performing the sophisticated filtering, aggregation, and load balancing needed to keep pace with the challenges of rising data volumes.

Based on industry-acclaimed innovation, Ixia’s Big Data solutions:

  • Enable connection of numerous monitoring tools to high-capacity networks for improved visibility into the performance of highly distributed applications
  • Decouple and optimize the flow of data between network monitoring points and monitoring tools, enabling data to be shared, filtered, de-duplicated and directed more efficiently
  • Aggregate and filter data from across the network to assure each monitoring solution receives the exact data it needs
  • Introduce a drag-and-drop control panel for ease of management and configuration
  • Feature automation that facilitates processing of massive quantities of data unattended

Now Ixia delivers the robust visibility that makes Big Data initiatives cost-effective and manageable. Our sophisticated switching capabilities enable efficient, intelligent monitoring of network and application performance, helping to optimize security, scalability, and compliance.

Thanks to Ixia for the article

Rogers’ Revenue Inches Up 1%, Although Cable Declines 1%

Canadian quadruple-play operator Rogers Communications’ consolidated revenue increased 1% in the third quarter of 2014 to CAD3.252 billion (USD2.896 billion), reflecting revenue growth of 2% year-on-year in mobile operations and 3% in Business Solutions, while revenue in the group’s Media division was steady, and was partially offset by a decline of 1% in cable operations (TV, broadband and fixed telephony). Mobile revenue was boosted by higher equipment sales and moderate growth in service revenue, but cable revenue decreased as a result of TV subscriber losses over the past year, partially offset by continued internet revenue growth and the impact of pricing changes. Supporting its ongoing wireless sales growth, Rogers activated a gross total of 614,000 smartphones on its network in the three months to the end of September 2014, of which 31% were new subscribers, and reported that smartphone customers now represent 77% of all its post-paid wireless subscribers. In 4G LTE developments, Rogers reported that in Q3 it deployed its recently acquired 700MHz spectrum to expand and upgrade mobile broadband services in rural and urban communities in the provinces of Ontario, British Columbia, Alberta, Quebec and New Brunswick (having initially switched on commercial 700MHz frequencies in selected Ontario, British Columbia and Alberta locations).

Thanks to TeleGeography for the article. 

Ottawa Regional Contact Centre Association Presents: Social Media Forum


November 13, 2014: Nepean Sailing Club, 3259 Carling Ave, Ottawa

8:30 – 11:30am

Social media has enabled you to connect with your customers like never before. Businesses of all sizes recognize that social media is a vital channel and want to optimize their presence to promote their company’s products and services across all channels to reinforce the overall customer experience. Have you recently considered how to:

  • Connect effectively with your customers through social media
  • Manage your online community and create a better customer experience
  • Measure, analyze and improve your results
  • Optimize your service channels
  • Create relevant and engaging content

Learn and share with others. Discuss emerging practices, collect new ideas and explore innovative approaches together.

Parking: Onsite public parking
Time: 8:30 – 9:00 am – Registration and Networking
9:00 to 11:30am – Introduction and forum discussion
Cost: Free to ORCCA members
Non-Members – $30.00 in advance payable by Visa or MasterCard
RSVP to Register early, space is limited.


Infosim® Global Webinar Day, October 30th, 2014- Stop flying blind through cloud services!

Join Peter Moessbauer, Strategic Alliances Manager for a Webinar on how to monitor and ensure cloud services quality with Infosim® StableNet®.

This Webinar will provide insight into:

  • How to deploy a cost-efficient distributed monitoring solution from the cloud user’s perspective.
  • How to leverage our new StableNet® Agents deployed on a Raspberry / Banana Pi type platform to:
    • Monitor services infrastructure availability and performance from the user’s perspective.
    • Run active End-to-End tests for VoIP and video services from customer sites.
    • Proactively spot service outages and degradations before your customers notice them.
  • How to roll out monitoring and service assurance management while keeping pace with your cloud services offerings.

Register today and reserve your seat in the desired timezone:

AMERICAS, Thursday, October 30th, 4:00 pm – 4:30 pm EDT (GMT-4)
EUROPE, Thursday, October 30th, 3:00 pm – 3:30 pm CET (GMT+1)
APAC, Thursday, October 30th, 4:00 pm – 4:30 pm SGT (GMT+8)

A recording of this webinar will be available for all who register!

For more information, check out these resources

Infosim’s Five Steps to Building Total Visibility and Control of your Cloud Infrastructure using StableNet


StableNet® – WHITE PAPER Managing End-­to-­End VoIP Networks

Thanks to Infosim for the article. 



Aligning IT with Business via Performance Management

Much of the discussion around the Observer Platform 17 release has focused on how the designs of the new user interface (UI) and other enhancements will assist network and operations teams to more easily manage service and application performance.

This performance data and analysis isn’t just of value to IT but to the overall business. The challenge for performance management solutions has been providing this intelligence in a way that can be easily accessed and understood by other IT and business teams. The Observer Platform 17 both expands useful analysis available to business groups and makes it easier to use the data with systems familiar to these groups.

Enhancement: Expanding Web Service Analytics

  • Benefit: Strengthens visibility into how users consume company web resources, specifically as it relates to a web-based app’s device parameters like OS, mobile and desktop platform details, and browser type.
  • Business Value: Knowing not just “what” but “how” customers are accessing data is pivotal to optimizing web content and quantifying the effectiveness of customer-facing web interactions.
  • In Practice Example: For the marketing team launching web initiatives, these metrics provide details on how visitors are accessing the website, and enhance their understanding of the user experience by providing response-time and error metrics. Additionally, when network-based problems occur that impact marketing web programs, the network team, which has access to the packets, can resolve them.
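
The device parameters mentioned above (OS, platform, browser type) are typically derived from the HTTP User-Agent header. As a rough illustration only, and not how the Observer Platform actually classifies traffic, here is a minimal heuristic sketch in Python:

```python
import re

def classify_user_agent(ua: str) -> dict:
    """Very rough User-Agent classification: platform, browser, OS.

    A heuristic sketch only -- production analytics engines rely on
    maintained parser databases, not hand-rolled checks like these.
    """
    platform = "desktop"
    if re.search(r"Mobile|Android|iPhone|iPad", ua):
        platform = "mobile"

    # Order matters: Chrome UAs also contain "Safari/", so test Chrome first.
    if "Firefox/" in ua:
        browser = "Firefox"
    elif "Edg/" in ua:
        browser = "Edge"
    elif "Chrome/" in ua:
        browser = "Chrome"
    elif "Safari/" in ua:
        browser = "Safari"
    else:
        browser = "unknown"

    os_name = "unknown"
    for token, name in [("Windows", "Windows"), ("Mac OS X", "macOS"),
                        ("Android", "Android"), ("iPhone", "iOS"),
                        ("Linux", "Linux")]:
        if token in ua:
            os_name = name
            break

    return {"platform": platform, "browser": browser, "os": os_name}

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/119.0 Safari/537.36")
print(classify_user_agent(ua))
# -> {'platform': 'desktop', 'browser': 'Chrome', 'os': 'Windows'}
```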

JDSU Network Instruments Observer 17 Platform

Enhancement: Third-Party System Integration via RESTful APIs

  • Benefit: Simplifies sharing of performance data with other groups. A RESTful API is a programming interface that uses standard HTTP methods such as GET, PUT, POST, and DELETE. Using this universal access method enables any solution to connect to the Observer Platform to access data or even manage the solution remotely.
  • Business Value: Other teams in an organization can interact and view performance data and analysis from the Observer Platform from the tools and workflows that they use on a daily basis. This allows them to proactively track performance of critical business systems, and view these metrics alongside business metrics.
  • In Practice Example: Support staff for a retail chain could integrate the Observer Platform into their helpdesk system via Apex’s RESTful API to monitor points of sale (PoS) on their network. The Observer Platform could instantly alert the service desk of an anomaly or system condition that could soon negatively impact users. The early alerts, performance analysis, and access to packets allow the staff to take proactive steps to remediate the issue before it impacts the PoS and customers.
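
The helpdesk integration described above can be sketched as follows. Note that the endpoint URL, authentication scheme, and alert payload shape here are invented for illustration; the actual Apex REST API will differ, so consult the Observer Platform documentation before building against it.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical endpoint -- the real Apex REST API paths will differ.
APEX_ALERTS_URL = "https://apex.example.com/api/v1/alerts?severity=critical"

def fetch_open_alerts(url: str, token: str) -> list:
    """GET alerts from the (hypothetical) REST endpoint, keep open ones."""
    req = Request(url, headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        alerts = json.load(resp)
    return [a for a in alerts if a.get("state") == "open"]

def to_ticket(alert: dict) -> dict:
    """Map an alert record to a minimal helpdesk-ticket record."""
    return {
        "title": f"[{alert['severity'].upper()}] {alert['summary']}",
        "site": alert.get("site", "unknown"),
    }

# Offline demonstration with a sample payload (no network call needed):
sample = [{"severity": "critical", "summary": "PoS latency spike",
           "site": "store-042", "state": "open"}]
print([to_ticket(a) for a in sample if a["state"] == "open"])
# -> [{'title': '[CRITICAL] PoS latency spike', 'site': 'store-042'}]
```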

JDSU Network Instruments Observer Apex

With IT playing a key role in helping businesses to develop competitive advantages and nimbly respond to changing markets, it’s critical that network teams can facilitate the sharing of performance intelligence. This also allows IT and business teams to evaluate the success of business operations and initiatives. The new features of the Observer Platform 17 mark a significant step forward in enabling the network team and IT to more closely align with business processes and goals.

Thanks to Network Instruments for the article. 

Ixia’s new Ebook- The Network Through a New Lens: How a Visibility Architecture Sharpens the View

“Enter the Visibility Architecture”

“Buying more tools to deal with spiraling demands is counter-productive – it’s like trying to simplify a problem by increasing complexity. Visibility merits its own architecture, capable of addressing packet access and packet stream management. A visibility architecture that collects, manages, and distributes packet streams for monitoring and analysis is ideal for cost-savings, reliability, and resilience. The economic advantages of such end-to-end visibility are beyond debate.

An architectural approach to visibility allows IT to respond to the immediate and long-range demands of growth, management, access, control, and cost issues. This architecture can optimize the performance and value of tools already in place, without incurring major capital and operational costs. With the ability to see into applications, a team can drill down instantly from high-level metrics to granular details, pinpoint root causes, and take action at the first – or even before the first – sign of trouble, lowering Mean Time to Repair (MTTR) dramatically.

A scalable visibility architecture provides resilience and control without adding complexity. Because lack of access is a major factor in creating blind spots, a visibility architecture provides ample access for monitoring and security tools: network taps offer reliable access points, while NPBs contribute the advanced filtering, aggregation, deduplication, and other functions that make sure these tools see only traffic of interest.
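
The deduplication function an NPB performs can be sketched conceptually: hash each packet and drop repeats seen within a recent window, so downstream tools never process the same packet twice. This is a simplified illustration only; real packet brokers hash selected header/payload fields in hardware and bound the match window by time rather than by count.

```python
import hashlib
from collections import OrderedDict

def dedupe(packets, window=1024):
    """Drop duplicate packets seen within the last `window` unique digests.

    Conceptual sketch of NPB-style deduplication, not a hardware-accurate
    model: NPBs typically use a time-bounded window and configurable
    header offsets when computing the digest.
    """
    seen = OrderedDict()  # digest -> None, kept in insertion order
    for pkt in packets:
        digest = hashlib.sha1(pkt).digest()
        if digest in seen:
            continue  # duplicate within the window: drop it
        seen[digest] = None
        if len(seen) > window:
            seen.popitem(last=False)  # evict the oldest digest
        yield pkt

# Duplicates arise when the same packet is captured at multiple points:
stream = [b"syn", b"syn-ack", b"syn", b"data", b"data"]
print(list(dedupe(stream)))
# -> [b'syn', b'syn-ack', b'data']
```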

Application- and session-aware capabilities contribute higher intelligence and analytical capabilities to the architecture, while policy and element management capabilities help automate processes and integrate with existing management systems. Packet-based monitoring and analysis offers the best view into the activity, health, and performance of the infrastructure. Managing a visibility architecture requires an intuitive visual/graphical interface that is easy to use and provides prompt feedback on operations – otherwise, the architecture can become just another complexity to deal with.”

Ixia Visibility Architecture

The Ixia Network Visibility Architecture encompasses network and virtual taps, inline bypass switches, inline and out-of-band NPBs, application- and session-aware monitoring, and a management layer.

Download the ebook here

Ixia The Network Through a New Lens

Thanks to Network World for the article. 

VelaSync™ High Speed Time Server

Spectracom VelaSync High Speed Time Server

High Performance NTP Server, PTP Grandmaster and Network Sync Monitor

VelaSync™ high speed time server with TimeKeeper™ inside is a network appliance designed for high frequency trading and other low-latency network applications. The combination of FSMLabs’ TimeKeeper, with its highly optimized timing protocols and management functions, Spectracom’s precision GPS timing technology, and the flexibility of commodity hardware offers exceptional performance and keeps pace with the needs of evolving network infrastructure. The server offers multiple 1GbE (RJ-45) and 10GbE (SFP+) network ports for set-up, management, and simultaneous NTP and PTP server/grandmaster capability.

Flexible Configuration Provides Reliable, Secure Time

TimeKeeper’s web-based user interface simplifies configuration of multiple time sources for resiliency against GPS attacks, spoofing or jamming, and equipment failures. For example, the server can easily be set up to use a PTP source as a backup to the on-board GPS and use an NTP source as a cross check. The servers can be set up to back up each other so that if one fails, the time service continues. It includes redundant hot-swap power supplies.
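
The failover-with-cross-check behavior described above can be sketched conceptually: prefer the highest-priority healthy source, but reject any candidate that disagrees with an independent reference by more than a tolerance. The source names, priorities, and skew threshold below are invented for illustration; TimeKeeper's actual selection logic is proprietary.

```python
def select_time_source(sources, cross_check="ntp", max_skew=0.005):
    """Pick the highest-priority healthy time source, sanity-checked
    against an independent cross-check reference.

    `sources` maps name -> {"priority": int (lower = preferred),
    "healthy": bool, "offset_s": measured offset in seconds}.
    Conceptual sketch only -- thresholds and structure are hypothetical.
    """
    candidates = sorted(
        ((name, info) for name, info in sources.items()
         if info["healthy"] and name != cross_check),
        key=lambda kv: kv[1]["priority"],
    )
    ref = sources.get(cross_check)
    for name, info in candidates:
        if ref and ref["healthy"]:
            if abs(info["offset_s"] - ref["offset_s"]) > max_skew:
                continue  # disagrees with cross-check: possible spoofing
        return name
    return None  # holdover: no trustworthy source available

sources = {
    "gps": {"priority": 0, "healthy": False, "offset_s": 0.0},   # jammed
    "ptp": {"priority": 1, "healthy": True,  "offset_s": 0.0002},
    "ntp": {"priority": 9, "healthy": True,  "offset_s": 0.0004},
}
print(select_time_source(sources))
# -> ptp  (GPS is down; PTP agrees with the NTP cross-check)
```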

Network Sync Monitoring in a Single At-a-glance Instance

A unique aspect of TimeKeeper is its ability to auto-discover and monitor your network’s synchronization topology. From a single pane of glass, you can see where the server’s time is going, monitor downstream clients, and discover other available time sources. This lets you verify redundancies and failover options, and identify single points of failure and “choke points” – everything related to time sync across the enterprise, in one view.

Spectracom Velasync Deployment

Thanks to Spectracom for the article.