Why SNMP Monitoring is Crucial for your Enterprise

What is SNMP? Why should we use it? These are common questions people ask when deciding whether it’s the right solution for them, and the answers are simple.

Simple Network Management Protocol (SNMP) is an “Internet-standard protocol for managing devices on IP networks”. Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks and more.

Key functions

  • Collects data about its local environment.
  • Stores and retrieves administration information as defined in the MIB.
  • Signals an event to the manager.
  • Acts as a proxy for some non–SNMP manageable network device.

SNMP typically uses one or more administrative computers, called managers, which have the task of monitoring or managing a group of hosts or devices on a computer network.

An SNMP monitoring tool provides valuable insight to any network administrator who requires complete visibility into the network. On each managed system, a software agent reports information via SNMP to the manager, making it a primary component of a complete management solution.

The agents expose management data on the monitored systems as variables. The protocol also permits active management tasks, such as modifying and applying a new configuration through remote modification of these variables.
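The manager/agent interaction described above can be sketched as a toy in-memory model (the class and the OID below are illustrative; this is not a real SNMP library):

```python
# Minimal in-memory model of the SNMP manager/agent pattern:
# an agent exposes management data as variables keyed by OID,
# and a manager reads (GET) or remotely modifies (SET) them.

class Agent:
    def __init__(self, mib):
        self.mib = dict(mib)  # OID -> value, as defined in the MIB

    def get(self, oid):
        # Manager polls the agent for one variable
        return self.mib[oid]

    def set(self, oid, value):
        # Remote modification of a variable, e.g. applying new config
        self.mib[oid] = value
        return value

# "1.3.6.1.2.1.1.5.0" is sysName in the standard MIB-2 tree
agent = Agent({"1.3.6.1.2.1.1.5.0": "core-switch-01"})
print(agent.get("1.3.6.1.2.1.1.5.0"))   # core-switch-01
agent.set("1.3.6.1.2.1.1.5.0", "core-switch-renamed")
print(agent.get("1.3.6.1.2.1.1.5.0"))   # core-switch-renamed
```

A real deployment would use an SNMP library and UDP port 161, but the GET/SET shape of the exchange is the same.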

Companies such as Paessler and ManageEngine have been providing customers with reliable SNMP monitoring for years, and it’s obvious why.

Why use it?

It delivers information in a common, non-proprietary manner, making it easy for an administrator to manage devices from different vendors using the same tools and interface.

Its power is in the fact that it is a standard: one SNMP-compliant management station can communicate with agents from multiple vendors, and do so simultaneously.

Another advantage of SNMP is the type of data that can be acquired. For example, when using a protocol analyzer to monitor network traffic from a switch’s SPAN or mirror port, physical layer errors are invisible, because switches do not forward error packets to either the original destination port or the analysis port.

However, the switch maintains a count of the discarded error frames, and this counter can be retrieved via an SNMP query.
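Polling that counter periodically turns it into an error rate. Since SNMP Counter32 values wrap at 2^32, the delta calculation has to allow for rollover; a minimal sketch (the readings and interval are illustrative):

```python
# Turn two successive readings of an SNMP Counter32 (e.g. a switch's
# discarded-error-frame counter) into a rate. Counter32 wraps at 2**32,
# so a smaller second sample means the counter rolled over.

COUNTER32_MAX = 2**32

def frames_per_second(prev, curr, interval_s):
    delta = curr - prev
    if delta < 0:  # counter wrapped between polls
        delta += COUNTER32_MAX
    return delta / interval_s

print(frames_per_second(100, 400, 60))        # 5.0 discarded frames/s
print(frames_per_second(2**32 - 10, 20, 60))  # wrap handled: 0.5
```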


When selecting an SNMP monitoring solution, choose one that delivers full network coverage for multi-vendor hardware networks, including a console for managing devices anywhere on your LAN or WAN.

If you want additional information, download our free whitepaper below.

NMSaaS- Top 10 Reasons to Consider a SaaS Based Solution

Thanks to NMSaaS for the article. 


Bell Reaches 120 More LTE Communities

Bell Canada yesterday announced that 120 more small communities across Quebec and Ontario have joined Bell Mobility’s 4G LTE mobile broadband network. Employing new 700MHz spectrum (which first went commercial last April) and other spectrum assets, Bell is expanding LTE services to small towns, rural communities and remote locations in every region of Canada including the North. Bell earlier announced the launch of 4G LTE service in 52 small communities across all four provinces in Atlantic Canada and will make further LTE rollout announcements in 2015. Bell’s LTE footprint already covers 86% of the national population and the company plans to bring the network to more than 98% of Canadians this year with its ongoing expansion to smaller communities.

Thanks to TeleGeography for the article.

The Ultimate Guide to the Time Zone Clock – Part 1

Sapling's Time Zone Clocks

Welcome to the Ultimate Guide to Sapling’s Time Zone Clock! Don’t know what a Time Zone Clock is or how it can help your facility? We’ve created this guide in order to introduce you to the Time Zone Clock, how it works and how it can help your facility. We split this post into a few parts so we can really dive into the Zone Clock and get to know the ins and outs of this product.

If you have any questions along the way, please leave us a comment or drop us a line on our Contact Page. Ok, let’s get started.

What exactly is a Time Zone Clock?

A Time Zone Clock is a device, consisting of either 5 or 7 clocks on one line, displaying the accurate time in your current location while also showing time zones all around the world.
Ideal for conference centers, meeting rooms, transportation facilities and many other applications, the Time Zone Clock encompasses the main benefits of a Sapling synchronized clock system – all clocks will consistently display the accurate time in the assigned time zone and automatically update for Daylight Saving Time (where observed) without the users ever having to worry about time drift.
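The display logic behind such a clock amounts to rendering one reference time at several offsets. A minimal sketch using fixed GMT offsets (the cities and offsets are illustrative; a real clock would also apply the DST rules mentioned above):

```python
# Render one reference (UTC) time at several fixed GMT offsets,
# the core computation behind a 5-clock time zone display.

from datetime import datetime, timedelta, timezone

ZONES = [("New York", -5), ("London", 0), ("Paris", 1),
         ("Dubai", 4), ("Tokyo", 9)]

def clock_row(utc_now):
    """Return {city: 'HH:MM'} for each configured zone."""
    return {city: (utc_now.astimezone(timezone(timedelta(hours=off)))
                   .strftime("%H:%M"))
            for city, off in ZONES}

now = datetime(2015, 3, 1, 12, 0, tzinfo=timezone.utc)
print(clock_row(now))
# New York 07:00, London 12:00, Paris 13:00, Dubai 16:00, Tokyo 21:00
```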
Installing the Time Zone Clock

The Time Zone Clock can be installed in two ways:

  • As a standalone system – If your facility does not currently have a synchronized clock system, don’t worry, there is still the option to purchase a Time Zone Clock. The Zone Clock does not need to be a part of an existing Sapling clock system in order to operate. As a standalone system, each clock will be IP based and powered by a PoE Injector or a PoE Switch. There is no need for a dedicated master clock in this case. Each clock connects to your facility’s network through a CAT5 or CAT6 cable and has a built-in web interface, which allows a user to program the settings of each clock. Programmable settings include *brightness level, *12/24 hour mode, DST configuration, GMT offset and much more. (*Digital clocks only)
  • Integrated within an existing system – If a facility has an existing Sapling synchronized clock system, we have great news, the Time Zone Clock can easily integrate within that system! Whether your facility currently has a Wireless, Wired, IP or TalkBack System, the Time Zone Clock can easily integrate within the system.

This is just the beginning of the Ultimate Guide to Sapling’s Zone Clock. Stay tuned to the blog for Part 2 and if you have any questions, please don’t hesitate to contact us!

Thanks to Sapling for the article.

Ixia Extends Visibility Architecture with Native OpenFlow Integration

Ixia (Nasdaq: XXIA), a leading provider of application performance and security resilience solutions, announced an update to its ControlTower distributed network visibility platform that includes support for OpenFlow enabled switches from industry leading manufacturers. ControlTower OpenFlow support has at present been interoperability tested with Arista, Dell and HP OpenFlow enabled switches.

“Dell is a leading advocate for standards such as Openflow on our switching platforms to enable rich and innovative networking applications,” said Arpit Joshipura, Vice President, Dell Networking. “With Ixia choosing to support our Dell Networking switches within its ControlTower management framework, Dell can extend cost-effective visibility and our world-class services to our enterprise customers.”

Ixia’s enhanced ControlTower platform takes a unique open-standards based approach to significantly increase scale and flexibility for network visibility deployments. The new integration makes ControlTower the most extensible visibility solution on the market. This allows customers to leverage SDN and seamlessly layer the sophisticated management and advanced processing features of Ixia’s Net Tool Optimizer® (NTO) family of solutions on top of the flexibility and baseline feature set provided by OpenFlow switches.

“Data centers benefit from the power and flexibility that OpenFlow switches can provide but cannot afford to lose network visibility,” said Shamus McGillicuddy, Senior Analyst, Network Management at Enterprise Management Associates. “However organizations can use these same SDN-enabled switches with a visibility architecture to ensure that their existing monitoring and performance management tools can maintain visibility.”

Key highlights of the expanded visibility architecture include:

  • Ease of use, advanced processing functions and single pane of glass configuration through Ixia’s NTO user interface and purpose-built hardware
  • Full programmability and automation control using RESTful APIs
  • Patented automatic filter compiler engine for hassle-free visibility
  • Architectural support for line speeds from 1Gbps to 100Gbps in a highly scalable design
  • Open, standards-based integration with the flexibility to use a variety of OpenFlow enabled hardware and virtual switch platforms
  • Dynamic repartitioning of switch ports between production switching and visibility enablement to optimize infrastructure utilization
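Conceptually, the visibility layer installs OpenFlow-style match/action rules that steer selected production traffic to a monitoring "tool" port. A toy sketch of that classification logic (the field names and port labels are illustrative, not a specific controller API):

```python
# Toy model of the match/action rules a visibility layer installs on
# OpenFlow switches: each rule matches on header fields and forwards
# matching traffic out a monitoring ("tool") port.

RULES = [
    # match criteria                     -> output port for the tool
    ({"ip_proto": 6, "tcp_dst": 443},  "tool-port-1"),  # HTTPS to probe 1
    ({"ip_proto": 17},                 "tool-port-2"),  # all UDP to probe 2
]

def classify(pkt):
    for match, out_port in RULES:
        if all(pkt.get(k) == v for k, v in match.items()):
            return out_port
    return None  # not mirrored; stays on the production path only

print(classify({"ip_proto": 6, "tcp_dst": 443}))    # tool-port-1
print(classify({"ip_proto": 17, "udp_dst": 5060}))  # tool-port-2
print(classify({"ip_proto": 6, "tcp_dst": 80}))     # None
```

A real deployment would push equivalent rules as OpenFlow flow-mods from a controller; the matching semantics are the same.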

“This next-generation ControlTower delivers solutions that leverage open standards to pair Ixia’s field-proven visibility architecture with best of breed switching, monitoring and security platforms,” added Deepesh Arora, Vice President of Product Management at Ixia. “These solutions will provide our customers the flexibility needed to access, aggregate and manage their business-critical networks for the highest levels of application performance and security resilience.”

About Ixia’s Visibility Architecture

Ixia’s Visibility Architecture helps companies achieve end-to-end visibility and security in their physical and virtual networks by providing their tools with access to any point in the network. Regardless of network scale or management needs, Ixia’s Visibility Architecture delivers the control and simplicity necessary to improve the usefulness of these tools.

Thanks to Ixia for the article.

Telus Automatically Charging Users for 50GB if Exceeding Data Allowance

In a notice on its website, Canadian full-service telecoms operator Telus discloses that from 30 March 2015 it will automatically charge residential fixed internet customers who exceed their monthly internet data allowance. From that date, a standard additional 50GB data ‘bucket’ will be automatically billed to users who exceed their threshold, with the first bucket costing CAD5 (USD3.99) and each subsequent bucket charged at CAD10, up to a maximum monthly cost of CAD75. Canadian press noted that the charging model is similar to that currently levied by AT&T in the US, where customers must pay an automatic USD10 for each additional 50GB of data, although AT&T waives this charge for the first two instances that the user exceeds their monthly plan limit. Telus states: ‘Most Telus customers are already on an Internet plan that meets their current needs. Only those that exceed their plan – the heaviest Internet users – will incur an additional charge. If you are one of those customers, you will be notified before being charged.’
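Using the figures in the notice, the monthly overage charge can be computed directly. A sketch (partial buckets are assumed to round up to a full bucket):

```python
# Telus overage model from the notice: first 50GB bucket CAD5, each
# subsequent bucket CAD10, capped at CAD75 per month.

import math

def overage_charge_cad(overage_gb, bucket_gb=50,
                       first_cad=5, next_cad=10, cap_cad=75):
    if overage_gb <= 0:
        return 0
    buckets = math.ceil(overage_gb / bucket_gb)
    return min(first_cad + (buckets - 1) * next_cad, cap_cad)

print(overage_charge_cad(30))    # 1 bucket  -> CAD 5
print(overage_charge_cad(120))   # 3 buckets -> CAD 25
print(overage_charge_cad(1000))  # capped    -> CAD 75
```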

The telco explained further: ‘More than four out of five customers stay within their monthly data allowance. However, a small number of customers regularly exceed their monthly data allowance, and while Telus’ Internet plans have always had thresholds, historical consumer data usage patterns have not required us to apply any fees for those customers who exceed their allowance. As high speed networks have evolved, however, the consumption of video over the Internet has dramatically increased. As a result, in the last 16 months alone our customers’ monthly Internet data usage has more than doubled. Further, much of this consumption is being driven by a minority of our customers – in fact, less than 5% of our Internet customers are consuming 25% of the data on our network in any given month. This has required us to reconsider our approach to ensure we continue offering a smooth and seamless Internet experience for all customers. Accordingly, Telus will begin applying clear and simple usage-based Internet charges for customers who go over the data allowance in their home Internet data plans.’

Thanks to TeleGeography for the article.

JDSU’s Network Instruments Named a Leader in Gartner Magic Quadrant for Second Consecutive Year

Ranking Based on Completeness of Vision and Ability to Execute in Network Performance Monitoring and Diagnostics Market

MILPITAS, Calif., Feb. 23, 2015 – Network Instruments®, a JDSU Performance Management Solution (NASDAQ: JDSU), has been positioned as a Leader in the new Gartner® Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD). In the Gartner, Inc. report, Network Instruments is recognized as a network performance management leader in its category for completeness of vision and ability to execute. The Gartner Magic Quadrant is considered one of the tech industry’s most influential evaluations of enterprise network solutions.

Organizations, specifically network managers, require the visibility and insight NPMD solutions provide to enable staff to anticipate performance constraints before they impact service levels of critical initiatives as well as to guarantee user productivity. As the report notes, “At an estimated $1.1 billion, the NPMD market is a fast-growing segment of the larger network management space ($1.9 billion in 2013), and overlaps slightly with aspects of the application performance monitoring (APM) space ($2.4 billion in 2013).”1

“Measuring and reporting on the performance of the network is crucial to ensuring that performance is maintained at an acceptable level,” wrote Gartner analyst Vivek Bhalla in Technology Overview for Network Performance Monitoring and Diagnostics, September 2014. “NPMD tools are also needed to highlight opportunities to enhance business value for internal end users and external customers through improved application delivery. Finally, NPMD tools enable improved capacity management, thereby lowering capital investment for networking equipment and services.”

“We believe being named a leader by Gartner for both years of the NPMD Magic Quadrant validates our continued commitment to helping enterprise customers thrive in rapidly changing network and cloud environments,” said Bruce Clark, vice president and general manager at JDSU. “Enterprise network teams rely on us to provide in-depth understanding and analytics of the user experience from the network, application, and cloud to speed up problem resolution and optimize performance.”

Network Instruments’ current NPMD solution is the Observer® Performance Management Platform, consisting of the Observer Apex™, Observer Management Server, Observer GigaStor™, Observer Analyzer, Observer Infrastructure, Observer Matrix™ and Observer nTAPs™ products.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

1Gartner, Magic Quadrant for Network Performance Monitoring and Diagnostics, by Gartner Analysts Vivek Bhalla, Colin Fletcher and Gary Spivek, February 10, 2015.

Thanks to JDSU for the article. 

VoIP Quality Test and End User Experience is a Necessity

Virtually all organizations today carry some level of voice and video traffic on their data network. However, even though this technology has been available for over a decade, network engineers are still struggling to understand, predict and control how well this multimedia traffic will behave over their network.

The struggle is especially acute for application traffic destined for remote offices or users.

What network engineers need is a tool that can help them initially plan for new locations and links as well as an on-going monitoring solution which can alert them to QoS degradation as it is happening so they can get ahead of the issue before their users start complaining.

Luckily, such a tool now exists: ANT, the Active Network Tester. The ANT has numerous capabilities, including:

1) It can discover, monitor and manage devices such as firewalls, switches and servers at the remote site for a minimal cost.

2) It can also perform active VoIP and video quality tests across the WAN or across the Internet to one of the nine global NMSaaS test points.

3) The ANT can collect NetFlow at the remote location for even further traffic analysis.
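Active VoIP tests like these derive quality metrics from packet timing. As an illustration of the kind of calculation involved, the interarrival-jitter estimator standardized in RFC 3550 (RTP) can be sketched as follows (the timestamps are illustrative):

```python
# Interarrival jitter as estimated in RFC 3550 (RTP): a running
# average of the variation in packet transit time, a raw input to
# VoIP quality scores.

def rtp_jitter(send_times, recv_times):
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0  # RFC 3550 smoothing factor
        prev_transit = transit
    return jitter

# 20 ms packetization with a 5 ms delay spike on the third packet
send = [0, 20, 40, 60]
recv = [10, 30, 55, 70]
print(round(rtp_jitter(send, recv), 3))  # 0.605
```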

If you would like to find out more about these capabilities, join our webinar, in which John Olson, Director of Technical Services, will give an in-depth overview. The NMSaaS NCCM Module is part of the industry’s first cloud-based, highly scalable network management solution. For the first time, organizations like yours can have best-of-breed network management delivered as a cloud-based Software as a Service (SaaS) solution.

VoIP Quality Test and End User Experience is a Necessity – Free Webinar

Thanks to NMSaaS for the article. 

Top 5 Network Resolutions for 2015

With the approach of the New Year, water cooler conversation has turned to resolutions. Rather than talk about losing weight by abstaining from Grandma’s fruitcake, pass along these useful tips for ultimate network fitness in 2015.

1) Resolve to play better with the other IT teams. Share your packets

Given that every IT initiative runs on the network, it shouldn’t be a huge surprise that the value of a packet’s payload is not limited to the network team. Virtually every IT team from application developers to DevOps could fine-tune their operations with access to packet analysis.

In the past, the challenge was sharing this data with teams using different monitoring tools. With more tools utilizing web reports and APIs, it is now easier to share and integrate information between multiple tools and teams. Sharing is caring.

How to Prepare: Raise your profile in IT by raising the profile of the packet. As you’re storing packets, identify other teams that might have a need for packet intelligence. That application developer in the break room might be interested to know the network impact and chattiness of an internal app they’re coding.

Performance monitoring solutions like the Observer Platform can provide access to packets and simplify the sharing of analysis. Customizing web dashboards and integrating tools via the new RESTful API allows non-network IT teams to consume analysis derived from packets in ways that are meaningful to them.

2) Resolve to prepare for 40 Gb – Coming to a datacenter near you!

A perfect storm is taking place to drive 40 Gb adoption into the datacenter mainstream in 2015. Lower per-port pricing coupled with Cisco’s BiDirectional (BiDi) Optical Technology allows organizations to run 40 Gb traffic on existing 10 Gb cable infrastructure. This will make a compelling case for organizations looking to satisfy rapidly growing bandwidth demands. Most adoption conversations at this point are concerned with implementing 40 Gb links to connect between datacenters or critical core links.

How to Prepare: As you upgrade your network, be sure your monitoring tools can keep pace. When increasing network capacities, verify your monitoring infrastructure can interface with high-speed network connections. Another consideration to lighten the load on existing monitoring infrastructure would be to assess network packet brokers or aggregation switches to load balance large amounts of traffic across multiple monitoring appliances.
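The load-balancing approach mentioned above typically hashes each flow's 5-tuple, so that every packet of a flow reaches the same appliance while distinct flows spread across the pool. A minimal sketch (the appliance names are illustrative):

```python
# Sketch of how a packet broker spreads traffic across monitoring
# appliances: hash each flow's 5-tuple so all packets of a flow land
# on the same appliance (flow coherence), deterministically.

import zlib

APPLIANCES = ["probe-1", "probe-2", "probe-3"]

def appliance_for(src_ip, dst_ip, proto, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return APPLIANCES[zlib.crc32(key) % len(APPLIANCES)]

# Both packets of the same flow map to the same probe
flow = ("10.0.0.1", "10.0.0.2", "tcp", 51000, 443)
print(appliance_for(*flow) == appliance_for(*flow))  # True
```

CRC32 is used here only because it is deterministic across runs; production brokers use similar symmetric hashes in hardware.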

3) Resolve to get serious about Software Defined Networking

New technologies allowing network programmability are finally maturing to the point of being ready for mainstream enterprises to take under serious consideration. With Cisco and other vendors now actively offering a solution, most organizations will move beyond “researching” and into “planning.” SDN is not ready to do everything in your network, but it can help in a number of important ways to pave the path toward better operational integration and automation for IT services.

How to Prepare: This will require upgrades to network gear, at the very least, and may involve adding some new equipment too. In the case of SDN overlays and Network Functions Virtualization, it will be mostly software, but don’t forget that you need compute hardware to run the software. Further, the management picture is not nearly as fully formed as the enabling technology, and these new components must be folded into both planning and operations monitoring/troubleshooting systems and workflows. In 2015, EMA expects to see management tool vendors move from a “BYO API” approach to more pre-tested, out of the box support for SDN underlays, SDN overlays, and NFV technologies.

From: Jim Frey, VP of Research, Hybrid Cloud & Infrastructure Management, Enterprise Management Associates

4) Resolve to keep an open eye on the inside threat

While 2014 was undoubtedly the year of the external breach as highlighted by Home Depot and Sony Pictures, this next year will likely mark a shift to inside threats. As companies respond by strengthening perimeter defenses, it will be easier for criminals to circumvent these security measures by breaking into a company from the inside.

How to Prepare: Even if you don’t have the word “Security” in your title, the network team is the first line of defense against an inside threat. Question unexpected bursts of traffic or unusual behavior from a single client or internal device. Additionally, when thinking about access privileges, do you have a handle on who can access what assets? Does the network team give unlimited access to any file share? Key questions can minimize your exposure to internal threats.
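Questioning "unexpected bursts" can start with a simple statistical baseline. A sketch that flags any internal host whose current traffic exceeds its historical mean by more than three standard deviations (hosts and byte counts are illustrative):

```python
# Flag hosts whose current traffic volume is anomalously high relative
# to their own history: a first-pass inside-threat signal.

from statistics import mean, stdev

def bursting_hosts(history, current, sigmas=3.0):
    """history: {host: [past byte counts]}, current: {host: bytes now}."""
    flagged = []
    for host, samples in history.items():
        mu, sd = mean(samples), stdev(samples)
        if current.get(host, 0) > mu + sigmas * sd:
            flagged.append(host)
    return flagged

history = {"10.1.1.5": [100, 110, 95, 105],
           "10.1.1.9": [200, 190, 210, 205]}
current = {"10.1.1.5": 5000, "10.1.1.9": 212}
print(bursting_hosts(history, current))  # ['10.1.1.5']
```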

From: Jeffrey Barbieri, Cyber Intelligence Investigator for Atrion Communications Resources

5) Resolve to effectively manage big performance data for better troubleshooting

Let’s face it, when we’re talking about performance monitoring in 2015, we’re talking about a big data operation: 10 Gb pipes running at near line-rate, capable of pushing through terabytes of packets in hours. In the dynamic, virtual world where servers and resources can be vMotioned from one physical location to another, and traffic can be captured and analyzed by different appliances, the emerging challenge for engineers isn’t just finding the needle in the haystack, but actually figuring out the location of the haystack.

How to Prepare: In these types of dynamic and distributed environments, it’s critical that your performance monitoring solution can scale. It must present an aggregated view of performance across all your datacenters and intuitively drill to the correct appliance regardless of the server or resource location at the time of the problem. Other functions critical for sifting through big performance data are logical workflows and robust search capabilities for quickly determining the location and scope of the problem, and performing root-cause analysis.

Thanks to Network Instruments for the article.

Network Device Backup is a Necessity with Increased Cyber Attacks

In the past few years, cyber-attacks have become far more prevalent, with data, personal records and financial information stolen and sold on the black market in a matter of days. Major companies such as eBay, Domino’s, the Montana Health Department and even the White House have fallen victim to cyber criminals.

Security Breach

The most recent scandal involved Anthem, one of the country’s largest health insurers, which announced that its systems had been hacked and that over 80 million customers’ records had been stolen. This information ranged from social security numbers and email data to addresses and income details.

Systems Crashing

If hackers can break into your system, they can take it down. Back in 2012, Ulster Bank’s systems crashed; it is still unreported whether it was a cyber-attack, but regardless, there was a crisis. Ulster Bank’s entire banking system went down: people couldn’t withdraw money, pay bills or even pay for food. As a result of its negligence, the bank was forced to pay substantial fines.

This could have all been avoided if they had installed a proper Network Device Backup system.

Why choose a Network Device Backup system?

If your system goes down, you need the easiest and quickest way to get it back up and running. This means having an up-to-date network backup plan in place that enables you to quickly swap out the faulty device and restore its configuration from backup.
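The restore-from-backup workflow can be sketched in a few lines: keep timestamped copies of each device's running config, and pull the newest one when a device is swapped out (paths and naming below are illustrative):

```python
# Minimal sketch of a config backup/restore workflow: timestamped
# copies per device, restore the most recent on device replacement.

import os
import time

def backup_config(root, device, config_text):
    """Save a timestamped copy of the device's running config."""
    os.makedirs(os.path.join(root, device), exist_ok=True)
    name = time.strftime("%Y%m%d-%H%M%S") + ".cfg"
    path = os.path.join(root, device, name)
    with open(path, "w") as f:
        f.write(config_text)
    return path

def latest_config(root, device):
    """Return the most recent backed-up config for the device."""
    folder = os.path.join(root, device)
    newest = max(os.listdir(folder))  # timestamped names sort by date
    with open(os.path.join(folder, newest)) as f:
        return f.read()
```

A production NCCM tool would fetch the running config over SSH/SNMP on a schedule and diff revisions, but the store-and-restore core is this simple.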

Techworld ran a survey and found that 33% of companies do not back up their network device configurations.

The reasons why you should have device configuration backups in place are as follows:

  • Disaster recovery and business continuity.
  • Network compliance.
  • Reduced downtime due to failed devices.
  • Quick reestablishment of device configs.

It’s evident that increased security is a necessity, but backing up your system is even more important. If the 2012 Ulster Bank crash is anything to go by, we should all be backing up our systems. If you would like to learn more about this topic, click below.

Thanks to NMSaaS for the article.


Magic Quadrant for Network Performance Monitoring and Diagnostics

Network professionals support an increasing number of technologies and services. With adoption of SDN and network function virtualization, troubleshooting becomes more complex. Identify the right NPMD tools to detect application issues, identify root causes and perform capacity planning.

Market Definition/Description

Network performance monitoring and diagnostics (NPMD) enable network professionals to understand the impact of network behavior on application and infrastructure performance, and conversely, via network instrumentation. Other users and use cases exist, especially because these tools provide insight into the quality of the end-user experience. The goal of NPMD products is not only to monitor network components to facilitate outage and degradation resolution, but also to identify performance optimization opportunities. This is conducted via diagnostics, analytics and debugging capabilities to complement additional monitoring of today’s complex IT environments. At an estimated $1.1 billion, the NPMD market is a fast-growing segment of the larger network management space ($1.9 billion in 2013), and overlaps slightly with aspects of the application performance monitoring (APM) space ($2.4 billion in 2013).

Magic Quadrant

Magic Quadrant for Network Performance Monitoring and Diagnostics

Vendor Strengths and Cautions- Highlights


Ixia

Ixia was founded in 1997, specializing in network testing. Ixia entered the NPMD market through acquisition of Net Optics in 2013 and its Spyke monitoring product. The tool is aimed at small or midsize businesses (SMBs), although it can support gigabit and 10G environments. The Spyke tool has been subject to an end of life (EOL) announcement, with end of sale (EOS) beginning 31 October 2014, and EOL beginning 31 October 2017.

Given Ixia’s focus on the network packet broker (NPB) space, it can cover NPMD and NPB use cases, something only a few other vendors can claim. Ixia launched a new NPB platform, the Network Tool Optimizer (NTO) 7300, in 1H14, which provides large-scale chassis design and additional modules that help offload some NPMD capabilities. The goal of these modules is optimal use of the existing end-user NPMD tool. Modules include the Ixia Packet Capture Module (PCM), with 14GB of triggered packet capture at 40GbE line rates and 48 ports of NPB, and the Ixia Application and Threat Intelligence (ATI) Processor, which provides extensive processing power in addition to 48 ports of NPB. The ATI Processor requires a subscription at an additional recurring cost. The new 7300 product and platform has no current Gartner-verified customer references. Fundamental VoIP, application visibility and end-user experience metrics are standard capabilities. While the tool provides packet inspection and application visibility, product updates have not been observed for some time and the road map remains unclear.

Ixia’s NPMD revenue is between $5 million and $10 million per year. Ixia did not respond to requests for supplemental information and/or to review the draft contents of this document. Gartner’s analysis for this vendor is therefore based on other credible sources, including previous vendor briefings and interactions, the vendor’s own marketing collateral, public information and discussions with more than 200 end users who either have evaluated or deployed each NPMD product.


Strengths

  • Ixia’s ATI Processor provides visibility of, and rules to classify, traffic based on application types and performance of applications.
  • Ixia has significant R&D resources. Of the 1,800 staff, more than 800 are engineering- and R&D-focused.
  • Ixia’s market leadership in NPB allows it to leverage scalable hardware design with software capabilities to enable NPMD and additional troubleshooting needs by offloading some of these requirements from other more comprehensive NPMD tools.


Cautions

  • With the EOS of the Spyke and Net Optics appTap platforms, Ixia appears to have discontinued investments in pure NPMD capabilities.
  • Since the launch of the NTO 7300 platform in early 2014, there has been limited traction due to existing NPB investments and high cost for the hardware buy-in.
  • Financial reporting restatements and filing delays, combined with the resignation of two senior corporate officers, may hinder overall strategic focus and vision.

JDSU (Network Instruments)

2014 saw the completion of JDSU’s acquisition of Network Instruments, its subsequent integration into JDSU’s Network and Service Enablement business segment, the recent release of updates to its NPMD offering, and the announcement of plans to separate JDSU into two entities in 2015. While this action could provide additional efficiencies and focus in the future, the preceding business integration and sales enablement efforts are only now beginning to bear fruit and will have to shift once more in response to the coming changes. The Network Instruments unit has followed a well-established, vertically integrated technology development strategy, designing and manufacturing most of its product components and software. An OEM relationship with CA Technologies, which had Network Instruments providing its GigaStor products to CA customers, devolved into a referral relationship, but no meaningful challenges have been voiced by Gartner clients as a result of this. Two key parts of the NPMD solution have new product names (Observer Apex and Observer Management Server) and a new, modern UI that is a significant improvement. Network Instruments’ current NPMD solution set is now part of the Observer Performance Management Platform 17, and includes Observer Apex, Observer Analyzer, Observer Management Server, Observer GigaStor, Observer Probes and Observer Infrastructure (v.4.2).

JDSU’s (Network Instruments) NPMD revenue is between $51 million and $150 million per year.


Strengths

  • Data- and process-level integration workflows are well-thought-out across the solution’s component products.
  • Network Instruments’ recent addition of a network packet broker product (Observer Matrix) to its offerings may appeal to small-scale enterprises looking for NPMD and NPB capabilities from the same vendor.
  • Packet capture and inspection capability (via GigaStor) is well-regarded by clients.


Cautions

  • While significant business integration activities have not, to date, had a perceptible impact on support or development productivity, this process is ongoing and now part of a larger business separation action that could result in challenges in the near future.
  • The NPMD solution requires multiple components with differing user interfaces that are not consistent across products.
  • The solution is focused on physical appliances, with limited options beyond proprietary hardware.

To learn more, download the full report here.

Thanks to Gartner for the article.