Webinar: Best Practices for NCCM

Most networks today have a “traditional” IT monitoring solution in place that provides alarming for devices, servers, and applications. But as the network evolves, so do its complexity and security risks, and it now makes sense to formalize the processes, procedures, and policies that govern access and changes to these devices. Vulnerability and lifecycle management also play an important role in maintaining the security and integrity of network infrastructure.

Network Configuration and Change Management (NCCM) is the “third leg” of IT management, with traditional Performance Management and Fault Management (PM and FM) being the first two. The focus of NCCM is to ensure that, as the network grows, policies and procedures are in place to provide proper governance and eliminate preventable outages.

Eliminating misapplied configurations can dramatically reduce network performance and security issues: by some estimates, from 90% down to 10%.

Learn about the best practices for Network Configuration and Change Management to both protect and report on your critical network device configurations:

  1. Enabling of Real-Time Configuration Change Detection
  2. Service Design Rules Policy
  3. Auto-Discovery Configuration Backup
  4. Regulatory Compliance Policy
  5. Vendor Default and Security Access Policies
  6. Vulnerability Optimization and Lifecycle Announcements

Date: October 28th
Time: 2:00pm Eastern

Register for the webinar NOW: http://hubs.ly/H01gB720

SDN/NFV – From Theory to Praxis with Infosim® StableNet®

InterComms talks to Marius Heuler, CTO Infosim®, about Infosim® StableNet® and the management and orchestration of SDN and NFV environments

Marius Heuler has more than 15 years of experience in network management and optimization. As CTO and founding member of Infosim®, he is responsible for leading the Infosim® technical team in architecting, developing, and delivering StableNet®. He graduated from the University of Würzburg with a degree in Computer Science, holds several Cisco certifications, and has subject matter expert knowledge in various programming languages, databases and protocol standards. Prior to Infosim®, Marius held network management leadership positions and performed project work for Siemens, AOK Bavaria and Vodafone.

Q: The terms SDN and NFV recently have been on everybody’s lips. However, according to the critics, it is still uncertain how many telcos and enterprises use these technologies already. What is your point of view on this topic?
A: People tend to talk about technologies and ask for the support of a certain interface, service, or technology. Does your product support protocol X? Do you offer service Y? What about technology Z?

Experience shows that when looking closer at the actual demand, it is often not the particular technology, interface, or service people are looking for. What they really want is a solution for their particular case. That is why I would rather not expect anybody to start using SDN or NFV as an end in itself. People will start using these technologies once they see that it is the best (and most cost-efficient) way to relieve their pain points.

Andrew Lerner, one of the Gartner Blog Network members, recently gave a statement pointing in the exact same direction, saying that Gartner won’t publish an SDN Magic Quadrant “because SDN and NFV aren’t markets. They are an architectural approach and a deployment option, respectively.”

Q: You have been talking about use cases for SDN and NFV. A lot of these use cases are also being discussed in different standardization organizations or in research projects. What is Infosim®’s part in this?
A: There are indeed a lot of different use cases being discussed and, as you mentioned, a lot of different standardization and research activities are in progress. At the moment, Infosim® is committed to this area in various ways: We are a member of TM Forum and recently also joined the ETSI ISG NFV. Furthermore, we follow the development of different open source activities, such as the OpenDaylight project, ONOS, or OPNFV, just to name a few. Besides this, Infosim® is part of several national and international research projects in the area of SDN and NFV, where we work together with other subject matter experts and researchers from academia and industry. Topics cover, among others, the operation and management of SDN and NFV environments as well as security aspects. Last but not least, Infosim® is also in contact with various hardware and software vendors on these topics, looking equally at open source and proprietary solutions.

Q: Let us talk about solutions then: With StableNet® you are actually quite popular and successful in offering a unified network management solution. How do SDN and NFV influence the further development of your offering?
A: First of all, we are proud to be one of the leading manufacturers of automated Service Fulfillment and Service Assurance solutions. EMA™ rated our solution the most definitive Value Leader in the EMA™ Radar for Enterprise Network Availability Monitoring Systems in 2014. We do not see ourselves as the next company to develop and offer its own SDN controller or cloud computing solution. Our intent is rather to bring our well-known strength in unified network management to the SDN/NFV space as well. This includes topics like Service Assurance, Fault Management, Configuration and Provisioning, Service Modelling, etc.

Q: Are there any particular SDN controller or cloud computing solutions you can integrate with?
A: There is a wide range of different SDN controllers and cloud computing solutions that are currently of general interest. In its current SDN controller report, SDxCentral gave an overview and comparison of the most common open source and proprietary SDN controllers. None of these controllers can be named a definite leader. Likewise in the NFV area, the recent EMA™ report on Open Cloud Management and Orchestration showed that besides the commonly known OpenStack, there are many other cloud computing solutions that enterprises are looking at and considering working with.

These developments remind me of something that, with my experience in network management, I have known for over a decade now: even in legacy environments, there have always been competing standards. Despite years of standardization activity by various parties, often none of the competing standards became the sole winner and rendered all other interfaces or technologies obsolete. In fact, there is a broad range of technologies and interfaces that a management system has to support.

This is one of the strengths that we offer with StableNet®. We currently support over 125 different standardized and vendor-specific interfaces and protocols in one unified network management system. Besides this, with generic interfaces both for monitoring and configuration purposes we can easily integrate with any structured data source by the simple creation of templates rather than the complicated development of new interfaces. This way, we can shift the main focus of our product and development activities to the actual management and orchestration rather than the adaption to new data sources.

Q: Could you provide some examples here?
A: We continuously work on extending StableNet® with innovative new features to further automate the business processes of our customers and to simplify their daily work. Starting from Version 7, we have extended our existing integration interfaces with a REST API to further ease integration with third-party products. With Dynamic Rule Generation, Distributed Syslog Portal, and Status Measurements, we offer the newest technologies for efficient alarming and fault management. Our StableNet® Embedded Agent (SNEA) allows for ultra-scalable, distributed performance monitoring as well as for the management of IoT infrastructures. Being part of our unified network management solution, all of these functionalities, including the ultra-scalable and vendor-agnostic configuration management, can equally be used in the context of SDN and NFV. A good way to keep up to date with our newest developments is our monthly Global Webinar Days; I would really recommend having a look at those.
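
To make the idea of REST-based integration concrete, here is a minimal sketch of how a third-party tool might poll such an API for inventory data. The base URL, endpoint path, credentials, and response fields are illustrative assumptions for this sketch, not StableNet®’s documented API:

```python
import requests

# Hypothetical base URL and credentials; consult the StableNet REST API
# documentation for the real paths, authentication, and response schema.
BASE = "https://stablenet.example.com/api"

session = requests.Session()
session.auth = ("api-user", "api-password")  # placeholder credentials

# Pull a device inventory into a third-party product (assumed endpoint).
resp = session.get(f"{BASE}/devices", timeout=10)
resp.raise_for_status()
for device in resp.json():
    print(device.get("name"), device.get("ip"))
```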

Q: As a last question, since we have the unique chance to directly talk with the CTO of Infosim®, please let us be a little curious. What key novelties can people expect to come next from Infosim®?
A: There are of course many things that I could mention here, but the two areas that will probably have the most significant impact on management and orchestration are our new service catalog and the new tagging concept. With the service catalog, management moves from a rather device- or server-based perspective to a holistic service-based view. This tackles both the monitoring and the configuration perspective and can significantly simplify and speed up common business processes. This is of course also related to our new tagging concept.

This new approach is a small revolution in the way data can be handled for management and orchestration. We introduce the possibility of an unlimited number of customizable tags for each entity, be it a device, an interface, or an entire service, and combine this with automated relations and inheritance of properties between the different entities. Furthermore, the entities can be grouped in an automated way according to arbitrary tag criteria. This significantly extends the functionality, usability, and also the visualization possibilities.
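
To illustrate the general idea of tags with inheritance and automated grouping, here is a simplified model. It is a sketch of the concept only, not Infosim®’s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A device, interface, or service carrying free-form tags."""
    name: str
    tags: dict = field(default_factory=dict)
    parent: "Entity | None" = None  # relation along which tags are inherited

    def effective_tags(self):
        inherited = self.parent.effective_tags() if self.parent else {}
        return {**inherited, **self.tags}  # own tags override inherited ones

def group_by(entities, key):
    """Group entities automatically by an arbitrary tag criterion."""
    groups = {}
    for e in entities:
        groups.setdefault(e.effective_tags().get(key), []).append(e.name)
    return groups

core = Entity("core-router-1", {"site": "Frankfurt", "role": "core"})
ifc = Entity("Gi0/1", {"role": "uplink"}, parent=core)  # inherits "site"
svc = Entity("vpn-service-A", {"site": "Munich"})
print(group_by([core, ifc, svc], "site"))
# {'Frankfurt': ['core-router-1', 'Gi0/1'], 'Munich': ['vpn-service-A']}
```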

Thanks to InterComms for the article.

Don’t Be Lulled to Sleep with a Security Fable. . .

Once upon a time, all you needed was a firewall to call yourself “secure.” But then, things changed. More networks are created every day, every network is visible to the others, and they connect with each other all the time, no matter how far away or how unrelated.

And malicious threats have taken notice . . .

As the Internet got bigger, anonymity got smaller. It’s impossible to go “unnoticed” on the Internet now. Everybody is a target.

In today’s network landscape, every network is under threat of attack all the time. The network “security perimeter” has expanded in reaction to new attacks, new breeds of hackers, more regions coming online, and emerging regulations.

Security innovation tracks threat innovation by creating more protection—but this comes with more complexity, more maintenance, and more to manage. Security investment rises with expanding requirements. Just a firewall doesn’t nearly cut it anymore.

Next-generation firewalls, IPS/IDS, antivirus software, SIEM, sandboxing, DPI: all of these tools have become part of the security perimeter in an effort to stop traffic from getting in (and out) of your network. And they are overloaded, and overloading your security teams.

In 2014, there were close to 42.8 million cyberattacks (roughly 117,339 attacks each day) in the United States alone. These days, the average North American enterprise fields around 10,000 alerts each day from its security systems, far more than its IT teams can possibly process, according to a Damballa analysis of traffic.

Your network’s current attack surface is huge. It is the sum of every access avenue an attacker could use to enter your network (or take data out of your network). Basically, every connection to and/or from anywhere.

There are two types of traffic that hit every network: traffic worth analyzing for threats, and traffic not worth analyzing for threats, which should be blocked immediately before any security resource is wasted inspecting or following up on it.

Anything that filters out traffic that is known to be good or known to be bad, so it does not need to go through security system screening, reduces the load on your security staff. With a reduced attack surface, your security resources can focus on a much tighter band of information and not get distracted by non-threatening (or obviously threatening) noise.
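
As a minimal sketch of this pre-filtering idea (the address ranges below are placeholders; real deployments draw on threat-intelligence feeds and their own trusted infrastructure lists):

```python
import ipaddress

# Placeholder lists; in practice these come from threat feeds and from
# your own "known good" infrastructure ranges.
KNOWN_BAD = [ipaddress.ip_network("203.0.113.0/24")]
KNOWN_GOOD = [ipaddress.ip_network("198.51.100.0/24")]

def prefilter(src_ip):
    """Decide 'drop', 'pass', or 'inspect' before any deep analysis runs."""
    ip = ipaddress.ip_address(src_ip)
    if any(ip in net for net in KNOWN_BAD):
        return "drop"     # block immediately; waste no inspection cycles
    if any(ip in net for net in KNOWN_GOOD):
        return "pass"     # trusted source; skip deep inspection
    return "inspect"      # only this band reaches IPS/sandbox/SIEM

for ip in ("203.0.113.9", "198.51.100.5", "192.0.2.77"):
    print(ip, "->", prefilter(ip))
```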

Thanks to Ixia for the article.

5 Reasons Why You Should Include LAN Switches in Your NCCM Scope

We’ve been doing a lot of blogging around here lately about NCCM and the importance of having an automated configuration and change management system. We’ve even published a best practices guide for NCCM. One of the main points of any NCCM system is having consistent and accurate configuration backups of all of your “key” devices.

When I ask network managers to name their key devices, they generally start with WAN/Internet routers and firewalls. This makes sense, of course, because in a modern large-scale network, connectivity (WAN/Internet routers) and security (firewalls) tend to get most of the attention. However, we think it is important not to overlook the core and access switching layers. After all, without that “front line” connectivity, internal users cannot get out to the WAN/Internet in the first place.

With that in mind, today’s blog offers up 5 reasons why you should include LAN switches in your NCCM scope.


1. Switch Failure

LAN switches tend to be some of the most utilized devices in a network. They also don’t generally come with the top-quality hardware and redundant power supplies that core devices have. In many cases, they may also be located in less than pristine environments. Dirty manufacturing floors, dormitory closets, remote office kitchens: I have seen access switches in all of these places. When you combine a heavy workload with tough conditions and less expensive parts, you have a recipe for devices that will fail at a higher rate.

So, when the time comes to replace or upgrade a switch, having its configuration backed up, together with a system that can automate the provisioning of the new device, can be a real time and workload saver. Just put the IP address and some basic management information on the new device, and the NCCM tool should be able to take care of the rest in mere minutes.
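
As a rough sketch of what that restore step can look like, here is an example using the open-source Netmiko library. The device details, credentials, and backup file path are illustrative; an NCCM tool would automate all of this for you:

```python
from netmiko import ConnectHandler

# Hypothetical replacement switch reachable at a temporary management IP.
device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",
    "username": "admin",
    "password": "secret",
}

with ConnectHandler(**device) as conn:
    # Push the last-known-good backup taken by the NCCM system.
    output = conn.send_config_from_file("backups/access-sw-17.cfg")
    conn.save_config()  # write running-config to startup-config
    print(output)
```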

2. User Tracking

As the front-line connectivity device for the majority of LAN users, the switch is the best place to track down user connections. You may want to know where a particular user is located, or maybe you are trying to troubleshoot an application performance issue; whatever the cause, it is important to have that connectivity data available to the IT department. NCCM systems may use layer 2 management data from CDP/LLDP, as well as other techniques, to gather this information. A good system will allow you to search for a particular IP/MAC/DNS name and return connectivity information such as which device and port it is connected to, as well as when it was first and last seen on that port. This data can also be used to draw live topology maps, which offer a great visualization of the network.
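
A minimal sketch of the underlying lookup on a single switch, again using Netmiko (credentials and addresses are illustrative; an NCCM system performs this at scale and correlates results across devices):

```python
import re
from netmiko import ConnectHandler

def find_mac(switch_ip, mac):
    """Return (vlan, port) pairs where a MAC appears in a switch's table."""
    device = {"device_type": "cisco_ios", "host": switch_ip,
              "username": "admin", "password": "secret"}
    with ConnectHandler(**device) as conn:
        out = conn.send_command(f"show mac address-table address {mac}")
    # Typical IOS row:  10    0011.2233.4455    DYNAMIC     Gi1/0/7
    rows = re.findall(r"^\s*(\d+)\s+([0-9a-f.]+)\s+\w+\s+(\S+)", out,
                      re.IGNORECASE | re.MULTILINE)
    return [(vlan, port) for vlan, _, port in rows]

print(find_mac("192.0.2.11", "0011.2233.4455"))
```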

3. Policy Checking

Another area where the focus tends to be on “gateway” devices such as WAN routers and firewalls is policy checking. While those devices certainly should have lots of attention paid to them, especially in the area of security policies, we believe it is equally important not to neglect the access layer when it comes to compliance. In general terms, there are two aspects of policy checking that need to be addressed on these devices: QoS policies and regulatory compliance policies.

The vast majority of VoIP and Video systems will connect to the network via a traditional LAN switch. These switches, therefore, must have the correct VLAN and QoS configurations in order to accurately forward the traffic in the appropriate manner so that Quality of Service is maintained.

If your organization is subject to regulatory compliance standards such as PCI or HIPAA, then these regulations apply to all devices and systems that connect to, or pass, sensitive data.

In both of these cases, it is incredibly important to ensure policy compliance on all of your devices, even the ones on the “edge” of your network.
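
As a toy illustration of config-level policy checking, the snippet below scans backed-up configuration files against a few generic Cisco-style rules. The rules and file locations are examples only, not a complete compliance policy:

```python
import pathlib
import re

# Each rule: (description, pattern that must / must not appear in a config).
REQUIRED = [
    ("SSH only on VTY lines", re.compile(r"transport input ssh")),
    ("Login banner present", re.compile(r"banner (motd|login)")),
    ("AAA enabled", re.compile(r"aaa new-model")),
]
FORBIDDEN = [
    ("Telnet allowed on VTY", re.compile(r"transport input (telnet|all)")),
]

for cfg in pathlib.Path("backups").glob("*.cfg"):
    text = cfg.read_text()
    for desc, rx in REQUIRED:
        if not rx.search(text):
            print(f"{cfg.name}: FAIL (missing) {desc}")
    for desc, rx in FORBIDDEN:
        if rx.search(text):
            print(f"{cfg.name}: FAIL (present) {desc}")
```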

4. Asset Lifecycle Management

Especially in larger and more spread-out organizations, just understanding what you have can be a challenge. At some point (and always when you are least prepared for it) you will get the “What do we have?” question from a manager. An NCCM system is exactly the right tool to answer this question. Even though NCCM is generally considered the tool for change, it is equally the tool for information. Only devices that are well documented can be managed, and that documentation is best supplied through an automated inventory discovery system. Likewise, when it is time for a technology refresh, or even the build-out of a new location or network, understanding the current state of the existing network is the first step toward building an effective plan for the future.

5. New Service Initiatives

Whether you are a large IT shop or a service provider, new applications and services are always coming. In many cases, they will require widespread changes to the infrastructure. The change may be small or large, but if it needs to be implemented on a number of systems at the same time, it will require coordination and automation to get it done efficiently and successfully. In some instances, this will only require changes to the core, but in many cases it will require changes to the switch infrastructure as well. This is what NCCM tools were designed to do, and there is no reason you should be handcuffed in your efforts to implement change just because you haven’t added all of your devices to the NCCM platform.

Networks are complicated systems of many individual components spread across various locations, with interdependencies that can be hard to comprehend without the help of network management tools. While the temptation may be to focus on the core systems, we think it is critical to view all parts, even the underappreciated LAN switch, as equal pieces of the puzzle that should not be overlooked when implementing an NCCM system.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.

5 Perks of Network Performance Management

Network performance management is something that virtually every business needs, but not something that every business is actively doing, or is even aware of. And why should they be?

While understanding the technical side of things is best left to the IT department, understanding the benefits of a properly managed network is something that will help get the business managers on board, especially when good performance management solutions might be a cost that hadn’t been considered. So what are the benefits?
1. Avoiding downtime – Downtime across an entire network is going to be rare, but downtime in small areas of the network is possible if they get overloaded. Downtime of any kind is just not something that a business can tolerate, for a few reasons:

  • it leaves that area of the network unmonitored, which is a serious security issue
  • shared files won’t be accessible, nor will they be updated as users save them. This will lead to multiple versions of the same file, and quite a few headaches when the network is accessible again
  • downtime that affects customers is even worse, and can result in lost revenue or negative customer experiences

2. Network speed – This is one of the most important and easily quantified aspects of managing network flow. It affects every user on the network constantly, and anything that slows down users means either more work hours or delays. Obviously, neither of these is a good problem to have. Whether it’s uploading a file, sending a file to a coworker, or sending a file to a client, speed is of paramount importance.

3. Scalability – Almost every business wants to grow, and nowhere is that more true than in the tech sector. As the business grows, the network will have to grow with it to support more employees and clients. By managing the performance of the network, it is very easy to see when or where it is being stretched too thin or overwhelmed. As performance degrades, it is easy to set thresholds that show when the network needs to be upgraded or enlarged.

4. Security – Arguably the most important aspect of network management, even though it might not be thought of as a performance aspect. An unsecured network is worse than a useless network, and data breaches can ruin a company. So how does this play into performance management?

By monitoring NetFlow performance, it’s easy to see where the most resources are being used. Many security attacks drain resources, so resource spikes in unusual areas can point to a security flaw. With the proper software, these issues can be not only monitored, but also recorded and corrected.
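
As an illustrative sketch (assuming flow records have been exported to a CSV file with src_ip, bytes, and start columns, an assumption made for this example), a crude volume-spike detector might look like this:

```python
import pandas as pd

# Assumed export: one row per flow with source IP, byte count, start time.
flows = pd.read_csv("flows.csv", parse_dates=["start"])

# Total bytes per source per 5-minute bucket.
usage = (flows.set_index("start")
              .groupby("src_ip")["bytes"]
              .resample("5min").sum())

# Flag sources whose 5-minute volume exceeds 10x their own median.
for src, series in usage.groupby(level=0):
    series = series.droplevel(0)  # keep only the time index
    baseline = series.median()
    spikes = series[series > 10 * max(baseline, 1)]
    for ts, b in spikes.items():
        print(f"{src}: {b} bytes in bucket {ts} (baseline {baseline:.0f})")
```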

5. Usability – Unfortunately, not all employees have a working knowledge of how networks operate. In fact, as many in IT support will attest, most employees aren’t tech savvy. However, most employees will need to use the network as part of their daily work. This conflict is why usability is so important. The easiest way to minimize training costs with any network management program is to ensure that it is as user-friendly as possible.

The fanciest, most impressive network performance management system isn’t worth anything if no one knows how to use and optimize it properly. Even if the IT department has no issues with it, the reports and general information should be as easy to decipher as possible.

Is your network as optimized as it could be? Are you able to monitor the network’s performance and flow, or do network forensics to determine where issues are? Don’t try to tackle all of this on your own; contact us and let us help you support your business with the best network monitoring for your specific needs.

Thanks to NetFlow Auditor for the article.

Remote Location Testing? Transmit WiFi Traffic at a Remote Site for 12 Hours with LANforge WiFIRE

CT523-328-2ac-1n-bat LANforge WiFIRE 802.11a/b/g/n/ac 3 radio WiFi Traffic Generator (with Battery) Supporting 328 Virtual STA Interfaces

The CT523-328-2ac-1n-bat wireless traffic generator is an excellent choice for testing Access Points and other WiFi networks. The CT523-328-2ac-1n-bat uses a modified wireless driver for WiFi NICs based on the Atheros chipset. The ath9k (a/b/g/n) chipset NICs can support up to 200 stations per radio. The ath10k (a/b/g/n/ac) chipset NICs can support up to 64 stations per radio. Each of the Virtual Stations has its own IP address, IP port space, MAC address, and routing table. The Virtual Stations can be assigned to communicate with a particular Access Point, use a particular SSID, and use Open or WPA/WPA2 authentication. More advanced 802.1X authentication is also included. Each radio can be configured independently of the others. Transmit power and channel/frequency are configured on a per-radio basis. Most other settings are configurable per virtual station.

There are two ath10k (a/b/g/n/ac) radios and one ath9k (a/b/g/n) WiFi radio per CT523-328-2ac-1n-bat, and multiple LANforge systems can be clustered together for even more realistic radio interference patterns and increased traffic generation capability.

All virtual stations on the same radio must be on the same frequency, but as long as the protocol supports that frequency, multiple protocols can be used concurrently. For instance, if the radio is configured for a 2.4GHz channel, the stations can be /b, /g, /n, or /ac. If the radio is on a 5GHz channel, the stations can be /a, /n, or /ac. The bandwidth can be configured for all protocols. For 802.11n, configuring the MCS rates also determines the number of streams (1×1, 2×2, 3×3, etc.).

NOTE: ath10k 802.11ac radios and stations may be more limited in rate selection and other features for the initial release.

The Virtual Stations may be configured with all of the virtual interfaces on the same subnet, or on different subnets, depending on the testing requirements. When used with something like VoIP, this allows all of the VoIP calls to use the standard IP ports (with one call per virtual interface).

The CT523-328-2ac-1n-bat has no fans and is silent. It has 9 antennas. It will fit into a small travel bag or briefcase for easy portability. No additional hardware or software is required, but it is suggested that you manage the system using the LANforge GUI on a separate machine. The CT523-328-2ac-1n-bat can also be managed over a serial console in text mode, or through a directly connected monitor, mouse, and keyboard.

Quick Start Guide

  1. Connect Management Ethernet port to Management network or management PC. If connecting directly to a PC, an Ethernet cross-over cable should be used.
  2. Connect eth1 wired Ethernet interface to wired Ethernet interface on the AP or network under test. This usually is considered the ‘server’ side of the network.
  3. The Client side of the network will be the Virtual Stations configured on the CT523-328-2ac-1n WiFi NIC(s).
  4. Connect power brick to standard US or European AC power source. If using external battery pack, then connect to that instead.
  5. Install the LANforge-GUI on a separate management PC or Laptop. Windows and Linux GUIs are supported: Select the correct one from the CDROM or Candela Technologies Download page and install it. The CT523-328-2ac-1n appliance has a web server that also provides the LANforge GUIs.
  6. The CT523-328-2ac-1n should now boot. If DHCP is enabled on the Management network, the CT523-328-2ac-1n will automatically acquire an IP address. If DHCP is not available, the IP address will be set to 192.168.1.101 by the LANforge scripts.
  7. Start the LANforge-GUI on the management PC and click the ‘Discover’ button. It should find the CT523-328-2ac-1n appliance and add the IP address to the drop-down box in the Connect widget. Press ‘Connect’ and you will be connected to the CT523-328-2ac-1n.
  8. Select the Port Mgr tab in the GUI. Double-click on the device called ‘wiphy0’. This is the radio device, and should be configured for the correct channel, country code, etc. Next, select one or more of the Virtual Station interfaces and click ‘Modify’. Enter the correct IP address information, SSID, and WEP or WPA/WPA2 key (if enabled). After applying these changes, the Virtual Station interface should associate with the AP and be ready to send traffic. You may create up to 328 Virtual Station interfaces per CT523-328-2ac-1n with the ‘Create’ button.
  9. Once the interfaces are configured correctly, you can click on the Layer 3, VOIP/RTP and other LANforge-FIRE related GUI tabs and configure/modify/start/stop particular traffic patterns that utilize the virtual stations and wired Ethernet interface. In most cases, you will want one of the FIRE endpoints to be on the wired interface and the other to be on the WiFi Virtual Station interface. It is also valid to generate traffic between two Virtual Station interfaces. The GUI Plugins menu (and right-click on some tables) provides some plugins to do automated testing and reporting. Contact support if you have suggestions for improvements.
  10. Any GUI modifications take place immediately after you click ‘Submit’.

LANforge WiFIRE Related Images
Virtual Station Configuration Screen

Layer 3 (Ethernet, UDP, TCP) Connections

Layer 3 Create/Modify Screen

Software Features

  • Supports real-world protocols:
    • Layer 2: Raw-Ethernet.
    • 802.1Q VLANs.
    • PPPoE: Integrated PPPoE support.
    • Layer 3: IPv4, IPv6, UDP/IP, IGMP Multicast UDP, TCP/IP.
    • Layer 4: FTP, HTTP, HTTPS, TFTP, SFTP, SCP
    • 802.11a/b/g/n Wireless Station (up to 200 per machine).
    • 802.11a/b/g/n/ac Wireless Station (up to 128 per machine).
    • Layer 4: TELNET, PING, DNS, SMTP, NMAP (via add-on script).
    • File-IO: NFSv3, NFSv4, CIFS, iSCSI.
  • Supports up to 1000 concurrent TCP connections with base license package.
  • The CT523-328-2ac-1n-bat is able to push up to 345Mbps through an AP, depending on the protocol mix, wireless mode and environment, and the speed of the network under test. Supports at least 60 VoIP (SIP, RTP) calls if appropriate licenses are purchased. When the two ath10k (a/b/g/n/ac) radios and the one ath9k (a/b/g/n) radio are configured on different channels, combined maximum upload speed exceeds 625Mbps. (NOTE: Tested with 802.11a/b/g/n NICs; the ath10k a/b/g/n/ac chipset NICs have not been performance tested yet.) More powerful systems are also available.
  • Supports real-world compliance with ARP protocol.
  • Supports ToS (QoS) settings for TCP/IP and UDP/IP connections.
  • Uses publicly available Linux and Windows network stacks for increased standards compliance.
  • Utilizes libcurl for FTP, HTTP and HTTPS (SSL), TFTP and SCP protocols.
  • Supports file system test endpoints (NFS, CIFS, and iSCSI file systems, too!). File system mounts can use the virtual interface feature for advanced testing of file server applications.
  • Supports custom command-line programs, such as telnet, SMTP, and ping.
  • Comprehensive traffic reports include: Packet Transmit Rate, Packet Receive Rate, Packet Drop %, Transmit Bytes, Receive Bytes, Latency, Jitter, various Ethernet driver level counters, and much more.
  • Supports generation of reports that are ready to be imported into your favorite spreadsheet.
  • Allows packet sniffing and network protocol decoding with the integrated Wireshark protocol sniffer.
  • GUI runs as a Java application on Linux, Solaris and Microsoft Operating Systems (among others).
  • GUI can run remotely, even over low-bandwidth links to accommodate the needs of the users.
  • Central management application can manage multiple units, tests, and testers simultaneously.
  • Includes easy built-in scripting for iterating through rates and packet sizes, with automated reporting. Also supports a scriptable command line interface (telnet) which can be used to automate test scenarios; see the connection sketch after this list. Perl libraries and example scripts are provided!
  • Automatic discovery of LANforge data generators simplifies configuration of LANforge test equipment.
  • LANforge traffic generation/management software is supported on Linux, Solaris and MS Windows.
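
For example, opening the scriptable telnet CLI from Python might look like the sketch below. The port number and command shown are placeholders; consult the bundled Perl libraries and Candela’s CLI documentation for the actual command set. (Python’s telnetlib module was removed in Python 3.13, so this assumes Python 3.12 or earlier.)

```python
import telnetlib  # stdlib through Python 3.12

# Placeholder management IP, CLI port, and command.
HOST, PORT = "192.168.1.101", 4001

tn = telnetlib.Telnet(HOST, PORT, timeout=5)
tn.write(b"help\n")  # placeholder command to list what the CLI offers
print(tn.read_until(b">>", timeout=5).decode(errors="replace"))
tn.close()
```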

Hardware Specification

  • High-End Appliance with no fans.
  • Operating System: Fedora Linux with customized 64-bit Linux kernel.
  • Two 1Gbps Ethernet ports; room for three WiFi NICs.
  • Intel i7-2655LE 2.2 GHz processor.
  • RJ45 Serial console (115200 8 N 1) for console management & initial configuration.
  • VGA/DVI-D, USB ports for desktop usage.
  • 8 GB RAM.
  • 40 GB Solid State Hard Drive.
  • Larger storage drives available.
  • 9-30v 4AMP external power supply (brick).
  • Weight: 8 lbs
  • Dimensions: 11 x 8 x 2.6 inches (277 x 194 x 67 mm).
  • Operating Temperature: -20 ~ 55°C.
  • Certification: CE Emission, FCC Class A, RoHS Compliant.
  • UPS-500AD External battery with 12+ hours runtime.
    • Capacity: Lithium battery 12V 26Ah 288Wh; Total Efficiency: Rated 500W, Peak 1000W; Output Waveform: Modified Sine Wave
    • AC Input Voltage: 110-220V 50/60Hz
    • AC Output Voltage: 110V 60Hz or 220V 50Hz
    • DC Output (4 barrel ports): 12V/8A (10A max)
    • USB Output (4 ports): 5V/6.2A
    • LED Light: 1W, Max 3W
    • Solar Input Charging Panel (panel not included): Voltage 18V, 20-100W
    • Overload and short circuit protection
    • Size: 12.60 x 5.91 x 8.66 in
    • Weight: 3.2 kg

Additional Feature Upgrades

Unless otherwise noted in the product description, these features usually cost extra:

  • WanPaths (LANforge-ICE feature set)
  • Virtual Interfaces: MAC-VLANs, 802.1Q VLANs, WiFi stations, etc
  • FIRE Connections: Base FIRE license includes 1000 active connections.
  • WiFi RF Attenuator: Adjust WiFi signal strength in a controllable manner.
  • SMA RF Cable Bundle: Used to cable LANforge WiFIRE radios to device-under-test.
  • LANforge-ICE Network Emulation.
  • VOIP: Each concurrent call over the included package requires a license.
  • Armageddon: Each pair of ports requires a license if not already included.

Thanks to Candela for the article.

The Benefits of Using 2-Wire Digital Master Clock System

If you are considering a wired clock system for your facility, the 2-Wire Digital Master Clock System option from Sapling may be the best choice for you. Take a look at the unique advantages of this system below. In addition to the written description, check out our video at the bottom to see a visual depiction of how the 2-Wire Digital Clock System works.

Power/Data on the Same Line

Most wired clock systems require three or four wires. With Sapling’s 2-Wire Digital Communication System, the converter box supplies the power and amplifies the data, so that power and data are integrated on the same line. Fewer wires mean a cleaner, less cumbersome, and more efficient system.

Instant Correction

As with all of Sapling’s clock systems, our goal is to provide synchronized, accurate time to keep your education, healthcare, or business facility operating at its best. The 2-Wire Digital Communication System provides time updates to all of the clocks as often as once per second. With such frequent corrections, your clocks are guaranteed to show the accurate time, all the time. Another auto-correction feature is the five-minute synchronization after a power outage. If power is lost, you won’t have to worry about resetting the clocks or waiting a few hours for them to be re-synchronized. Within five minutes of getting power back, the master clock will send a signal to reset all of the clocks to the accurate time. Even if a power outage causes some temporary chaos in other areas, clock malfunctions and time inaccuracies will not be added to the mix. Sapling takes care of that part for you.

Effortless Installation

The installation of the 2-Wire System is simple and straightforward for a few reasons. First, the low voltage requirement means that, in most countries, you do not need a certified electrician to install the system. Having two wires going from the master clock to each individual clock, instead of four, also makes setup quicker and easier. Even if a mistake is made with the two wires, our cutting-edge reverse polarity detection technology will recognize the error and autocorrect it. What could be easier than that?

Hopefully, the only thing easier is making the decision to install Sapling’s 2-Wire Digital Master Clock System for its advanced technological capabilities, ease, accuracy and the superior quality and service that you can expect from Sapling.

Thanks to Sapling for the article.

Key Factors in NCCM and CMDB Integration – Part 1 Discovery

Part 1: Discovery

“I am a rock, I am an island…” These lyrics by Simon and Garfunkel pretty appropriately summarize what most IT companies would like you to believe about their products: they are islands that stand alone and don’t need any other products to be useful. Well, despite what they want, the truth is closer to the lyrics of the Rolling Stones: “We all need someone we can lean on.” Music history aside, the fact is that interoperability and integration are among the most important keys to a successful IT Operations Management system. Why? Because no product truly does it all; and, when done correctly, the whole can be greater than the sum of the individual parts. Let’s take a look at the most common IT asset management structure and investigate the key factors in NCCM and CMDB integration.

Step 1. Discovery. The heart of any IT operations management system is a database of the assets being managed, commonly referred to as the Configuration Management Database, or CMDB. The CMDB contains all of the important details about the components of an IT system and the relationships between them. This includes information about the components of an asset, like physical parts and operating systems, as well as upstream and downstream dependencies. A typical item in a CMDB may have hundreds of individual pieces of information stored about it. A fully populated and up-to-date CMDB is an extremely useful data warehouse. But that begs the question: how does a CMDB get fully populated in the first place?

That’s where discovery software comes in. Inventory discovery systems can automatically gather these critical pieces of asset information directly from the devices themselves. Most hardware and software vendors have built-in ways of “pulling” that data from the device. Network systems mainly use SNMP. Windows servers can use SNMP as well as Microsoft’s proprietary WMI protocol. Other vendors, like VMware, also have an API that can be accessed to gather this data. Once the data has been gathered, the discovery system should be able to transfer it to the CMDB. It may be a “push” from the discovery system to the CMDB, or a “pull” going the other way, but there should always be a means of transfer, especially when the primary “alternative” ways of populating the CMDB are manually entering the data (sounds like fun) or uploading spreadsheet CSV files (and how do those get populated?).
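
As a small illustration of the SNMP side of discovery, the sketch below polls two standard MIB-II objects (sysDescr and sysObjectID) using the open-source pysnmp library. The host address and community string are placeholders:

```python
from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                          UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity)

def snmp_get(host, community, oid):
    """Fetch a single OID value via SNMPv2c."""
    err_indication, err_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),  # mpModel=1 -> SNMPv2c
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(oid)),
    ))
    if err_indication or err_status:
        raise RuntimeError(err_indication or err_status.prettyPrint())
    return str(var_binds[0][1])

# sysDescr.0 and sysObjectID.0: standard objects most devices expose.
for oid in ("1.3.6.1.2.1.1.1.0", "1.3.6.1.2.1.1.2.0"):
    print(oid, "=", snmp_get("192.0.2.1", "public", oid))
```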

Step 2. Updating. Once the CMDB is populated and running, you are done with the discovery software, right? Um, wrong. Unless your network never changes (please email me if that is the case, because I’d love to talk to you), you need to constantly update the CMDB. In fact, in many organizations the CMDB has a place in it for pre-deployment, meaning that new systems which will come online soon are entered into the CMDB. The good news is that your discovery system should be able to get that information out of the CMDB and use it as the basis for a future discovery run, which in turn adds details about the device back to the CMDB, and so on. When implemented properly and working well, this cyclical operation really can save enormous amounts of time and effort.

In the next post in this series, I’ll explore how having an up to date asset system makes other aspects of NCCM like Backup, Configuration, and Policy Checking much easier.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.

Rogers Set to ‘Ignite’ Gigabit Rollout in Toronto; 4m Homes to be Covered by End-2016

Rogers Communications, Canada’s second largest broadband provider by subscribers, has confirmed that the rollout of its planned 1Gbps internet service will commence this year in downtown Toronto and the Greater Toronto Area (GTA). Parts of the city earmarked for coverage include: Harbourfront, Cabbagetown, Riverdale, King Street West, Queen Street West, the Financial District, the Discovery District, Yonge & Bloor, Vaughan, Markham, Richmond Hill, Pickering, Ajax and Whitby. By the end of 2016, the Gigabit service – which will be branded ‘Ignite’ – will be available to over four million homes, representing Rogers’ entire cable footprint across Ontario and Atlantic Canada.

Thanks to TeleGeography for the article. 

Cyber Attacks – Businesses Held for Ransom in 2015

It is a really nice, crisp, clear morning here in Galway, a bit chilly though. Before I dropped my 14-year-old son at school, I tuned into an Irish station, NewsTalk, and caught most of a very interesting conversation between the presenter and a member of William Fry, a large Irish law firm.

They were discussing the increasing threat of cyber-attacks on Irish businesses. They spoke about the importance of detection, as 43% of businesses are not even aware that they are being attacked, and hackers can have access for weeks or months before they are detected.

They also indicated that 4 out of 5 businesses have been impacted. That is hard to believe, but if it also includes recent ransomware attacks, for example, then based on feedback from NetFort customers I would believe it. Maybe, as large enterprises spend more on security and have ‘tightened up’, the hacker has moved on and redefined the ‘low hanging fruit’: it is now the small to medium enterprise (SME).

It reminds me of a discussion I had last year with a network admin at a college in Chicago: ‘John, we are entering an era where continuous monitoring and visibility are becoming more and more critical, because there is no way all the inline active systems can protect us internally and externally these days.’

I am biased, but I think he is absolutely correct. Visibility and actionable intelligence, data that normal users can read, interpret, and act on, are critical.

Visibility not just at the edge, though, but also at the core, the internal network, because it is critical to be able to see and detect suspicious activity or network misuse there as well. It is also important to track this, to keep a record of it, to help troubleshoot and to provide proof for management, auditors, and even users.

I was discussing some recent LANGuardian use cases with an adviser in the US this week and mentioned that we are hearing the term ‘network misuse’ a lot more these days, though I was not sure why. Maybe organizations are becoming more concerned about data theft?

His explanation makes sense: for him, it was all about the attack surface. If users are misusing the network, accessing sites and applications that are non-critical, inappropriate, or infected, they are increasing the attack surface and the security risk, and that will result in pain for everybody.

In defence of Irish businesses, though, a lot of the systems out there in this space are only suitable for large enterprises: too expensive and complex to manage, tune, and get real actionable intelligence from. SMEs all over the world, not just in Ireland, cannot afford them in terms of time, people, and money.

Thanks to NetFort for the article.