How Testing Solutions Reduce Risk & Improve Customer Satisfaction

Imagine you’re trying to book a flight. You call the toll-free number and use the interactive voice response (IVR) to get through to bookings, but instead you are put through to the baggage area. You hang up and try again, but this time you wind up speaking to the airline lounge. Do you try a third time or call a competitor? I know what I would do.

The IVR is now a key component to delivering a great customer experience, so what steps should a business take to ensure these systems are working optimally? Do they take proactive measures, or just wait until a customer lets them know that something is broken? And, by the time it gets to this stage, how many customers may have been lost?

There are some businesses out there taking unnecessary risks when it comes to testing the reliability of their communications systems. Instead of performing extensive tests, they’re leaving it up to their customers to find any problems. Put bluntly, they’re rolling the dice by deciding to deploy systems that haven’t been properly tested. This is the primary line of communication with their customers and, in many cases, it’s also how they generate significant revenue. Why would they put both customer satisfaction and revenue in jeopardy?

Businesses have quite a few useful options when it comes to proactive testing. We recently acquired IQ Services, a company that tests these environments on a scheduled basis to make sure they’re working properly. It’s an automated process that tests how long it takes to answer, makes sure that the correct responses are given, and even performs a massive stress test with up to 80,000 concurrent calls. (It’s very useful for scenarios such as a large healthcare provider going through open enrollment.) These testing solutions are the way that businesses can ensure that their systems are working reliably under heavy load without leaving anything to chance.
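To make the idea concrete, here is a minimal, purely hypothetical sketch of the kind of check such a system automates: verifying that a test call is answered within an SLA and reaches the correct prompt. The helper `place_test_call()`, the 5-second SLA, and the prompt text are all invented for illustration, not part of any real testing product’s API.

```python
# Hypothetical illustration of a scheduled IVR health check.
# place_test_call() is a stand-in for actually dialing the system.

ANSWER_SLA_SECONDS = 5.0
EXPECTED_PROMPT = "Welcome to bookings."

def place_test_call():
    """Stand-in for dialing the IVR; returns (seconds_to_answer, prompt_heard)."""
    return 2.3, "Welcome to bookings."

def check_ivr():
    seconds_to_answer, prompt = place_test_call()
    return {
        "answered_within_sla": seconds_to_answer <= ANSWER_SLA_SECONDS,
        "correct_prompt": prompt == EXPECTED_PROMPT,
    }

print(check_ivr())
```

A real service would run checks like this on a schedule, from outside the network, and at volume for stress testing.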

In a world where we think of people as risk-averse, it’s interesting to observe anyone who chooses not to perform these tests. It’s not necessarily a conscious decision. If the situation were actually framed in a way where someone knew exactly what they were putting at risk, they’d probably make a better choice. You wouldn’t buy car insurance after you already had an accident. It simply wouldn’t do you much good at that point. The same thing applies to your communications systems. It only makes sense to take a proactive approach to make sure things are working as expected.

Now that you’re aware of what’s at risk if you don’t perform these important tests, don’t make the conscious decision to wait until something has already gone wrong. We’re talking about the potential loss of millions of dollars per hour (or even per minute in certain cases). Some strategic planning can give you the peace of mind that you’ll avoid a catastrophic loss of revenue in the future. Whenever you go live with a new feature, you can do so with confidence.

We’ve brought these new Testing Solutions into the Prognosis family. Above all, we want to make sure people understand these capabilities are available. You don’t have to be reactionary; there are proactive solutions to stop you from rolling the dice when it comes to your business and customers. Don’t leave the livelihood of your organization to chance. Of course, if you’re in the mood to gamble your money, there’s always Vegas.

Thanks to IR Prognosis for the article.


Infosim® Global Webinar Day October 29th, 2015 StableNet® 7.5 – What’s New?


Join Harald Hoehn, Senior Developer and Consultant with Infosim®, for a Webinar and Live Demo on the latest information regarding StableNet® 7.5

This Webinar will provide insight into:

StableNet® 7.5 New Features such as:

  • New Web Portal [Live Demo]
  • New Alarm Dashboard [Live Demo]
  • New NCCM Structurer [Live Demo]
  • DRG (Dynamic Rule Generation) as a new and revolutionary Fault Management concept

StableNet® 7.5 Enhancements such as:

  • Enhanced Weather Maps [Live Demo]
  • Improved Trap- and Syslog-Forwarding [Live Demo]
  • Advanced Netflow Features [Live Demo]
  • Enhanced Support for SDN

But wait – there is more! We are giving away three Amazon Gift Cards (value $50) on this Global Webinar Day. To join the draw, simply answer the trivia question that will be part of the questionnaire at the end of the Webinar. Good Luck!

(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

How Can We Monitor Traffic Associated with Remote Sites?

Many IT teams are now tasked with managing remote sites without the luxury of local IT support. Business owners expect everything to be done remotely; we do live in the connected age, don’t we? Is it possible to see what is happening in these networks without the need to install client or agent software everywhere?

You can gather some network performance information using SNMP or WMI, but you will be limited to alerts or high-level information. What you need is some form of deeper traffic analysis. Software applications that do traffic analysis are ideal for troubleshooting LAN and link problems associated with remote sites.

There are two main technologies available to analyze network traffic associated with remote sites: flow analysis and packet capture. Flow statistics are typically available from devices that can route data between two networks; most Cisco routers support NetFlow, for example. If your remote networks are flat (a single subnet) or you don’t have flow options on your network switches, then packet capture is a viable option.
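As a rough illustration of what flow analysis yields, the sketch below aggregates NetFlow-style records (source, destination, bytes) into per-conversation totals. The sample records are invented; a real collector would parse exported flow packets rather than a hard-coded list.

```python
from collections import defaultdict

# Minimal sketch of flow analysis: aggregate per-conversation byte totals
# from NetFlow-style records. The sample flows are made up for illustration.

flows = [
    ("10.0.1.5", "192.168.1.10", 1500),
    ("10.0.1.5", "192.168.1.10", 3000),
    ("10.0.2.7", "192.168.1.10", 800),
]

def top_conversations(records):
    totals = defaultdict(int)
    for src, dst, nbytes in records:
        totals[(src, dst)] += nbytes
    # Sort conversations by total bytes, heaviest first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(top_conversations(flows))
```

Packet capture gives you the same picture plus payload-level detail, at the cost of needing a SPAN/mirror port and more storage.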

You can implement packet capture by connecting a traffic analysis system to a SPAN or mirror port on a network switch at your remote site. You can then log onto your traffic analysis system remotely to see what is happening within these networks.


NetFort LANGuardian has multiple means of capturing data associated with remote sites. The most popular option is to install an instance of the LANGuardian software at your HQ. Sensors can be deployed on physical or virtual platforms at important remote sites. Data from these is stored centrally so you get a single reference point for all traffic and security information across local and remote networks.

LANGuardian can also capture flow-based statistics such as NetFlow, IPFIX, and sFlow; routers and switches at the remote sites can be configured to send flow traffic to LANGuardian. Watch out for issues associated with NetFlow, as it has limitations when it comes to monitoring cloud computing applications.

Download White Paper

How to monitor WAN connections with NetFort LANGuardian

Download this whitepaper which explains in detail how you can monitor WAN connections with NetFort LANGuardian



Thanks to NetFort for the article.

Webinar- Best Practices for NCCM


Most networks today have a “traditional” IT monitoring solution in place which provides alarming for devices, servers and applications. But as the network evolves, so do its complexity and security risks, and it now makes sense to formalize the process, procedures, and policies that govern access and changes to these devices. Vulnerability and lifecycle management also play an important role in maintaining the security and integrity of network infrastructure.

Network Configuration and Change Management – NCCM is the “third leg” of IT management with traditional Performance and Fault Management (PM and FM) being one and two. The focus of NCCM is to ensure that as the network grows, there are policies and procedures in place to ensure proper governance and eliminate preventable outages.

Misapplied configurations are a major source of network performance and security issues; eliminating them can reduce such issues dramatically (by one estimate, from 90% to 10%).

Learn about the best practices for Network Configuration and Change Management to both protect and report on your critical network device configurations:

  1. Enabling of Real-Time Configuration Change Detection
  2. Service Design Rules Policy
  3. Auto-Discovery Configuration Backup
  4. Regulatory Compliance Policy
  5. Vendor Default and Security Access Policies
  6. Vulnerability Optimization and Lifecycle Announcements
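As an illustration of practice 1, real-time configuration change detection often comes down to comparing fingerprints of successive config snapshots. The sketch below shows the core idea; the configs are invented, and a real NCCM tool would collect snapshots from live devices and alert on the difference.

```python
import hashlib

# Sketch of configuration change detection: hash successive config
# snapshots and flag any drift from the approved baseline.

def config_fingerprint(config_text: str) -> str:
    return hashlib.sha256(config_text.encode()).hexdigest()

baseline = "hostname core-sw1\nvlan 10\n name users\n"
current  = "hostname core-sw1\nvlan 10\n name users\nvlan 99\n name rogue\n"

def detect_change(baseline_cfg, current_cfg):
    # True when the running config no longer matches the baseline
    return config_fingerprint(baseline_cfg) != config_fingerprint(current_cfg)

print(detect_change(baseline, current))
```

Once drift is detected, the same snapshots feed a diff so the operator can see exactly which lines changed.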

Date: October 28th
Time: 2:00pm Eastern


Register for webinar NOW: http://hubs.ly/H01gB720

SDN/NFV – From Theory to Praxis with Infosim® StableNet®

InterComms talks to Marius Heuler, CTO Infosim®, about Infosim® StableNet® and the management and orchestration of SDN and NFV environments

Marius Heuler has more than 15 years of experience in network management and optimization. As CTO and founding member of Infosim®, he is responsible for leading the Infosim® technical team in architecting, developing, and delivering StableNet®. He graduated from the University of Würzburg with a degree in Computer Science, holds several Cisco certifications, and has subject matter expert knowledge in various programming languages, databases and protocol standards. Prior to Infosim®, Marius held network management leadership positions and performed project work for Siemens, AOK Bavaria and Vodafone.

Q: The terms SDN and NFV recently have been on everybody’s lips. However, according to the critics, it is still uncertain how many telcos and enterprises use these technologies already. What is your point of view on this topic?
A: People tend to talk about technologies and ask for the support of a certain interface, service, or technology. Does your product support protocol X? Do you offer service Y? What about technology Z?

Experience shows that when looking closer at the actual demand, it is often not the particular technology, interface, or service people are looking for. What they really want is a solution for their particular case. That is why I would rather not expect anybody to start using SDN or NFV as an end in itself. People will start using these technologies once they see that it is the best (and most cost-efficient) way to relieve their pain points.

Andrew Lerner, one of the Gartner Blog Network members, recently gave a statement pointing in the exact same direction, saying that Gartner won’t publish an SDN Magic Quadrant, “because SDN and NFV aren’t markets. They are an architectural approach and a deployment option, respectively.”


Q: You have been talking about use cases for SDN and NFV. A lot of these use cases are also being discussed in different standardization organizations or in research projects. What is Infosim®’s part in this?
A: There are indeed a lot of different use cases being discussed and as you mentioned a lot of different standardization and research activities are in progress. At the moment, Infosim® is committing to this area in various ways: We are a member of TM Forum and recently also joined the ETSI ISG NFV. Furthermore, we follow the development of different open source activities, such as the OpenDaylight project, ONOS, or OPNFV, just to name a few. Besides this, Infosim® is part of several national and international research projects in the area of SDN and NFV where we are working together with other subject matter experts and researchers from academia and industry. Topics cover among others operation and management of SDN and NFV environments as well as security aspects. Last but not least, Infosim® is also in contact with various hardware and software vendors regarding these topics. We thereby equally look on open source solutions as well as proprietary ones.

Q: Let us talk about solutions then: With StableNet® you are actually quite popular and successful in offering a unified network management solution. How do SDN and NFV influence the further development of your offering?
A: First of all, we are proud to be one of the leading manufacturers of automated Service Fulfillment and Service Assurance solutions. EMA™ rated our solution as the most definitive Value Leader in the EMA™ Radar for Enterprise Network Availability Monitoring Systems in 2014. We do not see ourselves as one of the next companies to develop and offer their own SDN controller or cloud computing solution. Our intent is rather to provide our well-known strength in unified network management for the SDN/NFV space as well. This includes topics like Service Assurance, Fault Management, Configuration and Provisioning, Service Modelling, etc.

Q: Are there any particular SDN controller or cloud computing solutions you can integrate with?
A: There is a wide range of different SDN controllers and cloud computing solutions that are currently of general interest. In its current SDN controller report, SDxCentral gave an overview and comparison of the most common open source and proprietary SDN controllers. None of these controllers can be named as a definite leader. Equally, regarding the NFV area, the recent EMA™ report on Open Cloud Management and Orchestration showed that besides the commonly known OpenStack, there are also many other cloud computing solutions that enterprises are looking at and thinking of working with.

These developments remind me of something that, with my experience in network management, I have known for over a decade now. Also when looking at legacy environments there have always been competing standards. Despite years of standardization activities of various parties, often none of the competing standards became the sole winner and rendered all other interfaces or technologies obsolete. In fact, there is rather a broad range of various technologies and interfaces to be supported by a management system.

This is one of the strengths that we offer with StableNet®. We currently support over 125 different standardized and vendor-specific interfaces and protocols in one unified network management system. Besides this, with generic interfaces both for monitoring and configuration purposes we can easily integrate with any structured data source by the simple creation of templates rather than the complicated development of new interfaces. This way, we can shift the main focus of our product and development activities to the actual management and orchestration rather than the adaption to new data sources.

Q: Could you provide some examples here?
A: We continuously work on the extension of StableNet® with innovative new features to further automate the business processes of our customers and to simplify their daily work. Starting from Version 7, we have extended our existing integration interfaces with a REST API to further ease the integration with third-party products. With Dynamic Rule Generation, Distributed Syslog Portal, and Status Measurements we offer the newest technologies for efficient alarming and fault management. Our StableNet® Embedded Agent (SNEA) allows for ultra-scalable, distributed performance monitoring as well as for the management of IoT infrastructures. Being part of our unified network management solution, all these functionalities, including the ultra-scalable and vendor-agnostic configuration management, can equally be used in the context of SDN and NFV. A good way to keep up to date with our newest developments is our monthly Global Webinar Days. I would really recommend having a look at those.

Q: As a last question, since we have the unique chance to directly talk with the CTO of Infosim®, please let us be a little curious. What key novelties can people expect to come next from Infosim®?
A: There are of course many things that I could mention here, but the two areas that will probably have the most significant impact on management and orchestration are our new service catalog and the new tagging concept. With the service catalog the management is moved from a rather device- or server-based perspective to a holistic service-based view. This tackles both the monitoring and the configuration perspective and can significantly simplify and speed up common business processes. This is of course also related to our new tagging concept.

This new approach is a small revolution in the way that data can be handled for management and orchestration. We introduce the possibility of an unlimited number of customizable tags for each entity, be it a device, an interface, or an entire service, and combine this with automated relations and inheritance of properties between the different entities. Furthermore, the entities can be grouped in an automated way according to arbitrary tag criteria. This significantly extends the functionality, usability, and also the visualization possibilities.

Thanks to InterComms for the article.

Don’t Be Lulled to Sleep with a Security Fable. . .

Once upon a time, all you needed was a firewall to call yourself “secure.” But then, things changed. More networks are created every day, every network is visible to the others, and they connect with each other all the time—no matter how far away or how unrelated.

And malicious threats have taken notice . . .

As the Internet got bigger, anonymity got smaller. It’s impossible to go “unnoticed” on the Internet now. Everybody is a target.

In today’s network landscape, every network is under the threat of attack all the time. The network “security perimeter” has expanded in reaction to new attacks, new breeds of hackers, more regions coming online, and emerging regulations.

Security innovation tracks threat innovation by creating more protection—but this comes with more complexity, more maintenance, and more to manage. Security investment rises with expanding requirements. Just a firewall doesn’t nearly cut it anymore.

Next-generation firewalls, IPS/IDS, antivirus software, SIEM, sandboxing, DPI: all of these tools have become part of the security perimeter in an effort to stop traffic from getting in (and out) of your network. And they are overloaded, and overloading your security teams.

In 2014, there were close to 42.8 million cyberattacks (roughly 117,339 attacks each day) in the United States alone. These days, the average North American enterprise fields around 10,000 alerts each day from its security systems, far more than its IT team can possibly process, a Damballa analysis of traffic found.

Your network’s current attack surface is huge. It is the sum of every access avenue an attacker could use to enter your network (or take data out of your network). Basically, every connection to and/or from anywhere.

There are two types of traffic that hit every network: The traffic worth analyzing for threats, and the traffic not worth analyzing for threats that should be blocked immediately before any security resource is wasted inspecting or following up on it.

Any way to filter out traffic that is either known to be good or known to be bad, and doesn’t need to go through the security system screening, reduces the load on your security staff. With a reduced attack surface, your security resources can focus on a much tighter band of information, and not get distracted by non-threatening (or obviously threatening) noise.
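A minimal sketch of that triage idea, with placeholder addresses: traffic from known-bad sources is blocked outright, known-good sources bypass deep inspection, and only the remainder reaches the (expensive) security stack.

```python
# Sketch of attack-surface reduction by pre-filtering traffic against
# known-good and known-bad lists before deep inspection.
# The addresses below are placeholders for illustration.

KNOWN_BAD = {"203.0.113.9"}       # e.g. fed from a threat-intel block list
KNOWN_GOOD = {"198.51.100.20"}    # e.g. a trusted partner address

def triage(src_ip):
    if src_ip in KNOWN_BAD:
        return "block"      # drop immediately; no inspection wasted
    if src_ip in KNOWN_GOOD:
        return "allow"      # pass through without deep inspection
    return "inspect"        # only this band reaches the security stack

print([triage(ip) for ip in ["203.0.113.9", "198.51.100.20", "192.0.2.1"]])
```

In practice this filtering happens in hardware or at a packet broker, but the decision logic is the same: shrink the band of traffic your analysts and tools actually have to look at.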

Thanks to Ixia for the article.

5 Reasons Why You Should Include LAN Switches in Your NCCM Scope

We’ve been doing a lot of blogging around here lately about NCCM and the importance of having an automated configuration and change management system. We’ve even published a Best practices guide for NCCM. One of the main points in any NCCM system is having consistent and accurate configuration backups of all of your “key” devices.

When I ask Network Managers to name their key devices, they generally start with WAN / Internet routers and Firewalls. This makes sense of course because, in a modern large-scale network, connectivity (WAN / Internet routers) & security (Firewalls) tend to get most of the attention. However, we think that it’s important not to overlook core and access switching layers. After all, without that “front line” connectivity – the internal user cannot get out to the WAN/Internet in the first place.

With that in mind, today’s blog offers up 5 Reasons Why You Should Include LAN Switches in Your NCCM Scope


1. Switch Failure

LAN switches tend to be some of the most utilized devices in a network. They also don’t generally come with the top-quality hardware and redundant power supplies that core devices have. In many cases, they may also be located in less than pristine locations. Dirty manufacturing floors, dormitory closets, remote office kitchens – I have seen access switches in all of these places. When you combine a heavy workload with tough conditions and less expensive parts, you have a recipe for devices that will fail at a higher rate.

So, when the time comes to replace or upgrade a switch, having its configuration backed up, and a system which can automate the provisioning of the new device, can be a real time and workload saver. Just put the IP address and some basic management information on the new device, and the NCCM tool should be able to take care of the rest in mere minutes.
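A hypothetical sketch of that re-provisioning step: render the backed-up configuration with the replacement device’s management details. The template and helper below are invented for illustration; a real NCCM tool would then push the result to the device.

```python
# Sketch of automated re-provisioning from a config backup: stamp the
# replacement switch's management details into the stored configuration.

backup_config = """hostname {hostname}
interface vlan1
 ip address {mgmt_ip} 255.255.255.0
"""

def render_replacement(hostname, mgmt_ip):
    # A real tool would push this rendered config over SSH/SNMP/API
    return backup_config.format(hostname=hostname, mgmt_ip=mgmt_ip)

print(render_replacement("access-sw3", "10.1.2.3"))
```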

2. User Tracking

As the front line connectivity device for the majority of LAN users, the switch is the best place to track down user connections. You may want to know where a particular user is located, or maybe you are trying to troubleshoot an application performance issue; no matter what the cause, it’s important to have that connectivity data available to the IT department. NCCM systems may use layer 2 management data from CDP/LLDP as well as other techniques to gather this information. A good system will allow you to search for a particular IP/MAC/DNS and return connectivity information like which device/port it is connected to as well as when it was first and last seen on that port. This data can also be used to draw live topology maps which offer a great visualization of the network.
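A toy version of that lookup, over invented CDP/LLDP-style data: given a MAC address, return the switch and port where it was last seen.

```python
# Sketch of user tracking from layer-2 data gathered off switches.
# The table below is an invented sample of (mac, switch, port, last_seen).

l2_table = [
    ("aa:bb:cc:00:11:22", "sw-floor2", "Gi1/0/14", "2015-10-01T09:12"),
    ("aa:bb:cc:00:11:22", "sw-floor2", "Gi1/0/14", "2015-10-02T17:40"),
    ("de:ad:be:ef:00:01", "sw-lab",    "Gi1/0/2",  "2015-09-28T08:03"),
]

def locate(mac):
    hits = [row for row in l2_table if row[0] == mac]
    if not hits:
        return None
    latest = max(hits, key=lambda row: row[3])  # ISO timestamps sort lexically
    return {"switch": latest[1], "port": latest[2], "last_seen": latest[3]}

print(locate("aa:bb:cc:00:11:22"))
```

The same records, joined with topology data, are what let an NCCM system draw live maps of who is connected where.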

3. Policy Checking

Another area where the focus tends to be on “gateway” devices such as WAN routers and firewalls is policy checking. While those devices certainly should have lots of attention paid to them, especially in the area of security policies, we believe that it’s equally as important not to neglect the access layer when it comes to compliance. In general terms, there are two aspects of policy checking which need to be addressed on these devices: QoS policies and regulatory compliance policies.

The vast majority of VoIP and Video systems will connect to the network via a traditional LAN switch. These switches, therefore, must have the correct VLAN and QoS configurations in order to accurately forward the traffic in the appropriate manner so that Quality of Service is maintained.

If your organization is subject to regulatory compliance standards such as PCI or HIPAA, then these regulations are applicable to all devices and systems that are connected to or pass sensitive data.

In both of these cases, it is incredibly important to ensure policy compliance on all of your devices, even the ones on the “edge” of your network.
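A simple sketch of such a policy check: scan each access-switch config for required QoS/VLAN lines and report what is missing. The policy lines and configs are invented; a real tool would pull configs from its backup store and evaluate much richer rules.

```python
# Sketch of a policy compliance check over access-switch configs:
# every config must contain the required QoS/VLAN lines.

REQUIRED_LINES = {"mls qos trust dscp", "vlan 100"}  # e.g. voice VLAN + QoS

configs = {
    "access-sw1": "vlan 100\nmls qos trust dscp\n",
    "access-sw2": "vlan 100\n",  # missing the QoS line
}

def non_compliant(device_configs):
    violations = {}
    for device, cfg in device_configs.items():
        present = {line.strip() for line in cfg.splitlines()}
        missing = REQUIRED_LINES - present
        if missing:
            violations[device] = sorted(missing)
    return violations

print(non_compliant(configs))
```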

4. Asset Lifecycle Management

Especially in larger and more spread out organizations, just understanding what you have can be a challenge. At some point (and always when you are least prepared for it) you will get the “What do we have?” question from a manager. An NCCM system is exactly the right tool to use to answer this question. Even though NCCM is generally considered to be the tool for change – it is equally the tool for information. Only devices that are well documented can be managed and that documentation is best supplied through the use of an automated inventory discovery system. Likewise, when it is time for a technology refresh, or even the build out of a new location or network, understanding the current state of the existing network is the first step towards building an effective plan for the future.

5. New Service Initiatives

Whether you are a large IT shop or a service provider – new applications and services are always coming. In many cases, they will require widespread changes to the infrastructure. The change may be small or large, but if it needs to be implemented on a number of systems at the same time, it will require coordination and automation to get it done efficiently and successfully. In some instances, this will only require changes to the core, but in many cases it will require changes to the switch infrastructure as well. This is what NCCM tools were designed to do, and there is no reason you should be handcuffed in your efforts to implement change just because you haven’t added all of your devices into the NCCM platform.

Networks are complicated systems of many individual components spread throughout various locations, with interdependencies that can be hard to comprehend without the help of network management tools. While the temptation may be to focus on the core systems, we think it’s critical to view all parts, even the underappreciated LAN switch, as equal pieces of the puzzle that should not be overlooked when implementing an NCCM system.

Top 20 Best Practices for NCCM

Thanks to NMSaaS for the article.