Infosim® Announces Release of StableNet® 7.5

Infosim®, the technology leader in automated Service Fulfillment and Service Assurance solutions, today announced the release of version 7.5 of its award-winning software suite StableNet® for Telco and Enterprise customers.

StableNet® 7.5 provides a significant number of powerful new features, including:

  • Dynamic Rule Generation (DRG), a new and revolutionary Fault Management concept
  • REST interface supporting the new StableNet® iPhone (and upcoming Android) app
  • Highly customizable dashboard in both the GUI and Web Portal
  • Integration with SDN/NFV element managers
  • NCCM structurer enabling creation of optimized and well-formatted device configurations
  • New High-Availability (HA) infrastructure based on Linux HA technology
  • Syslog & Trap Forwarding enabling integration of legacy systems that rely on their original trap & syslog data
  • Open Alarms GeoMap enabling geographical representation of open alarms

StableNet® version 7.5 is available for purchase now. Customers with current maintenance contracts may upgrade free of charge as per the terms and conditions of their contract.

Supporting Quotes:

Jim Duster, CEO, Infosim®, Inc.

“We are happy that our newest release is again full of innovative features like DRG. Our customers are stating that this new DRG feature will help them receive a faster ROI by improving automation in their fault management area and dramatically increasing the speed of Root-Cause Analysis.”

Download the release notes here

Thanks to Infosim for the article.

Infosim® Global Webinar Day October 29th, 2015: StableNet® 7.5 – What’s New?

Join Harald Hoehn, Senior Developer and Consultant with Infosim®, for a Webinar and Live Demo on the latest information regarding StableNet® 7.5.

This Webinar will provide insight into:

StableNet® 7.5 New Features such as:

  • New Web Portal [Live Demo]
  • New Alarm Dashboard [Live Demo]
  • New NCCM Structurer [Live Demo]
  • DRG (Dynamic Rule Generation) as a new and revolutionary Fault Management concept

StableNet® 7.5 Enhancements such as:

  • Enhanced Weather Maps [Live Demo]
  • Improved Trap- and Syslog-Forwarding [Live Demo]
  • Advanced Netflow Features [Live Demo]
  • Enhanced Support for SDN

But wait – there is more! We are giving away three $50 Amazon Gift Cards on this Global Webinar Day. To join the draw, simply answer the trivia question that will be part of the questionnaire at the end of the Webinar. Good Luck!

(Take a look at our previous Webinars here.)

Thanks to Infosim for the article.

NetFort LANGuardian Provides Visibility into SolarWinds

If you use SolarWinds and are looking for more visibility into security incidents, alerts, and granular detail, LANGuardian can integrate with SolarWinds NPM to monitor the traffic flowing through the network, giving you a comprehensive network monitoring package.

NetFort LANGuardian integrates with SolarWinds, giving you visibility into all network and user activity across your network. With LANGuardian integration you get a single-pane-of-glass view of network operations and security. All the data is retained in LANGuardian's own built-in database for months and has no performance impact on your SolarWinds deployment.

Here are the top reasons SolarWinds customers integrate with LANGuardian:

  1. Looking for a network security tab and more visibility into suspicious activity or security incidents.
  2. Lack of granularity and actual proof when troubleshooting problems, especially traffic, bandwidth, and Internet issues; NTA does not provide the detail required.
  3. No NetFlow available. Prospects want to see what is happening inside their networks, not just at the network edge, but do not have NetFlow-capable devices where visibility is required.
  4. Need to monitor activity on remote sites but do NOT want any extra traffic over expensive WAN links.
  5. Want to see usernames in reports and immediately see who is responsible for an event or traffic; IP addresses and machine names are no longer enough.
  6. Need to access historical metadata for forensics and for investigating network, user, and security incidents, such as finding the source of ransomware, identifying who accidentally deleted or moved a folder, or determining who is hogging all the bandwidth and how.

You can access a live SolarWinds NPM system complete with the NetFort integration here: http://demo2.netfort.com/Orion/SummaryView.aspx?viewid=1

3 Key Cloud Monitoring Metrics

Reliably Measure Performance, and Ensure Effective Monitoring in the Cloud

When adopting monitoring strategies for cloud services, it’s important to establish performance metrics and baselines. This involves going beyond vendor-provided data to ensure your monitoring tools provide a complete, independent view of cloud service performance.

Taking a holistic approach ensures your staff can manage any performance issue and provide proof to cloud providers if their systems are the cause of the issue.

CLOUD MONITORING CHALLENGES

The primary issue with ensuring reliable cloud performance revolves around the lack of metrics for monitoring the Service Level Agreement (SLA) and performance. Understanding these issues is critical for developing successful monitoring strategies.

HERE ARE A FEW COMMON CHALLENGES

Obtaining Application Performance Agreements

While vendors will highlight service or resource availability in their SLAs, application performance metrics or expectations are typically absent. In the case of Salesforce.com, the SLA (if one is provided) discusses downtime, but there aren’t any performance guarantees.

Lack of Performance Metrics

Similar to SLAs, organizations should not expect vendors to provide any meaningful performance metrics beyond service and resource availability. If managers rely on trust.salesforce.com to track CRM service performance and availability, they are limited to:

  • Monitoring server status
  • Transactions handled
  • Server processing speed

These reports fail to provide meaningful performance metrics to evaluate service degradation issues or to isolate problems to the cloud vendor.

No Meaningful Performance Benchmarks

Most Software as a Service (SaaS) vendors don’t offer any benchmarks or averages that allow you to forecast potential performance or service demand. While Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) vendors will provide cloud performance benchmarks, these numbers don’t take into account:

  • Internet latency
  • The location of your users relative to the services
  • Your network environment

The challenge is to properly benchmark and create meaningful metrics for your organization.

These challenges require organizations to take a holistic approach to monitoring by implementing solutions that allow them to seamlessly view external components and performance as if they were a part of their internal network.

EFFECTIVE CLOUD MONITORING STRATEGIES

Given the lack of metrics to assess performance for a specific organization, how do engineers successfully manage user interaction with cloud services?

ENSURE SERVICE PERFORMANCE

While SLAs may not guarantee performance, cloud vendors should take action when clear proof shows their systems are the source of the problem.

How can you ensure reliable performance?

Set up synthetic transactions to execute a specific process on the cloud provider’s site. By regularly conducting these transactions, monitoring routes and response times, you can pinpoint the potential source of delay between the internal network, ISP, and cloud provider. This data along with web error codes can be provided to the cloud vendor to help them resolve issues.
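To make the approach concrete, here is a minimal synthetic-transaction sketch in Python. The target URL, the two-second threshold, and the five-minute interval are illustrative assumptions, not values prescribed by any particular monitoring product.

# Minimal synthetic-transaction sketch: periodically exercise a cloud
# endpoint, record the response time and HTTP status, and flag slow or
# failed runs so the evidence can be shared with the cloud vendor.
import time
import requests

TARGET_URL = "https://cloud-provider.example.com/login"  # hypothetical endpoint
SLOW_THRESHOLD_SECONDS = 2.0                             # illustrative threshold

def run_synthetic_transaction():
    start = time.monotonic()
    try:
        response = requests.get(TARGET_URL, timeout=10)
    except requests.RequestException as exc:
        print(f"FAILED after {time.monotonic() - start:.2f}s: {exc}")
        return
    elapsed = time.monotonic() - start
    if response.status_code >= 400 or elapsed > SLOW_THRESHOLD_SECONDS:
        # Web error codes plus timing evidence help the vendor isolate the issue.
        print(f"DEGRADED: HTTP {response.status_code} in {elapsed:.2f}s")
    else:
        print(f"OK: HTTP {response.status_code} in {elapsed:.2f}s")

if __name__ == "__main__":
    while True:
        run_synthetic_transaction()
        time.sleep(300)  # repeat every five minutes

Recording the route taken by each run (for example with traceroute) alongside these results makes it easier to attribute delay to the internal network, the ISP, or the cloud provider.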

Network teams should have a true view of performance that tracks packets from the user over the ISP to the cloud provider, including any cloud-hosted components.

OVERCOME LACK OF PERFORMANCE METRICS

Depending upon the service, vendors will provide varying levels of detail. The type of service also impacts what you can monitor.

What metrics can you use?

In the case of SaaS, you can monitor user interactions and synthetic transactions, and rely on vendor-provided reports.

For PaaS, monitoring solutions such as Observer Infrastructure provide significant performance metrics. These metrics can be viewed alongside response time metrics for a more complete picture of service health.

In the case of IaaS, you have access to the server’s operating system and applications. In addition to polling performance metrics of cloud components, analysis devices can be placed on the cloud server for an end-to-end view of performance.
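As a simple illustration of polling performance metrics at the operating-system level of an IaaS server, the Python sketch below samples CPU, memory, and disk utilization with the psutil library. The metric set is an assumption chosen for illustration and is not tied to any specific monitoring product.

# Minimal sketch of polling OS-level metrics on an IaaS server.
import psutil

def poll_server_metrics():
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),      # CPU load sampled over 1 second
        "memory_percent": psutil.virtual_memory().percent,  # RAM currently in use
        "disk_percent": psutil.disk_usage("/").percent,     # root filesystem usage
    }

if __name__ == "__main__":
    for name, value in poll_server_metrics().items():
        print(f"{name}: {value:.1f}%")

In practice these samples would be written to the same store as your response-time data so that server-side and network-side views can be correlated.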

ADDRESS PERFORMANCE BENCHMARKS

With monitoring systems in place, it’s important to baseline response times and cloud component performance. These baselines can help you become proactive in your management strategy. How can you use baselines?

Using these baselines, you can set meaningful benchmarks for your specific organization. From this point, alarms can be set to proactively alert teams of degrading performance. Utilizing long-term packet capture, you can investigate performance from the user to the cloud and isolate potential problems.
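The arithmetic behind such an alarm can be sketched in a few lines of Python. The historical response times and the mean-plus-three-standard-deviations rule below are illustrative assumptions; a percentile-based threshold would work just as well once enough history has been collected.

# Minimal baselining sketch: derive a benchmark from historical response
# times and raise an alert when a new sample exceeds the baseline by
# three standard deviations.
import statistics

historical_response_times = [0.82, 0.91, 0.78, 0.88, 0.95, 0.84, 0.90]  # seconds (example data)

baseline_mean = statistics.mean(historical_response_times)
baseline_stdev = statistics.stdev(historical_response_times)
alarm_threshold = baseline_mean + 3 * baseline_stdev

def check_sample(response_time_seconds):
    if response_time_seconds > alarm_threshold:
        print(f"ALERT: {response_time_seconds:.2f}s exceeds the {alarm_threshold:.2f}s baseline threshold")
    else:
        print(f"OK: {response_time_seconds:.2f}s is within the baseline")

check_sample(0.87)  # within the baseline
check_sample(1.45)  # triggers the alert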

Thanks to Network Instruments for the article.

Ixia Study Finds That Hidden Dangers Remain within Enterprise Network Virtualization Implementations

Ixia (Nasdaq: XXIA), a leading provider of application performance and security resilience solutions, announced global survey results demonstrating that while most companies believe virtualization technology is a strategic priority, there are clear risks that need to be addressed. Ixia surveyed more than 430 targeted respondents in South and North America (50 percent), APAC (26 percent) and EMEA (24 percent).

The accompanying report, titled The State of Virtualization for Visibility Architecture™ 2015, highlights key findings from the survey, including:

  • Virtualization technology could create an environment for hidden dangers within enterprise networks. When asked about top virtualization concerns, over one third of respondents said they were concerned with their ability (or lack thereof) to monitor the virtual environment. In addition, only 37 percent of the respondents noted they are monitoring their virtualized environment in the same manner as their physical environment. This demonstrates that there is insufficient monitoring of virtual environments. At the same time, over 2/3 of the respondents are using virtualization technology for their business-critical applications. Without proper visibility, IT is blind to any business-critical east-west traffic that is being passed between the virtual machines.
  • There are knowledge gaps regarding the use of visibility technology in virtual environments. Approximately half of the respondents were unfamiliar with common virtualization monitoring technology – such as virtual tap and network packet brokers. This finding indicates an awareness gap about the technology itself and its ability to alleviate concerns around security, performance and compliance issues. Additionally, less than 25 percent have a central group responsible for collecting and monitoring data, which leads to a higher probability for a lack of consistent monitoring and can pose a huge potential for improper monitoring.
  • Virtualization technology adoption is likely to continue at its current pace for the next two years. Almost 75 percent of businesses are using virtualization technology in their production environment, and 65 percent intend to increase their use of virtualization technology in the next two years.
  • Visibility and monitoring adoption is likely to continue growing at a consistent pace. The survey found that a large majority (82 percent) agree that monitoring is important. While 31 percent of respondents indicated they plan on maintaining current levels of monitoring capabilities, nearly 38 percent of businesses plan to increase their monitoring capabilities over the next two years.

“Virtualization can bring companies incredible benefits – whether in the form of cost or time saved,” said Fred Kost, Vice President of Security Solutions Marketing, Ixia. “At Ixia, we recognize the importance of this technology transformation, but also understand the risks that are involved. With our solutions, we are able to give organizations the necessary visibility so they are able to deploy virtualization technology with confidence.”

Download the full research report here.

Ixia's The State of Virtualization for Visibility Architectures 2015

Thanks to Ixia for the article.

JDSU’s Network Instruments Named a Leader in Gartner Magic Quadrant for Second Consecutive Year

Ranking Based on Completeness of Vision and Ability to Execute in Network Performance Monitoring and Diagnostics Market

MILPITAS, Calif., Feb. 23, 2015 – Network Instruments®, a JDSU Performance Management Solution (NASDAQ: JDSU), has been positioned as a Leader in the new Gartner® Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD). In the Gartner, Inc. report, Network Instruments is recognized as a network performance management leader in its category for completeness of vision and ability to execute. The Gartner Magic Quadrant is considered one of the tech industry’s most influential evaluations of enterprise network solutions.

Organizations, specifically network managers, require the visibility and insight NPMD solutions provide to enable staff to anticipate performance constraints before they impact service levels of critical initiatives as well as to guarantee user productivity. As the report notes, “At an estimated $1.1 billion, the NPMD market is a fast-growing segment of the larger network management space ($1.9 billion in 2013), and overlaps slightly with aspects of the application performance monitoring (APM) space ($2.4 billion in 2013).”1

“Measuring and reporting on the performance of the network is crucial to ensuring that performance is maintained at an acceptable level,” wrote Gartner analyst Vivek Bhalla in Technology Overview for Network Performance Monitoring and Diagnostics, September 2014. “NPMD tools are also needed to highlight opportunities to enhance business value for internal end users and external customers through improved application delivery. Finally, NPMD tools enable improved capacity management, thereby lowering capital investment for networking equipment and services.”

“We believe being named a leader by Gartner for both years of the NPMD Magic Quadrant validates our continued commitment to helping enterprise customers thrive in rapidly changing network and cloud environments,” said Bruce Clark, vice president & general manager at JDSU. “Enterprise network teams rely on us to provide in-depth understanding and analytics of the user experience from the network, application, and cloud to speed up problem resolution and optimize performance.”

Network Instruments’ current NPMD solution is the Observer® Performance Management Platform, consisting of the Observer Apex™, Observer Management Server, Observer GigaStor™, Observer Analyzer, Observer Infrastructure, Observer Matrix™ and Observer nTAPs™ products.

Disclaimer:
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

1Gartner, Magic Quadrant for Network Performance Monitoring and Diagnostics, by Gartner Analysts Vivek Bhalla, Colin Fletcher and Gary Spivek, February 10, 2015.

Thanks to JDSU for the article. 

Infosim® Global Webinar Day – A View of Infosim® StableNet®

It’s good to be #1!

A view of Infosim® StableNet® based on criteria and results of the EMA Radar™ for Enterprise Network Availability Monitoring System (ENAMS): Q3 2014

Join John Olson, VP Technical Services Americas, and Paul Krochenski, Regional Manager, for a webinar on “A view of Infosim® StableNet® based on criteria and results of the EMA Radar™ on ENAMS: Q3 2014”. This webinar will provide you with comprehensive insight into the strengths and advantages of Infosim® StableNet® based on the following criteria of the EMA Radar™ on ENAMS:

  • Deployment & Administration
  • Cost Advantage
  • Architecture & Integration
  • Functionality
  • Vendor Strength

A recording of this webinar as well as a free download of the “EMA Radar™ for ENAMS Summary and Infosim® Profile” will be available to all who register!

Register today and reserve your seat in the desired timezone:

AMERICAS, Thursday, September 25th, 3:00pm – 3:45pm EDT (GMT-4)
EUROPE, Thursday, September 25th, 3:00pm – 3:45pm CEST (GMT+2)
APAC, Thursday, September 25th, 4:00pm – 4:45pm SGT (GMT+8)

Thanks to Infosim for the article.

Surviving the Supersize

Top convenience store chain triumphs over complete network overhaul – what they learned saves up to $930k per day in lost sales.

Last year alone, total global convenience store sales topped $199 billion, not including fuel. Besides all the soda pop, pizza, and aspirin, today’s chains provide a host of other products and services, including ATM, money order, and wire transfer services. Behind this 24/7 commitment is a lot of infrastructure, not to mention IT teams working to ensure reliable network and application delivery to power these quick and easy retail options.

Recently, a U.S. chain headquartered in the Midwest embarked on a super-sized, multi-year network overhaul that threatened to take their business offline. Besides an upgrade to 10 Gb in their data center and disaster recovery sites, they set about virtualizing nearly 90 percent of their servers and implemented two new business-critical applications. Plus, with the extra bandwidth that 10 Gb provided, they added VoIP as well.

During the 10 Gb upgrade, it was crucial to guarantee visibility into the new applications and evolving technologies, while continuing to provide their end users with the same level of quality service they needed to do their jobs. “Monitoring data is all the same whether it’s on a gigabit or 10 Gb network,” said the LAN/WAN administrator for the convenience store chain. “You need to see it to troubleshoot it.”

To improve data center efficiencies and reduce costs, they first virtualized large portions of their infrastructure. “We wanted to eliminate our physical boxes,” the administrator said. “In addition to obvious infrastructure cost savings, it’s easier to operate in a virtual environment. This is certainly the case with disaster recovery where virtualized infrastructure is much easier to restore.”

Despite the numerous benefits of the virtualized network, however, the loss of visibility became immediately apparent. The network team couldn’t get comprehensive views into the communications between all of their virtual servers. This impacted their ability to provide answers to application designers, leaving them dependent upon the server team for information and troubleshooting. Adding this unnecessary step slowed down problem resolution and meant that the team could no longer rely on their existing monitoring solutions.

SOLUTIONS AT THE SOURCE

When you’re running a business with sales of over $3.4 billion per year, every day of downtime has the potential to impact nearly a million dollars in sales. Already familiar with Network Instruments Observer for network analysis, the IT team purchased GigaStor because of its award-winning forensics capabilities and precision-troubleshooting technology.

The appliance allowed network engineers to rewind network activity to the exact date and time that performance problems occurred, revealing the source with clarity. “Once our team saw how effectively Observer monitored current network activity, we knew we could benefit from the retrospective analysis features of GigaStor,” said the administrator. “It quickly became the key asset for resolving any problems we had. It reduced the number of times the network was blamed, and shaved hours to days off the problem resolution process. We could now show other IT teams everything that occurred, and prove the network was functioning properly.”

To resolve the visibility issue, the network administrator created a SPAN off his vSwitch to mirror virtual communications and push these packets to GigaStor for in-depth analysis. The network team gained full visibility into all virtual networks – and regained network control because they no longer needed to rely on the server team to resolve network problems.
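GigaStor itself is a purpose-built capture appliance, but the underlying idea of analyzing traffic mirrored from a vSwitch can be sketched with open-source tooling. The hypothetical Python/Scapy snippet below assumes the mirrored packets arrive on an interface named eth1 and simply summarizes the busiest virtual-server conversations; it illustrates the concept and is not a description of how GigaStor works.

# Minimal sketch of analyzing mirrored (SPAN) traffic with Scapy.
# The interface name "eth1" and the 60-second window are assumptions.
from collections import Counter
from scapy.all import IP, sniff

conversation_bytes = Counter()

def tally(packet):
    # Count bytes per source/destination pair seen on the mirror port.
    if IP in packet:
        conversation_bytes[(packet[IP].src, packet[IP].dst)] += len(packet)

# Capture mirrored packets for one minute without buffering them in memory.
sniff(iface="eth1", prn=tally, store=False, timeout=60)

print("Top talkers on the mirrored virtual network:")
for (src, dst), total in conversation_bytes.most_common(5):
    print(f"{src} -> {dst}: {total} bytes")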

APPLICATION DENIED

Next on the upgrade agenda was the deployment of new applications. IBM Maximo®, an internally hosted app for asset and inventory management, tracks all in-store, IT, and engineering inventory. With the company’s retail locations relying on Maximo to ensure that store shelves stay stocked, it’s one of the organization’s most essential applications. However, as it was first deployed across the enterprise, the program inexplicably shut down.

The network team immediately encountered issues with the software locking up and what appeared to be a loss of network connectivity. Without accurate inventory management, shelves go empty, sales drop, and so do share prices.

Working with IBM Maximo support, the IT team had to prove that the network functioned properly and pinpoint the actual application error.

“We spent a lot of time monitoring users and servers, while simultaneously setting up tests to verify the network was solid and to diagnose the actual problem,” the network administrator said. “While troubleshooting with GigaStor, we figured out it was a Java® error within the software, but there wasn’t an actual error code for the program to relay the error message back to the end user or the developers. Instead, the process would stop and shut down after it timed out. Once we located this, we turned over the GigaStor capture data to the Maximo team. They were then able to confirm and address the application issue.”

CLOUDY PERFORMANCE

In addition to troubleshooting issues with internally hosted applications like Maximo, the IT team continuously monitors cloud applications. Even though these apps are managed outside of the company’s IT department, user complaints and performance problems are first reported to the network administrator.

This was the case when the new cloud-hosted self-service payroll application began locking up and freezing. Human Resources had automated its payroll functions by shifting to an online HR self-service application. Designated as a business-critical site, the payroll web service was a new resource that the network team needed to monitor vigilantly, because nobody can afford to miss a paycheck.

“Using pings and synthetic transactions, we were unable to get back the requested data from the site,” the administrator explained. “With GigaStor we verified that while our data was going out, we weren’t seeing the expected data coming back. We shared this information with the provider, and they went back and detected an IP misconfiguration issue on their side.”

MAKING UC BETTER

Finally, it was time for the unified communications (UC) portion of the upgrade. The company had implemented a new hybrid Nortel Networks™ PBX system as a part of a new VoIP installation. But as soon as the system was up and running, it began dropping calls. This resulted in an unacceptable disruption in communications.

They used GigaStor to monitor all voice communications for call quality, consistency, and issue resolution in collaboration with the vendor’s support team. GigaStor quickly proved its worth once again.

“We operate a predominantly Cisco® network and run a pure Nortel VoIP system,” the administrator said. “VoIP problems are often blamed on the network, since Cisco doesn’t support Nortel Systems.”

To troubleshoot, the administrator placed a TAP in the VoIP environment where GigaStor collected data – proving that while the network stayed up, the primary PBX switch wasn’t responding during the issue timeframe. Once the network itself was excluded as the problem source, Nortel used the information provided by GigaStor to resolve the simple problem of a faulty switch.

Network Instruments’ GigaStor appliance played a pivotal role for the network team in the ongoing VoIP rollout support and fine-tuning. “We now rely on GigaStor to supply us with the information needed to get Nortel back on track,” said the administrator.

When it comes to super-sized upgrades, using the right network performance monitoring solutions is essential for effective troubleshooting, and it can save time and money. “Since proving the information using GigaStor, everything’s been running problem free. It’s great,” added the administrator.

FUTURE

As network traffic, volume, and demands increase, the convenience store company is looking to expand the amount of data its GigaStor appliances can retain. “As we deploy new applications and technologies, GigaStor will be central to ensuring the successful performance of our network and company,” said the administrator.

Thanks to Network Instruments for the article.