Empower Your Security Monitoring with Deep Packet Inspection

Now you can block network intrusions, prevent data loss, and counter other security threats more effectively than ever before. Deep Packet Inspection (DPI) is a key feature of Net Optics' innovative Director Pro solution that lets you pre-screen and pre-filter traffic with unrivaled precision and policy granularity. It's vital to enterprises, cloud computing providers, and telecommunications operators. DPI supports diverse applications, including:

  • Advanced network management
  • User services
  • Security functions
  • Internet data mining
  • Legal eavesdropping
  • Compliance and governance

Search the Whole Packet and Take Instant Action

As a packet passes a DPI point, it is scanned for protocol non-compliance, intrusions, or any predefined criteria, so you can act instantly to resolve any issues. Unlike shallow packet inspection, which checks only the packet header, DPI lets you search anywhere in the packet based on rules that you set up. Pre-filtering with these rules also reduces the load on analysis devices.
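
To illustrate the difference in plain terms, here is a minimal sketch in Python using the Scapy library, assuming a made-up signature string and a simple port-80 rule; it is not Director Pro's rule syntax, just the general idea of matching anywhere in the payload rather than only in the header.

    # Minimal sketch of shallow vs. deep inspection using Scapy.
    # The signature below is a hypothetical example pattern, not a real rule set.
    from scapy.all import IP, TCP, Raw, sniff

    SIGNATURE = b"SELECT * FROM"   # hypothetical intrusion pattern

    def inspect(pkt):
        if not pkt.haslayer(IP):
            return
        # Shallow inspection: look only at header fields
        header_match = pkt.haslayer(TCP) and pkt[TCP].dport == 80
        # Deep inspection: search anywhere in the packet payload
        payload = bytes(pkt[Raw].load) if pkt.haslayer(Raw) else b""
        if header_match and SIGNATURE in payload:
            print("DPI match: suspicious payload from", pkt[IP].src)

    # Requires capture privileges; inspects the next 100 TCP packets
    sniff(filter="tcp", prn=inspect, count=100)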

DPI is essential to Intrusion Prevention Systems (IPSs), application firewalls, and data loss prevention devices. The ability to pre-filter data enables Director Pro to optimize the performance of all security and monitoring tools. DPI's sophisticated functions help make your entire security strategy more efficient and productive.

Where Has All the Time Gone?

Accurate time of day is important in communications centers and public safety answering points (PSAPs) for time-stamping events. Without accurate time, different pieces of equipment and systems record different times: there is no correlation or certainty of start/end times for events and tasks, and you are left vulnerable to litigation. The province of Quebec, as part of its Civil Protection Act, recently required that all 9-1-1 emergency centers ensure their telephony components and computer systems are synchronized with the official time of the National Research Council of Canada at all times.

Synchronizing all the equipment in an emergency call centre (CAD, radio, VoIP ANI/ALI, voice recorders, etc.) to legally traceable time is critical to improving quality of operations, response times, and documentation for legal proceedings. This is why the trade organization, the National Emergency Number Association (NENA), has defined requirements for synchronized time through the use of master clocks.

The most accurate, secure, and reliable method of synchronizing a local installation is through GPS satellite signals. GPS reception is available everywhere on Earth, and its time accuracy is traceable to national metrology institutes such as the National Research Council (NRC) and the National Institute of Standards and Technology (NIST). Using a GPS timing signal is the first step toward synchronizing the components of your telephony, radio, and computer systems with the official time of the NRC.
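
As a small illustration of the network side of this, the Python sketch below uses the ntplib package to query an NTP time server and report the local clock offset; the server hostname is a placeholder for whichever NRC- or NIST-traceable source your installation uses.

    # Minimal sketch: query an NTP server and report the local clock offset.
    # "time.example.gov" is a placeholder; substitute your traceable server.
    import ntplib
    from datetime import datetime, timezone

    client = ntplib.NTPClient()
    response = client.request("time.example.gov", version=3)

    # offset = estimated difference between local clock and server time (seconds)
    print("Server time :", datetime.fromtimestamp(response.tx_time, tz=timezone.utc))
    print("Clock offset: %+.6f s" % response.offset)
    print("Stratum     :", response.stratum)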

When reviewing components for GPS time, consider these five points:

  1. An integral GPS receiver
  2. A network time server
  3. Timing outputs for other systems and devices in a communications center
  4. A highly secure web browser interface
  5. A backup source in case the GPS signal is lost, disabled, or unavailable

Learn more about systems to deliver Legal-Traceable Time to your center at: http://www.telnetnetworks.ca/en/telnet-networks-partners/partner-spectracom/nctsmc.html

Rogers Increases Profit by 21%

Rogers, Canada’s largest wireless carrier, just reported a 21% increase in second-quarter profits, earning $464 million last quarter versus $412 million for the same period a year ago. The Financial Post reported that Rogers gained 98,000 new customers, 50% fewer than the previous year, which is not surprising given that the market is slowing down and the incumbents Bell and Telus split the remaining share. It would seem that the new wireless entrants into the market, Wind Mobile, Mobilicity, and Public Mobile, have not hurt Rogers at this point.

The biggest area of growth was the continued adoption of smartphones, which allowed wireless revenue to grow by 39%; the associated data plans now make up 35% of Rogers’ revenue.

How Do You Know Your Customer Facing Systems are Performing?

Today’s telecommunications and contact center infrastructures have become extremely complex. As you know, the design and reliability of these infrastructures are critical to delivering the level of customer service your clients demand.

So whether you are installing a new contact center solution or upgrading an existing one, you want to make sure your efforts deliver the best possible customer experience and ROI. If the solution does not work as designed, or customers do not use it as expected, your customer satisfaction and cost savings go out the window.

This white paper, Testing Very Large Contact Center Systems, discusses the steps you can take to be confident that all the integrated elements work together, so you can go live with confidence. Whether you run a small center or a large one, you can make use of these ideas.

Below are a few tools that can help you verify your solutions from the customer perspective.

StressTest will show you how your new applications will perform before they launch.

You can gain the customer perspective with HeartBeat.
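
As a rough sketch of the customer-perspective idea (not the StressTest or HeartBeat products themselves), the Python below fires concurrent requests at a hypothetical customer-facing endpoint and summarizes success rate and latency.

    # Minimal load-test sketch: simulate simultaneous customers against a
    # placeholder endpoint and summarize the results. Not a product example.
    import statistics
    from concurrent.futures import ThreadPoolExecutor
    import requests

    URL = "https://ivr.example.com/health"   # placeholder endpoint
    CALLS = 50                               # simulated simultaneous customers

    def one_call(_):
        try:
            r = requests.get(URL, timeout=10)
            return r.elapsed.total_seconds() if r.ok else None
        except requests.RequestException:
            return None

    with ThreadPoolExecutor(max_workers=CALLS) as pool:
        results = list(pool.map(one_call, range(CALLS)))

    ok = [t for t in results if t is not None]
    print("Success: %d/%d" % (len(ok), CALLS))
    if ok:
        print("Median latency: %.2f s" % statistics.median(ok))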

Understanding Cloud Computing

Open any popular IT publication and you’re bound to see countless articles on how cloud computing is going to change everything from application management to climate change.  With all these claims, it can be hard to understand what cloud computing really is and how it will impact network management.

In this article, published by Network Instruments, we’ll cut through the hype and provide practical advice for monitoring the cloud.

What’s Old is New

The first thing to understand is that we’ve been here before. Terms like “private clouds” refer to existing practices of running applications over the Internet and across WAN connections. Going back even further, you might recall when time-sharing was the prominent model of computing. By allowing a large number of users to interact concurrently with a single computer, time-sharing dramatically lowered the cost of providing computing capability, and made it possible for individuals and organizations to use a computer without owning one.

The Internet has brought the concept of time-sharing back into popularity. Expensive corporate server farms and private data centers can host thousands of customers all sharing the same common resources. “New” business models, such as Software as a Service, have become popular due to the cost/benefit ratio they provide customers.

Cloud Computing Platforms

In addition to the private clouds mentioned earlier, there are three types of public cloud computing:

Software as a Service (SaaS): Rather than buying and internally hosting software, organizations can lease the application as a service through a provider. The application is hosted, managed, and upgraded by the provider, and customers access it via an Internet connection. Examples of SaaS include Salesforce.com and Google Apps.

Platform as a Service (PaaS): Using PaaS, customers lease a platform where they develop their code, and the provider uploads and presents it on the web. Essentially, the organization is leasing raw computer power and the environment to develop applications in a quick and cost-effective manner. Examples of PaaS include Microsoft Azure and Salesforce.com’s Force.com.

Infrastructure as a Service (IaaS): IaaS is a pay-as-you-go model for leasing the computing infrastructure for running resource-intensive applications. IaaS provides extremely flexible scalability for handling sudden application demand shifts. For applications that require a lot of bandwidth and resources but are only used for a short time, this can be an excellent option. Examples of IaaS include Amazon EC2 and Rackspace.
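
To make the pay-as-you-go model concrete, here is a minimal sketch using Python’s boto3 library to start and then release an Amazon EC2 instance on demand; the AMI ID, region, and instance type are placeholder values, and real use requires configured AWS credentials.

    # Minimal sketch: lease IaaS compute on demand with boto3 (Amazon EC2).
    # The AMI ID and instance type are placeholders; credentials must be set up.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Start an instance only when the workload needs it (pay-as-you-go)
    result = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = result["Instances"][0]["InstanceId"]
    print("Started", instance_id)

    # ... run the bandwidth-intensive job, then release the resources ...
    ec2.terminate_instances(InstanceIds=[instance_id])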

Loss of Control
When implementing any type of cloud service, an organization is taking an application that was internally hosted and introducing new players (the ISP and the cloud provider) who now control certain aspects of service delivery and performance. When the application suffers any type of performance delay, or users complain about slow application response, how does the network team deal with these new partners? Be aware of the types of performance reporting the cloud provider offers, and of your monitoring tools’ ability to integrate with these reports. Also, when dealing with an IaaS or PaaS provider, service level agreements (SLAs) become key to having any recourse if you’re plagued by performance issues. In the case of many SaaS vendors, effective SLAs may not be an option.
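
One practical way to keep a provider honest against an SLA is to measure response time yourself. The Python sketch below (the URL and the two-second threshold are hypothetical, standing in for terms from your own SLA) probes a cloud-hosted application and flags responses that exceed the agreed limit.

    # Minimal sketch: probe a cloud-hosted app and flag SLA violations.
    # The URL and threshold are hypothetical; set them from your own SLA.
    import requests

    SLA_SECONDS = 2.0
    URL = "https://app.example.com/health"

    try:
        resp = requests.get(URL, timeout=10)
        elapsed = resp.elapsed.total_seconds()
        if elapsed > SLA_SECONDS:
            print("SLA breach: %.2f s (limit %.1f s)" % (elapsed, SLA_SECONDS))
        else:
            print("OK: %.2f s" % elapsed)
    except requests.RequestException as exc:
        print("Request failed:", exc)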

Loss of Visibility
In addition to giving up control over application performance, organizations need to evaluate whether visibility into and analysis of performance will suffer by moving to the cloud. Are current tools able to provide similar monitoring for connections moving across the WAN and Internet? For monitoring cloud-based activities, the Observer platform provides key metrics for managing performance across the WAN/Internet, monitoring and enforcing SLAs, ensuring application precedence, and pinpointing the cause of delay to the internal network, the ISP, or the cloud provider.
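
As a simple, tool-agnostic illustration of pinpointing delay (not the Observer platform itself), the Python sketch below splits one request into DNS, TCP connect, and server response phases; slowness in each phase points roughly at the internal network, the ISP path, or the provider, respectively. The host and port are placeholders.

    # Minimal sketch: break a request into phases to help localize delay.
    # Host and port are placeholders for a cloud-hosted application.
    import socket, time

    HOST, PORT = "app.example.com", 80

    t0 = time.monotonic()
    addr = socket.getaddrinfo(HOST, PORT)[0][4][0]   # DNS resolution
    t1 = time.monotonic()

    sock = socket.create_connection((addr, PORT), timeout=10)  # TCP handshake
    t2 = time.monotonic()

    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() +
                 b"\r\nConnection: close\r\n\r\n")
    sock.recv(1024)                                  # first response bytes
    t3 = time.monotonic()
    sock.close()

    print("DNS     : %.3f s" % (t1 - t0))   # name resolution (often internal)
    print("Connect : %.3f s" % (t2 - t1))   # network path (ISP / WAN)
    print("Response: %.3f s" % (t3 - t2))   # provider / application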

Although cloud computing might seem daunting, the technologies are largely based upon existing network concepts. Migrating to the cloud requires a shift in mindset, but uses many of the same solutions you have for managing your internal network and infrastructure. As the next step in readying your network and engineering team for the migration, read Network Instruments’ Tech Advisory on Preparing for the Cloud.