Application Performance Management and the Cloud

The lack of innovation in traditional data centers has given way to rapid development in the cloud, which offers flexible usage models such as Pay As You Go (PAYG) and multi-tenancy, exemplified by Amazon Web Services (AWS). The downside is that as the cloud's capacity grows (400,000 registrations, AWS-Quadnet 2014), it is prone to more blackouts and to greater security and compliance risks than we are led to believe.

The IT environment has become more complex around the cloud. The continued convergence of platforms and technologies has created additional challenges: virtualization of legacy data centers, cloud hosting, software-defined networks (SDN), remote access, mobility (BYOD), and growing volumes of unstructured Big Data, part of which is driven by consumerization and encompasses user-generated content (UGC) such as social media, voice, and video.

The confluence of hardware and software overlaid on an existing network architecture creates architectural complications in monitoring application and network performance, along with visibility blind spots: bandwidth growth across the vertical network between VMs and physical servers, security and compliance protocols for remote and cloud environments, and so on.

This interplay of complexities (packet loss, leaks, and packet fragmentation in a virtualized environment, for example) can lead to delays of more than a few seconds in software performance. The result is brownouts (lag, latency, or degradation) and blackouts (crashes), which are detrimental to any commercial environment; for a retail web UI, a two-second delay in page loads caused by slow DNS resolution is already far too much.

Many of the issues in a virtualized cloud lie in the hypervisor, which regularly changes the IP addresses of VDIs. The real measurement challenge, then, is gaining insight into the core virtualized server environment.

When questioned, 79% of CTOs (InformationWeek study, 2010) cited software as "very important," yet only 32% of APM service providers actually use specialized monitoring tools for the cloud. Without deep insight into PaaS (Platform as a Service) and IaaS (Infrastructure as a Service), there is no visibility into the performance of applications and networks, and tracking degradation, latency, and jitter becomes like finding a needle in the proverbial infrastructure haystack.

The debate surrounding cloud visibility and transparency is yet to be resolved, partly because synthetic transactions, probes, and passive agents only provide a mid-tier picture of the cloud. A passive virtual agent can be used to gain deep insight into the virtualized cloud environment. As the cloud market becomes more competitive, suppliers are being forced to disclose IaaS/PaaS performance data. Currently 59% of CTOs host software in the cloud (InformationWeek, 2011) without any specialized APM solution, so they can only monitor the end-user experience or resource usage (CPU, memory, etc.) to get some idea of application and network performance over the wire.

The imperative is ensuring that your APM provider can cope with the intertwining complexities of the network, application, infrastructure, and architecture. This means a full arsenal of active and passive measurement tools needs to be deployed, whether by a pure-play APM vendor or a full managed service provider (MSP) of end-to-end solutions that can set up, measure, and translate outsourcing contracts and SLAs into critical, measurable metrics. Furthermore, new software and technology deployments can be compared against established benchmarks, allowing business decisions, such as application or hardware upgrades, to be made on current and relevant factual information: business transactions, end-user experience, and network/application efficacy.

The convergence, consumerization, challenges, and complexities based around the cloud have increased. So have the proficiencies of the leading APM providers in dealing with cloud complexity, using agentless data-collection mechanisms such as injecting probes into middleware or using routers and switches embedded with NetFlow data analyzers. The data is used to compile reports and dashboards on packet loss, latency, jitter, and more. The generated reports allow comparisons of trends through semantic relationship testing, correlation, and multivariate analysis, with automated and advanced statistical techniques allowing CTOs and CIOs to make real-time business decisions that provide a competitive advantage.
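As an illustration of the correlation analysis described above, here is a minimal sketch, using hypothetical KPI data and column names rather than any vendor's export format, that computes a correlation matrix across monitored metrics with pandas:

```python
import pandas as pd

# Hypothetical KPI time series as might be exported from an APM
# dashboard; the column names are illustrative only.
kpis = pd.DataFrame({
    "latency_ms":      [42, 45, 51, 49, 88, 90, 47, 44],
    "packet_loss_pct": [0.1, 0.1, 0.3, 0.2, 2.5, 2.8, 0.2, 0.1],
    "jitter_ms":       [3, 4, 5, 4, 19, 21, 4, 3],
    "cpu_util_pct":    [35, 38, 40, 39, 41, 42, 37, 36],
})

# Pairwise Pearson correlation: KPIs that spike together (for
# example, loss and jitter) hint at a shared root cause.
print(kpis.corr().round(2))
```

Strongly correlated pairs are candidates for the kind of multivariate follow-up the article mentions; weakly correlated ones suggest independent problems.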

Thanks to APMDigest for the article.

BYOD Monitoring

A Corporate Conundrum

With possession being nine-tenths of the law, the bring your own device (BYOD) trend creates a conundrum for enterprises. BYOD is the policy of allowing employees to bring personally owned mobile devices to their place of work and use them to access company resources such as email, file servers, and databases. It is also fairly common to mix personal and professional data on a single device or across multiple mobile devices.

BYOD is becoming increasingly prevalent in enterprises, as employees enjoy using their familiar technology of choice over corporate-mandated equipment. But since employees actually own the devices they’re using to perform corporate work and send or receive business-related communications, how can IT control the security and performance of corporate applications and assets?

There are more questions than answers currently as IT struggles to deal with the impacts of BYOD. The move away from standard, corporate-controlled endpoints is fraught with peril.

BYOD Challenges

  • With employee-owned devices, the amount of control IT has over the endpoints is a gray area. Can IT monitor them, or does monitoring violate employee privacy? Can IT take action to protect a device without the employee's permission?
  • Privacy rights of the employee are extremely sticky when dealing with BYOD, especially in certain parts of Europe where employers are subject to strict privacy laws.
  • When an employee-owned device is lost or stolen, does IT have the right to remotely wipe the device? What about personal data the employee has on the device?
  • With BYOD, instead of IT worrying about one device per employee, a single employee might use 2-3 or more devices to access corporate resources.
  • It should be assumed that BYOD endpoints are security risks, due to a lack of corporate control over the devices.
  • BYOD users expect the speed and performance they are accustomed to on their local desktops, so IT planning for sufficient capacity is key. SLAs must be defined for the BYOD infrastructure, as well as a centralized management capability.
  • A successful BYOD strategy must also take compliance under consideration and build in the auditing and reporting capabilities that are crucial to compliance.

The Ixia BYOD Solution

The Ixia BYOD solution is an essential element of a BYOD strategy. We help enterprises that are planning or already maintaining BYOD programs by remediating the security and performance impacts that uncontrolled endpoints have on corporate networks.

With Ixia’s BYOD solution, you can monitor the corporate network actively, with no sacrifice of network access for your security and performance tools. Our BYOD line:

  • Protects corporate IT assets responsibly
  • Aggregates, filters and replicates traffic so all security tools get the right data
  • Increases monitoring tool performance and improves tool accuracy
  • Speeds incident remediation, delivering granular access control to network data and automated responses for adaptive monitoring when anomalous behavior is detected
  • Reduces exposure of sensitive data with filtering and stripping capabilities

Ixia enables real-time monitoring to address critical business needs at gigabit speeds, while providing insights and analysis at a sub-minute level. We provide the application-specific intelligence that's critical to timely root cause analysis for BYOD security, including identification of actual user names, individual VoIP calls, and deep visibility into email traffic. With a near real-time and historical view of key performance indicators (KPIs), including traffic volume, top talkers, application and network latency, and application distribution, IT can monitor bandwidth usage and acquire the information needed to quickly resolve application performance issues. IT can also perform capacity planning and trend analysis to see how the BYOD program affects the baseline of network resources.
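As a simple illustration of one such KPI, a "top talkers" report can be derived from flow records by summing bytes per source address. The sketch below uses hypothetical flow tuples rather than any Ixia-specific format:

```python
from collections import Counter

# Hypothetical flow records: (source IP, destination IP, bytes).
flows = [
    ("10.0.0.5", "10.0.1.9", 48_000),
    ("10.0.0.7", "10.0.1.9", 1_200_000),
    ("10.0.0.5", "10.0.2.3", 300_000),
    ("10.0.0.9", "10.0.1.9", 75_000),
]

# Sum bytes per source and report the heaviest senders.
talkers = Counter()
for src, _dst, nbytes in flows:
    talkers[src] += nbytes

for src, total in talkers.most_common(3):
    print(f"{src}: {total:,} bytes")
```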

Related products

  • Net Tool Optimizers: out-of-band traffic aggregation, filtering, dedup, load balancing
  • Net Optics Network Taps: passive network access for security and monitoring tools

Thanks to Ixia for the article.

Application Intelligence: THE Driving Force In Network Visibility

Business networks continue to respond to user and business demands such as access to more data, bring your own device (BYOD), virtualisation, and the continued growth of the Internet of Things.

Historically, much of the traffic running through these networks has been visible to network administrators, but access to application and user data remains lacking.

Application intelligence – the ability to monitor application flows based on application type – provides the insight that is desperately required to get more visibility into what is happening on networks.

Application intelligence can dynamically identify all applications running on a network. In addition, well-designed application intelligence solutions generate a wealth of information, such as geo-location data, network user types, device types, operating systems and browser types.

The key to success is integrating application intelligence into enterprises' purpose-built monitoring tools without overwhelming existing processes. Offloading the packet processing required to generate this application intelligence to dedicated hardware visibility solutions lets the monitoring tools work better and deliver deeper insight into network anomalies, problems, and concerns.
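To give a feel for what "identifying applications" means at the simplest level, the sketch below maps flows to applications by well-known port. This is a naive stand-in for the deep packet inspection and behavioral analysis that real application intelligence solutions perform:

```python
# Naive port-based classification; production application intelligence
# relies on deep packet inspection and signatures, not just ports.
WELL_KNOWN = {80: "HTTP", 443: "HTTPS", 53: "DNS", 5060: "SIP", 3389: "RDP"}

def classify(flow):
    """Guess the application for a (src_port, dst_port) flow tuple."""
    src, dst = flow
    return WELL_KNOWN.get(dst) or WELL_KNOWN.get(src) or "unknown"

flows = [(52311, 443), (53, 41022), (61000, 5060), (40000, 40001)]
print([classify(f) for f in flows])  # ['HTTPS', 'DNS', 'SIP', 'unknown']
```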

Network visibility: the paradigm shift

The ubiquity of mobile computing in everyday life means that the use of networks, network access, and applications running over networks has risen exponentially. The huge challenge facing network managers and operators is how to effectively monitor the performance, incidents, and problems that come with an increase in applications and services traveling across networks.

In addition, today’s network security threats are big business – motivated by financial gain and much more sophisticated, prevalent and insidious than in the past. There are now whole communities dedicated to the sole purpose of cracking network security, many of which have gained international notoriety.

IT security professionals are struggling to keep up with the ever-escalating war between those trying to break in, and those trying to keep them out. As a result, organisations need to increase the effectiveness of network monitoring and network security by using the following application intelligence controls.

Profile the network

A network profile is an inventory of all the assets and services using the network. As the profile changes over time, network operators and defenders can monitor for emerging concerns. Most modern data-centre applications demand high communication performance.

However, these applications often experience low throughput and high delay between the data centre, users, and the back-end servers that perform other operations. Application intelligence can help profile a network by identifying all applications, spotting performance issues across the network, and showing how application traffic affects overall network performance.

Network spikes

One of the most common things that can kill network performance is a huge spike in traffic that overwhelms resources. These types of events can slow down or even disable an otherwise functioning network.

With application intelligence, monitoring tools can observe sudden spikes in a specific type of application traffic and then take action to either mitigate the effect or alert the right people to address the issue. Armed with this knowledge and the right monitoring systems, IT and enterprises can prevent localised or global outages, especially in mobile service provider environments.
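As a rough illustration of this kind of spike detection, the following sketch flags sudden surges in a per-application traffic counter using an exponentially weighted moving average; the thresholds and sample data are hypothetical:

```python
# Minimal spike detector for a per-application traffic counter.
# The smoothing factor, trigger factor, and data are hypothetical.

def detect_spikes(samples, alpha=0.3, factor=3.0):
    """Yield indices where a sample exceeds `factor` times the EWMA."""
    ewma = samples[0]
    for i, value in enumerate(samples[1:], start=1):
        if value > factor * ewma:
            yield i  # candidate spike: alert or mitigate here
        ewma = alpha * value + (1 - alpha) * ewma

# Bytes per minute for one application (e.g. streaming video).
traffic = [120, 130, 125, 140, 135, 900, 880, 150, 145]
print(list(detect_spikes(traffic)))  # -> [5]
```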

BYOD effects and issues

One of the biggest issues facing network operators in the age of mobile devices is the BYOD phenomenon. Unregulated devices suddenly linked to your network and using it in ways that are unauthorised, or just unexpected, can wreak havoc on network performance.

Application intelligence allows you to use operating system information to troubleshoot and predict BYOD effects. By collecting information about the browser types used for each application, businesses can understand the impact of devices and trends in user behaviour. Organisations can capture rich user and behavioural data about the applications that are running, and determine how, when, and where users are employing them.

Capacity planning

Planning for your network capacity can be the difference between a smoothly functioning network and a disastrous mess of a network. Application intelligence can solve this problem by providing the exact data you need – who is using the network, what applications are being run, and from what location they are being accessed.

Good application intelligence also provides geo-location of application traffic to see application bandwidth and data distribution across the network. With the right tool, geo-location information allows identification beyond country, county and town, right down to neighborhood locations.
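For illustration, a client-side geo-location lookup can be done with MaxMind's geoip2 library, as sketched below; the database path and IP address are placeholders, and commercial application intelligence products perform this enrichment inline on the traffic itself:

```python
import geoip2.database  # pip install geoip2; requires a GeoLite2 database file

# Path and address are placeholders for illustration only.
with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    resp = reader.city("203.0.113.42")
    print(resp.country.name, resp.city.name,
          resp.location.latitude, resp.location.longitude)
```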

Filter for specific information

The biggest variable in a network is its users. They are the ones who create the demand for resources, the traffic flows, and the security threats that plague network operators on a daily basis.

Application intelligence allows network operators to audit for security policy infractions and verify that network users are following set policies. It also allows for protection against known bad websites.

Avoid the application tsunami

Getting an accurate picture of what is happening in the network in real-time, and understanding exactly what is causing it, allows a network operator to turn a potential network disaster into a mere nuisance.

Application intelligence allows a savvy network operator to prepare for network “tsunamis” from specific applications or events – setting up alerts or actions ahead of time.

The real role of application intelligence

More and more people are using networks for more and more functions; networking is a deeply interwoven part of our everyday lives. With this use, however, come increased demands and needs. Application intelligence helps you get the right alert at the right time, with no alert storms that leave you guessing about the real problem.

Today, network operators must monitor all aspects of their networks to maintain functionality. That includes monitoring applications along with the critical parts of application delivery, for example, servers and services that are used across the network.

Recognising and reacting to easily identifiable, trouble-making applications can mean the difference between functioning and failing. Operators must proactively head off application issues with careful capacity planning.

Roark Pollock is vice president of visibility solutions at Ixia

Thanks to ITProPortal for the article.

State Of The Network: Analyst Jim Frey’s View/ Q&A

JDSU's Network Instruments business released its seventh annual "State of the Network Global Study." From the findings, gathered by polling 241 network professionals, a provocative theme emerged: more than 80 percent of respondents viewed software-defined networking (SDN) as unimportant or were just going to "ride out the hype," likening it to "a road trip without a map."

A leading authority in the enterprise network management space, with long experience covering this sector, is Jim Frey, vice president at Enterprise Management Associates (EMA), which conducts IT and data management research, industry analysis, and consulting.

Inspired by the recent study, I reached out to Jim – who has briefed regularly with the Network Instruments team – for his take on the state of network today as he sees it, and his view of where the market is headed.

Q: Jim, what do you feel are the top priorities – opportunities and/or challenges – for the enterprise sector at a time when SDN and Big Data are all the rage?

Looking at where SDN is in its maturation, the timing is still very early and it is not having a big impact yet. Big data – the collection, analysis and sharing of data by network management systems – is rapidly growing in importance as a company asset. Here is what I see as top priorities and megatrends as SDN and big data emerge within larger enterprises.

First, cloud and virtualization adoption rates are very high and topics to keep an eye on. The cloud is enabled by virtualization and that dynamic creates added challenges for performance monitoring. The ability to understand how services perform when deployed in the cloud, especially from an end-user perspective, makes visibility a critical – and challenging – aspect of effective resource monitoring and management.

In addition, making sense of the large amounts of network monitoring data and what can be done with it are key. In particular, we see network teams taking a more proactive approach to handle performance challenges as they find hidden trends and relationships between application and infrastructure performance using the latest big data analysis techniques.

Lastly, I'd say another big trend taking hold is the ongoing market transition from separate silos or functional areas in IT operations towards more converged operations and IT organizations. This has some direct impact on the choice of network management tools and how they get used over time.

Q: The "State of the Network" study spotlighted that a fair number of network managers and engineers agree that SDN is a "road trip without a map." Were you surprised to see that finding?

This is very consistent with the conclusion that I have had in my own research. I’ve found that only 20 percent have taken part in any degree of SDN deployments. And, less than 10 percent are in full deployment. The remaining are at the early stages of research, testing, or evaluation with many network managers still trying to figure out where and how to use it. They are asking: “Is there enough pain in my current network to warrant an alternative like SDN?” Most organizations – most mainstream enterprises – would answer, “No, not enough to warrant embracing these relatively immature technologies at this time.”

Important to note is the fact that there are two main types of SDN. One type is the overlay network, which is purely software/virtual; the other is OpenFlow-based, which we call an underlay SDN. Of the two, the virtual overlay is a more natural extension of existing virtual networks, as it uses tunneling and the existing network, so no new physical infrastructure is required. However, because it is typically managed by system administrators with little input from the network team, there are two areas of potential challenges. The first is to address network capacity planning and the second to ensure visibility into the encapsulated, tunneled traffic for satisfactory service delivery monitoring.

Q: A big takeaway seems to be that, now more than ever, service providers and enterprises must be equipped with technology that provides key visibility and troubleshooting. How important are these capabilities for enterprises and service providers in delivering quality services and building reliable networks?

Absolutely. As you can tell from my prior answers, visibility has never been more important. In the current environment there are more layers of technology and more dynamic aspects that come with virtualization, and with that comes a greater need to understand what's going on and to piece together the full story. For example, how is that app being delivered? From where it's being hosted to the consumer, what's happening along the way to make it work? And how are the quality and efficiency, and what is the experience of the consumer or end user?

The picture is getting a whole lot more complex. All of the trends we have addressed (virtualization, SDN, big data, cloud) drive a heightened need for deep and accurate visibility.

Thanks to Network Instruments for the article.

Cloudy With a Chance of Dropped Packets

The topic of network visibility is still hot due in part to increasing data bandwidth requirements and network processing speeds. For instance, in a recent survey by Enterprise Management Associates (Network Visibility Controllers – Best Practices for Mainstreaming Monitoring Fabrics), 67% of respondents have or will convert to 10GE for their data centers within a year, 52% have or plan to migrate to 40GE within the year, and 40% have or plan to migrate to 100GE within the year.

One of the negative results (and it's a big one) of increasing data network bandwidth is that it creates network monitoring tool overload for the IT department. Because of investment protection concerns, many businesses are still using monitoring and security tools that were built for 1GE. When these tools are used in higher-speed networks, they quickly become overloaded and useless. Even if cost were not a problem, 40GE and 100GE tools are less readily available in the marketplace. Altogether, this creates a very dismal forecast, with limited visibility and a strong chance that your network will drop packets before they are properly processed by your monitoring and security information and event management (SIEM) tools.

With technology needs increasing and tight budgets persisting, there are no signs of the situation letting up any time soon. VoIP, video, cloud, virtualization, BYOD, new wireless initiatives, and other key business applications are saturating critical network services. These evolving network technologies are creating discontinuities within the data network. At the same time, users and businesses are becoming increasingly reliant on 99.999% data network uptime and cannot tolerate network problems for mission-critical services.

This is where the Ixia Anue NTO network monitoring switch can really shine for businesses. The patented dynamic filtering capability in our monitoring switch gives companies exactly what they want (and need). The advanced filtering means the NTO can accept 10GE, 40GE, or even 100GE traffic and throttle or filter the content so that downstream monitoring and security tools don't get overloaded (even if they are 1GE tools). All the data needed is sent to the right tools at the right time.

Thanks to Ixia for the article. 

Don’t Deprive Your Mobile Workers Of UC

The bring your own device movement is evolving rapidly, and companies that neglect to optimize their enterprise mobility strategies may fail to keep pace with forward-thinking competitors. The most effective BYOD programs provide employees with the tools required to enhance productivity, improve communications, and collaborate with coworkers in disparate locations. Companies can benefit tremendously by enabling a mobile workforce, but they must also adapt their strategies to accommodate BYOD participants' needs. The adoption of a unified communications (UC) program is a good place to start.

Decision-makers are realizing the need for UC

It seems that many IT leaders are now recognizing the importance of a UC solution, albeit rather slowly. A new Evolve IP study, which surveyed 974 IT and executive decision-makers, found that 84 percent of organizations that do not currently have a UC strategy are considering or planning to implement one within the next one to three years. The study also examined the link between UC and BYOD, finding that 60 percent of companies that are leveraging UC services also have a work-from-home program.

Of the various UC services available, video conferencing seems to be a particularly hot commodity. The study revealed that 72 percent of organizations are using some form of video, whether a large conferencing system or a one-on-one desktop solution. Additionally, audio and web conferencing was found to be the most requested UC feature, with unified messaging and instant messaging and presence ranking second and third, respectively.

Meanwhile, a recent “Technology Trends 2014” study from Computer Economics identified UC as one of six “low-risk, high-reward” enterprise technologies, according to Redmond Magazine. Computer Economics’ John Longwell asserted that UC is becoming a more prominent feature in companies’ infrastructures. This should come as no surprise given the growth of BYOD and the need for firms to maintain control over enterprise mobility.

Why your BYOD program needs UC

By adopting UC solutions, organizations can mitigate many of the risks associated with BYOD. For instance, UC services such as VoIP help businesses ensure that their ever-expanding mobile workforce stays connected. With VoIP features like voicemail-to-email, employees can integrate multiple communications systems into one cohesive interface. A comprehensive UC suite allows businesses to improve response times, increase agility, and maximize the benefits of their BYOD programs.

Thanks to TEO Technologies for the article.

Network Instruments Seventh Annual State of the Network Global Study

Network Instruments, a business unit of JDSU (NASDAQ: JDSU), has released the results of its Seventh Annual State of the Network Global Study today. Highlighted in the survey of 241 network professionals is continuing uncertainty surrounding emerging Software-Defined Networks (SDN), which many network administrators view as a "road trip without a map." While 12 percent of network managers and engineers regard SDN as critical, the rest seem divided as to whether it is important at all, with more than 80 percent of respondents viewing SDN as "unimportant" or just wanting to "ride out the hype."

What’s trending? The results suggest network teams are investigating and adopting emerging technologies like SDN and 40 Gigabit (Gb) Ethernet networks on a limited basis, while technologies such as Unified Communications (UC) and Bring Your Own Device (BYOD) are approaching the threshold of mainstream adoption, tipping in at more than 50 percent implementation by year’s end.

Study Highlights:

  • SDN is emerging in the enterprise: One in five of the organizations surveyed plan to have deployed SDN by the end of 2014.
  • Network experts are having trouble defining SDN: Nearly 40 percent of respondents consider SDN to be undefinable.
  • Network bandwidth continues to surge: By 2015, 25 percent of organizations expect their bandwidth demand to grow between 51 and 100 percent, while 12 percent predict growth of more than 100 percent.
  • Growing 40 Gb adoption: By 2015, 25 percent will have implemented 40 Gb data rates in their enterprise networks.
  • Unified Communications has gone mainstream: UC apps, including videoconferencing, are now widely deployed, with 63 percent of respondents having implemented videoconferencing in 2014 compared to only 25 percent in 2009.
  • User experiences with Unified Communications are unclear: More than 50 percent of respondents indicated a lack of visibility into the user experience as their top UC management challenge.
  • Application angst: 74 percent cited determining the root cause of network performance problems as their top application troubleshooting challenge.

“As with any emerging technology, IT management is grappling over the definition of SDN, as well as its benefit and importance to the organization,” said Brad Reinboldt, manager of Product Marketing for Network Instruments. “As network professionals come to terms with SDN and its relevance, they continue to juggle multiple major initiatives including Big Data, UC, BYOD and 40 Gb deployment. As IT continues to roll out bandwidth-hungry applications to keep pace with the needs of a global, mobile workforce, they lag in the visibility and troubleshooting technologies needed to monitor their burgeoning networks.”

Software Defined Networks

With a wide range of definitions for SDN, the top drivers behind SDN adoption were the need to improve the network’s ability to dynamically adapt to changing business demands (48 percent) and to deliver new services faster (40 percent). Others indicated lowering operating expenses, decreasing capital expenses, improving the ability to provision network infrastructure, and designing more realistic network infrastructures as reasons behind SDN deployment.

Unified Communications

Significant adoption of the many communication applications making up UC has occurred during the past five years: 71 percent have now deployed voice-over-IP (VoIP), compared to 45 percent in 2009. Likewise, 63 percent have rolled out videoconferencing, compared to 27 percent in 2009, and nearly half now use instant messaging, compared to only 27 percent in 2009.

While UC applications can now be considered mainstream, adoption of application monitoring lags behind. More than 50 percent of the participants indicated that lack of visibility into the user experience was their top UC management challenge. This was followed by difficulties assessing bandwidth usage, at 39 percent, and an inability to assess UC deployment impact, cited by 38 percent of respondents.

Application Angst

In terms of organizational bandwidth demand, three-quarters of respondents expect increases of up to 50 percent in 2014. In 2015, this bandwidth surge continues unabated, with one-quarter of respondents expecting demand to increase between 51 and 100 percent, and 12 percent forecasting their organization's demand to exceed 100 percent. As applications and networks grow in complexity, the ability to resolve performance problems worsens: 74 percent indicated that their largest application troubleshooting challenge was isolating the source of the problem, a 6 percent rise over last year's results.

State of the Network Global Study Background

The State of the Network Global Study has been conducted annually for seven years. This year, Network Instruments engaged 241 network professionals to understand and quantify new technology adoption trends and daily IT challenges. Respondents were asked, via a third-party web portal, to answer a series of questions on the impact, challenges, and benefits of SDN, UC, Big Data and Application Performance Management. The results were based on responses by network engineers, IT directors, and CIOs from around the globe. Responses were collected from January 10, 2014 to March 7, 2014.

Thanks to Network Instruments for the article.

How To Ensure That Your VoIP Setup Delivers ROI

The principles of managing VoIP performance

The use of VoIP in the corporate world is growing rapidly, driven by a combination of increasingly mature technology and a desire to reduce costs.

A single network infrastructure should enable organizations to reduce capital expenditure and create a more homogeneous environment that is easier to maintain, monitor and manage.

However, using the network to transport voice as well as data naturally reduces the amount of traffic it can support.

Moving to VoIP gives excellent results when executed properly, but requires careful planning and the right tools to avoid poor performance and reduced efficiency. If call quality is poor, users simply won’t use it.

Planning a VoIP Implementation

There are two key points to consider when planning a VoIP implementation: the increasing capacity demands, and the nature of packetized voice traffic, which affects both voice quality and bandwidth use.

All packets are subject to latency, jitter, and loss as they traverse the network. Data packets use TCP, which is connection-oriented. If there is a delay, or receipt is not acknowledged, the protocol times out and the data is resent, so such events go unnoticed or have minimal impact.

In contrast, VoIP utilizes UDP, which is inherently connectionless. If a packet is lost, or delivery is taking too long, the sender has no mechanism to resend or adjust the rate at which data is sent.

Packet loss of more than 5% will start to affect voice quality. As a result, latency, jitter, and packet loss can have a devastating effect on call quality, rendering conversations unintelligible.
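To make "jitter" concrete, here is a minimal sketch of the interarrival jitter estimator defined in RFC 3550 (the RTP specification), using hypothetical timestamps; real VoIP monitors compute this continuously from RTP packet headers:

```python
# Interarrival jitter estimator from RFC 3550, section 6.4.1.
# Timestamps are hypothetical and in milliseconds.

def rtp_jitter(send_times, recv_times):
    """Return the smoothed interarrival jitter for a packet stream."""
    jitter = 0.0
    prev_transit = recv_times[0] - send_times[0]
    for s, r in zip(send_times[1:], recv_times[1:]):
        transit = r - s
        d = abs(transit - prev_transit)
        jitter += (d - jitter) / 16  # smoothing factor from the RFC
        prev_transit = transit
    return jitter

send = [0, 20, 40, 60, 80]   # packets sent every 20 ms
recv = [5, 26, 49, 66, 101]  # arrivals with variable delay
print(f"jitter = {rtp_jitter(send, recv):.2f} ms")  # ~1.33 ms
```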

VoIP's inability to adjust to network conditions also means that it consumes its bandwidth regardless of congestion. TCP can and will back off, so if VoIP is using a large proportion of the bandwidth, TCP traffic will see low availability and applications will slow down.

Adding a significant number of VoIP users can impact utilization of network segments, reducing both voice call quality and the speed of standard TCP applications.

Quality of Service (QoS)

Solving this and preserving the integrity of voice calls requires two different classes of service. Most networks use QoS technologies to protect and prioritize VoIP traffic, tagging it at the device level with a queue marker (a Differentiated Services Code Point, or DSCP) and setting parameters for how devices in the network treat it.

VoIP traffic is usually given top priority in being forwarded through each device, along with some type of rate limit to ensure data applications continue to perform at the levels users expect.
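As a concrete illustration, an application can request DSCP marking of its own packets through the IP TOS socket option, as sketched below; the destination address and port are placeholders, the option works on platforms that support IP_TOS, and whether the marking is honored depends on the QoS policy configured on the devices along the path:

```python
import socket

# DSCP occupies the upper 6 bits of the IP TOS byte.
DSCP_EF = 46           # Expedited Forwarding, the class typically used for voice
TOS_EF = DSCP_EF << 2  # -> 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Destination is a placeholder; RTP media would flow here.
sock.sendto(b"voice payload", ("192.0.2.10", 5004))
```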

Engineers can also analyze management information to help them adjust the interaction of VoIP with the infrastructure.

They can identify issues such as a lack of bandwidth in specific segments, determine whether applications such as file sharing or streaming media are impeding VoIP performance, and decide how traffic should be shaped to prioritize the most business-critical applications.

Thanks to TechRadar for the article.

From BYOB to WYOD: How Wearables Will Transform Business

Over the past few years, organizations worldwide were forced to deal with an IT "problem" referred to as BYOD (Bring Your Own Device). It started with smartphones, and now it's occurring with other devices as well.

Here's what happened: Most large organizations, as well as midsize and even smaller ones, required their people to have a BlackBerry. By combining a cell phone with a secure, enterprise-level email system, BlackBerry changed how we use cell phones and took mobile working to a new level. When the Apple iPhone came out, it went further, transforming the mobile phone into a handheld multimedia computer. As useful, inexpensive, and easy-to-install mobile apps took off, it didn't take long for employees at all levels to discover the power of this transformative tool. It soon became common to see people with two phones: the BlackBerry because they had to, and the iPhone because they wanted to. As soon as Google, Microsoft, Samsung, and a host of others got into the smartphone and tablet business, BYOD became unstoppable.

Because of the rapid rise of the BYOD trend, the vast majority of companies found themselves reacting and putting out fires. Before BYOD, IT could control the use of technology. They’d issue the device and it would be locked down with the corporate IT firewall. But with more and more employees bringing in their own devices, IT’s efforts to keep a secure environment became almost impossible amongst the ever increasing number of devices they had little control over. In short, BYOD created a major IT problem, not just in the United States, but also in Europe and Asia.

Today, we have a new impending IT crisis, as well as an opportunity, that's very predictable. Soon we will all be dealing with WYOD (Wear Your Own Device). Consider this: it's estimated that one million wearable devices will ship by the end of 2014, and that 300 million will ship by 2018; I think that number will be far greater. That's a lot of wearable devices. If people brought their own portable devices into the office and wreaked havoc on IT, you can bet the same will happen with wearable technology, such as Google Glass, smart watches, and other types of computing devices you can wear, including the screenless smartphone I've written about in the past.

Therefore, I'm suggesting, from an organizational standpoint that includes business, government, and education, that everyone develop a WYOD strategy immediately, before the predictable problem hits. Instead of putting out fires as we did when the BYOD crisis hit, we can turn the impending WYOD crisis into an opportunity. Let's develop guidelines right now and take what I call a "preactive" approach, meaning we take pre-action ahead of a known future event.

To start looking at WYOD as an opportunity rather than a crisis, let’s start asking some new questions, such as: “How can we use smart watches with our sales team?” “How can we use Google Glass with all of our people who need to access data while standing up and moving around, like maintenance and customer service?” “How can we use the new wearable technologies to do things that we couldn’t do before, to increase productivity and efficiency?” “What wearable technology purchasing guidelines should we send to our employees?”

Wearable devices are here and they’ll only gain popularity as time goes on. Therefore, get ahead of the curve and harness the opportunities. As I always say, if it can be done, it will be done. The only question is, who will do it first? Now’s your chance to harness the opportunities before your competitors do.

Thanks to Daniel Burrus of Burrus Research for the article.

5 strategies for post-holiday BYOD problems

Employees’ new mobile devices could cause the age-old security versus productivity debate to resurface

Christmas is fast approaching, and once the office is back to normal after the first of the year, employees will return with several shiny new gadgets, along with the expectation that they'll "just work" in the corporate environment. Security will be a distant afterthought, because it's still viewed as a process that hinders productivity.

The back and forth over whether security helps or hurts productivity is a battle that existed long before the mobile device boom, and it will exist long after the next big technological thing arrives. But the fact remains that security is essential to operations.

Analysts from Frost & Sullivan have estimated that the mobile endpoint protection market will reach one billion dollars in earned revenue by 2017, a rather large number given that last year the market was worth about $430 million. The reason for the large projection is simple: mobile is the new endpoint, and everyone has one.

Laptops, tablets, and smartphones enable employees to work anywhere, at any time, so organizations have had to adapt in order to protect them and the sensitive data they access. However, Frost & Sullivan believes that businesses severely underestimate the risk presented by mobile devices.

CSO recently spoke to Jonathan Dale, the Director of Marketing at Fiberlink, a mobile management and security firm recently acquired by IBM. He offered some suggestions for IT and Security teams that are gearing up to deal with the influx of new devices that’ll soon appear on the network.

Educate:

It's going to happen. According to the Consumer Electronics Association, tablets, laptops, and smartphones are the top gifts this holiday season for adults. Those gifts will show up on the network the moment employees return from holiday break. So it makes sense to remind employees of the corporate policies and rules that govern mobile devices and their usage as it relates to work.

If the company has a mobile management product in place, Dale said, make sure to send employees enrollment instructions before they leave for the holidays and after they return.

"It doesn't matter if it's a new Kindle or one of the latest tablets from Samsung or Apple. The business side of getting a new device starts with enrollment. Make sure it's clear that the link is for all new devices employees plan to use to access corporate resources," he said.

Do a policy check:

Now would be a good time to ensure that personal device usage policies, as well as policies governing devices issued by the business, are not only current, but also meet the organization’s security needs.

"Are you protecting the important stuff properly? Are your passcode policies applied properly? Are you forcing encryption on Android devices that support it and blocking the ones that don't? Ensure the policies are where they should be," Dale said.

FAQ the basics:

Again, you can’t stop it: Personal devices will arrive after the holidays. Make things easier on the helpdesk, and when the policy reminders are sent, include the steps needed to enable Wi-Fi for iOS and Android, and basics like the SSID information or help on connecting automatically.

This, in theory, will cut down on the number of helpdesk requests related to making things “just work.”

Prep supported apps:

"What better way to welcome the arrival of a new device than with a supported list of apps. Once an employee enrolls a device, IT can automatically push all the corporate apps they need. To wow employees even further, place a set of games and public apps in their supported app store," Dale said.

Privacy:

Finally, make sure that when the policy reminders go out, employees are clear on what parts of the device the company will have access to and what can be done with that access.

"Privacy is a major part of a successful BYOD program. There are several options, so know what abilities you as IT have and figure out what works best for your company culture or CEO," Dale added.

Thanks to NetworkWorld for the article.