The dust around VoIP is starting to settle and best practices for making it work properly are taking shape. Distinct camps have formed with particular approaches to assessing and monitoring networks, plus some interesting hybrid technologies. Instead of comparing apples to orangutans, now you can figure out which technologies fit your needs.
Network management companies have seen the VoIP market as an opportunity to expand and generate more revenue. Some have re-tooled their existing technologies to add the features and capabilities critical to voice. Others have somewhat trivially bolted MOS onto the GUI (based on who-knows-what) and simply renamed their products. In a few cases, unique technologies have been developed that provide alternatives to the tried-and-true and somewhat-worn approaches.
Regardless of whether you are looking for something old, new, borrowed or from IBM, the real question is whether it fits your application management requirements. Sometimes that's hard to tell — there is a tremendous amount of hype and the same verbiage gets used by everyone in an effort to appeal to a marketplace still getting its bearings.
So how can you tell the various offerings apart, never mind if they are appropriate for your needs?
Fortunately, with some disregard for the vendors striving to differentiate themselves, it is possible to lump most of the available solutions into two main categories. These categories may include some interesting variants but most approaches handily fit into them (with a few exceptions — more on those later).
The two main technological approaches to VoIP performance assessment and monitoring are:
1. Traffic simulation – active transmission of synthetic VoIP or VoIP-like traffic between two nodes or agents; the traffic is analyzed to generate characterizations of the network path
2. Traffic monitoring – passive monitoring of actual VoIP traffic at a single point within a network for instantaneous or historical analysis that characterizes the performance of flows at that point
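To make the first approach concrete, here is a minimal sketch of how a simulation-based tool might characterize a path from probe records. The transmission layer (a synthetic UDP stream paced like a voice codec) is omitted; the function name and record format are hypothetical, chosen only for illustration.

```python
# Minimal sketch: characterize a path from synthetic-probe records.
# In a real tool each record would come from a VoIP-like UDP stream
# (e.g. packets every 20 ms to mimic G.711); here we only analyze
# already-collected send/receive timestamps.

def characterize(sent, received):
    """sent: {seq: send_ts}, received: {seq: recv_ts}, times in seconds."""
    lost = [s for s in sent if s not in received]
    delays = [received[s] - sent[s] for s in sorted(sent) if s in received]
    loss_pct = 100.0 * len(lost) / len(sent)
    mean_delay = sum(delays) / len(delays) if delays else float("nan")
    # Jitter here is the mean absolute difference of consecutive one-way delays
    jitter = (sum(abs(a - b) for a, b in zip(delays, delays[1:]))
              / max(len(delays) - 1, 1))
    return {"loss_pct": loss_pct, "mean_delay": mean_delay, "jitter": jitter}

# Example: 5 probes sent at 20 ms spacing; seq 2 is lost in transit
sent = {i: i * 0.020 for i in range(5)}
received = {0: 0.030, 1: 0.051, 3: 0.092, 4: 0.110}
print(characterize(sent, received))
```

A pre-deployment assessment would run many such bursts at varying loads and codecs; a monitoring deployment would run them continuously at low rate between agent pairs.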
Each approach, naturally, has advantages and disadvantages. And here's where you start to see where they might fit in your current network management process.

Traffic Simulation
Software agents, or hardware devices, are placed at two or more points in the network and they run synthetic VoIP traffic between them. The test traffic may be for short, intensive periods (e.g. for pre-deployment assessment) or for longer-term, non-intrusive monitoring of performance. The generated traffic may simulate a range of call loads, codec types, and VoIP processes. It may also include interaction with VoIP devices like handsets or exchange servers.

Pros:
* Very controlled testing methodologies
* Can exercise and monitor critical network paths in detail (e.g. WAN links)
* Can test extreme conditions (i.e. stress testing) before they happen
* Offers very precise MOS estimation in best/average/worst range
* Supports pre-deployment assessment well
Cons:
* Typically requires an agent or hardware device at each node
* Incurs often undesirable deployment overhead and maintenance issues
* Not possible if you don't have access to a remote network
* Can be highly stressful to the network
* May not be viable on a live network
* Typically isn't effective on an "on-demand" basis for troubleshooting
* Identifies poor MOS but typically cannot identify the source of a problem or localize it on the path
* Doesn't include the end-to-end path and may miss behaviors specific to last mile or particular devices
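The MOS figures these tools report are usually estimated from measured impairments rather than scored by human listeners. A widely used basis is the ITU-T G.107 E-model: impairments reduce an R-factor, which maps to MOS. The sketch below uses simplified constants for G.711 in the style of Cole and Rosenbluth's well-known simplification; treat the exact coefficients as an assumption, since production tools tune them per codec and loss pattern.

```python
import math

def r_to_mos(r):
    """ITU-T G.107 mapping from R-factor to estimated MOS (R clamped to 0..100)."""
    r = max(0.0, min(100.0, r))
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

def estimate_mos(delay_ms, loss_ratio):
    """Simplified E-model for G.711 (assumed Cole/Rosenbluth-style constants)."""
    # Delay impairment: linear term plus an extra penalty beyond ~177 ms
    i_d = 0.024 * delay_ms
    if delay_ms > 177.3:
        i_d += 0.11 * (delay_ms - 177.3)
    # Equipment impairment for G.711 under random packet loss
    i_e = 30.0 * math.log(1.0 + 15.0 * loss_ratio)
    r = 94.2 - i_d - i_e
    return r_to_mos(r)

print(round(estimate_mos(0, 0.0), 2))    # ideal path (near G.711's ceiling)
print(round(estimate_mos(150, 0.01), 2)) # 150 ms one-way delay, 1% loss
```

Note that an ideal path tops out around 4.4 rather than 5.0, which is the expected ceiling for G.711; the best/average/worst ranges that simulation tools quote come from feeding measured delay and loss distributions through a model like this.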
Traffic Monitoring

Hardware appliances (or software embedded in network devices) are placed at critical points within the network, such as near a gateway, where they passively collect statistics from the packets that flow past that point. The statistics are analyzed locally or forwarded to a central analysis system. The analyses may be instantaneous and specific to a single VoIP call, or they may be aggregate performance measures, or they may be historical (showing performance over time and current performance relative to historical).

Pros:
* May be very detailed (packet-by-packet, by device) or broadly general as required
* Can identify performance of a specific call(s) related to specific device(s) or path(s)
* Excellent for long-term monitoring of performance
* Precise MOS assessments
* Identifies time-of-day issues well
Cons:
* Limited to a single point on the network (although distributed views can be generated by inference or by aggregating multiple point views)
* Depending on appliance, may not scale well to higher network capacities
* Requires time to develop historical record
* Not useful for pre-deployment
* Doesn't identify the problems responsible for poor performance or localize them on the end-to-end path
* Deployment can be an issue, particularly for points on remote networks or on other networks (such as ISP)
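A passive monitor derives its per-call statistics directly from the RTP headers it observes at its vantage point. For example, interarrival jitter is typically computed with the running estimator defined in RFC 3550. A minimal sketch, assuming a simple list of observed packets (the function name and input format are illustrative, not from any product):

```python
def rtp_jitter(packets, clock_rate=8000):
    """RFC 3550 interarrival jitter from passively observed RTP packets.

    packets: list of (arrival_time_s, rtp_timestamp) in arrival order.
    RFC 3550 keeps jitter in timestamp units; here we convert to seconds
    using the codec clock rate (8000 Hz for G.711).
    """
    j = 0.0
    for (r_prev, s_prev), (r, s) in zip(packets, packets[1:]):
        # Difference in transit time between consecutive packets
        d = (r - r_prev) - (s - s_prev) / clock_rate
        # Running estimate with gain 1/16, per RFC 3550 section 6.4.1
        j += (abs(d) - j) / 16.0
    return j

# Example: 20 ms packetization (160 ticks at 8 kHz), arrivals wobble slightly
pkts = [(0.000, 0), (0.021, 160), (0.039, 320), (0.062, 480)]
print(rtp_jitter(pkts))
```

Because everything is computed from a single observation point, this says nothing about where on the path the variation was introduced, which is exactly the localization limitation noted above.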
There are of course variations on these two approaches that satisfy certain combinations of constraints better (or worse). As an alternative, some of the latest technologies hybridize the two approaches to optimize the advantages of each (for example, use of both agent-based and agent-less implementations). These approaches are harder to pin down and so perhaps deserve their own category. Each one can be measured for fit though according to their inherent advantages as well.
The best choice for your network operation may be found in the answers to the following:

* Are your needs primarily related to pre-deployment assessment, long-term monitoring, or a balance of both?
* Do you prefer to instrument your network or do you operate on-demand?
* Does the solution have to scale to very large networks and/or high capacities?
* Do you rely on networks that you don't own or do you have access to all critical paths and devices?
* Is your network configuration relatively static or highly dynamic?
* Does the solution have to work on live networks?
* Are you likely to need a solution that applies equally well to other application types or is VoIP your only concern?
* Do you arm your users and support staff with self-diagnosing capabilities or do you handle everything at the NOC?
Whether you choose traffic simulation or traffic monitoring or a clever hybrid of the two, the real question is whether the choice fits with your network and your IT work process. If it doesn't fit, it doesn't work – old or new, borrowed or blue.
Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. He is also an Adjunct Professor of Mathematics at Simon Fraser University, where he co-founded the Center for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2. www.apparentnetworks.com