
Protocol interoperability and VoIP performance

Learn how protocols interact and affect performance in Voice over IP (VoIP) installations.

Voice over IP (VoIP) is a complex bit of technology. It's defined, as are all technologies, by protocols and standards....

This tip, excerpted from InformIT, discusses the protocols that are used in VoIP and offers some hints for improving performance, depending on the protocols in use.

Bill Douskalis is the author of Putting VoIP to Work: Softswitch Network Design and Testing.

Engineers have often asked, "Do we really need all these VoIP signaling protocols to replace the loop-start/ground-start signaling mechanism of the PSTN?" The major ones that have survived and are finding acceptance in implementations are MGCP, SIP, H.323, and Megaco/H.248. Another question that is often asked is, "How can we pin down the performance of a platform in the design phase if we don't know the call mix and the protocols that will be involved in the signaling process?"

First, let's address the question about the plethora of VoIP protocols. Each protocol offers the promise of integrated voice, video, and data (such as 24 x 7 Internet access) over the same set of wires, and each one represents a school of thought in signaling architecture. SIP, H.323, MGCP, and Megaco all address these requirements from a different perspective. SIP and the streamlined H.323v4 are intelligent but have had to be modified since their earlier incarnations to accommodate the stringent requirements of replacing the basic telephony service and feature support from local service providers. The term basic telephone service is key because it implies conformance with the federal and local regulations that govern the behavior of the service that we receive from the PSTN LECs. MGCP and Megaco address the same requirements, but at a much lower level; they utilize a "decomposed" gateway model, whereby the endpoints (telephones) are pretty dumb and all the intelligence is located in the media gateway controllers in the service provider's domain. As such, the number of signaling exchanges to complete a basic call with MGCP and H.248 is greater than the number in SIP and the streamlined H.323.
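To make that last point concrete, the sketch below compares illustrative message counts for a basic two-party call under SIP and under MGCP's decomposed model. The specific sequences are simplified approximations for illustration only (real flows vary with topology, proxies, and gateway events), not normative call flows:

```python
# Illustrative (simplified) signaling sequences for a basic two-party call.
# These are rough sketches for counting purposes, not normative call flows.

sip_basic_call = [
    "INVITE", "100 Trying", "180 Ringing", "200 OK", "ACK",  # setup
    "BYE", "200 OK",                                         # teardown
]

# In MGCP's decomposed model the call agent drives "dumb" endpoints, so even
# off-hook/digit events and per-connection commands appear as messages.
mgcp_basic_call = [
    "NTFY (off-hook)", "200", "RQNT (dial tone)", "200",
    "NTFY (digits)", "200",
    "CRCX (originating)", "200", "CRCX (terminating)", "200",
    "RQNT (ring)", "200", "NTFY (answer)", "200",
    "MDCX (connect)", "200",
    "DLCX", "200", "DLCX", "200",                            # teardown
]

print(f"SIP messages:  {len(sip_basic_call)}")
print(f"MGCP messages: {len(mgcp_basic_call)}")
```

Whatever the exact sequences in a given deployment, the qualitative result holds: the decomposed-gateway protocols exchange noticeably more messages per basic call.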

Things can get complicated with protocols that allow multiple modes of operation and real-time fallback to older-version procedures just to accommodate interoperability with legacy equipment. H.323 has been streamlined with fastStart and tunneling, but there are at least three ways to initiate signaling, so you can experience different performance from the same softswitch platform. Therefore, it may be necessary to characterize each mode of operation and interpolate between them (or, better yet, test each case separately) to set a performance expectation. Similarly, the use of proxies and firewalls in a topology, and the ensuing requirements for protocol support, might dictate the choices an engineer has to make regarding "preferred modes of operation" for a signaling protocol.
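One way to turn per-mode characterization into a blended expectation is a weighted cost model: measure the per-call signaling cost of each mode, assume a traffic mix, and compute the resulting capacity. A minimal sketch, where the per-call costs and mix fractions are hypothetical placeholders to be replaced with measured values:

```python
def blended_capacity(per_call_cost_ms: dict, mix: dict) -> float:
    """Calls/second a platform can sustain, given per-mode CPU cost (ms/call)
    and the fraction of calls using each mode. One CPU-second of budget per
    second is assumed; all inputs are assumptions, not measurements."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "mix fractions must sum to 1"
    avg_cost_ms = sum(mix[mode] * per_call_cost_ms[mode] for mode in mix)
    return 1000.0 / avg_cost_ms

# Hypothetical per-call signaling costs for three H.323 initiation modes.
costs = {"fastStart+tunneling": 4.0, "fastStart": 6.0, "legacy": 12.0}
mix = {"fastStart+tunneling": 0.5, "fastStart": 0.3, "legacy": 0.2}

print(f"Blended capacity: {blended_capacity(costs, mix):.0f} calls/sec")
```

The point of the model is that a small fraction of expensive legacy-mode calls can pull blended capacity well below the fastStart-only figure a datasheet might quote.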

Firewalls can be bottlenecks in the signaling path if they are distributed in the topology and are controlled by external gateways, such as the softswitch itself. The reason is simply the need to signal to the firewall and thus add "arrows" in the call flows. Arrows mean more processing and reduced switch capacity. "Smart" firewall implementations, such as firewalls that are part of or an extension of the softswitch itself, might increase performance, but the impact of such a design on flexibility and scalability must be examined carefully. Firewalls protect both the signaling and media paths, which might not (and, more often than not, will not) be the same. In other words, you might need to make a trade-off between performance and network scalability before picking the final design approach.
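The capacity impact of those extra "arrows" can be approximated by message counting: if each signaling message costs roughly the same to process, adding firewall-control messages to every call dilutes capacity proportionally. A back-of-the-envelope sketch, with all numbers hypothetical:

```python
def capacity_with_extra_arrows(base_calls_per_sec: float,
                               base_arrows: int,
                               extra_arrows: int) -> float:
    """Assume per-call processing cost scales linearly with the number of
    signaling messages ("arrows"); extra messages shrink capacity
    proportionally. A deliberately crude first-order model."""
    return base_calls_per_sec * base_arrows / (base_arrows + extra_arrows)

# Hypothetical: a softswitch handling 200 calls/sec at 10 arrows per call,
# then adding 4 arrows per call to open and close firewall pinholes.
print(f"{capacity_with_extra_arrows(200.0, 10, 4):.0f} calls/sec")
```

The same arithmetic applies to any source of added arrows, external or internal, which is why the distributed-platform signaling discussed next matters for capacity too.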

Distributed softswitch platforms offer enhanced scalability, but they present other complexities that could prove prohibitive from a cost perspective. The most common way to achieve platform distribution is to develop a complete switch on a single CPU and use internal signaling to distribute call processing among all the processors in the platform. This is easy to say, but you must now consider that there are additional "arrows" in the path of a call that are internal to the platform and must be accounted for in the performance and capacity analysis. Other complexities, such as failover in cases of equipment or link malfunction, could impose internal signaling requirements that affect system capacity as well. For example, DNS operations might involve internal "trips" in the switch to a resolution resource and thus add "arrows" to the complete call flow. Also, audits for calls, equipment, and facilities in cases of gateway restarts can consume computing resources for extended periods of time.

Finally, some signaling during an active call is unnecessary from a protocol perspective, but other signaling is required to ensure proper resource utilization, to request additional resources (such as opening a video channel during a voice call), and to gather statistics; all of this must be accounted for when setting the expectation for platform capacity, along with billing and OSS signaling. The latter two require computations local to the switch as well as signaling to the back office from all the switch elements servicing calls.

To read the article from which this tip is excerpted, click over to InformIT. You have to register there, but the registration is free.


This was last published in December 2001
