Codecs are essential to video conferencing technology; they allow participants to see and hear those on the other end in real time. Because video conferencing uses both video and voice codecs to encode and decode transmissions, both must be considered when evaluating video conferencing equipment.
Why you need to know about codecs
Before investing in video conferencing, communications and IT professionals should understand the issues around codecs and how they factor into the rest of their unified communications and IT environment. The codecs built into the equipment you select today will determine, for years to come, who can use the system, whom they can communicate with, which devices they can use, and which other applications can interface with the system.
What you need to know about codecs
Video conferencing systems give the illusion of talking in person because they can compress and decompress digital audio and video streams in real time. The device or software that performs this function is called a codec, short for coder/decoder or compressor/decompressor. There are hundreds of codecs available, which can make choosing the appropriate equipment difficult. They vary according to encoding techniques, supported bit rates, audio frequency spectrum, image resolution and frame rate. Matching these specifications to conferencing needs will start communications professionals on the right path. But the main challenge is ensuring that users at every endpoint of the system can interoperate, which means that their codecs must be compatible.
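To make the coder/decoder idea concrete, here is a toy sketch using run-length encoding, a trivial lossless scheme, not an actual video codec: the encoder collapses runs of repeated values, and the decoder reverses the process exactly.

```python
def encode(data):
    """Run-length encode: collapse runs of repeated values into [value, count] pairs."""
    out = []
    for value in data:
        if out and out[-1][0] == value:
            out[-1][1] += 1
        else:
            out.append([value, 1])
    return out

def decode(pairs):
    """Reverse the encoding: expand each [value, count] pair back into a run."""
    return [value for value, count in pairs for _ in range(count)]

frame = [0, 0, 0, 255, 255, 0]   # e.g. one row of pixel intensities
packed = encode(frame)           # [[0, 3], [255, 2], [0, 1]]
assert decode(packed) == frame   # lossless round trip
```

Real video codecs such as H.264 use far more sophisticated (and lossy) techniques, but the contract is the same: whatever one endpoint encodes, the other endpoint must be able to decode.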
Video conferencing systems must have three elements in common in order to function:
- Users on all endpoints of the conference must be able to initiate a connection by using the same call setup protocol. Session Initiation Protocol (SIP) has emerged as the enterprise standard for initiating enterprise video conferencing and other unified communications applications.
- Users must share a compatible video codec. In order to see the others, each user must be able to compress and decompress images using the same technology. The video conferencing industry is moving toward standardizing on codec H.264, which was developed to provide high-quality video at lower bandwidth over a wide range of networks and systems. The ITU-T standard for H.264 is also known as the ISO/IEC standard for MPEG-4 Advanced Video Coding; the two are jointly maintained to have identical technical content.
- Users also must have compatible voice codecs at each end of the conference to code and decode sound. The most common codecs for digitizing voice are G.711 and G.722. The emergence of high-definition voice, however, is changing this technology.
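The compatibility requirements above can be sketched as a simple offer/answer negotiation, loosely modeled on how SIP endpoints exchange codec lists during call setup. The endpoint capabilities and preference logic here are illustrative, not taken from any particular product:

```python
def negotiate(offered, answered):
    """Return the first codec in the offerer's preference order that the
    answerer also supports, or None if the endpoints share no codec."""
    supported = set(answered)
    for codec in offered:
        if codec in supported:
            return codec
    return None

# Hypothetical endpoint capabilities, listed in preference order.
caller = {"video": ["H.264", "H.263"], "audio": ["G.722", "G.711"]}
callee = {"video": ["H.264"],          "audio": ["G.711"]}

for media in ("video", "audio"):
    choice = negotiate(caller[media], callee[media])
    print(media, "->", choice)   # video -> H.264, audio -> G.711
```

If `negotiate` returns `None` for either medium, the call cannot proceed on that medium at all, which is why shared standard codecs matter more than any single codec's quality.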
While many vendors are moving toward standards and interoperability, there are still many codec options for vendors to include in their products and for customers to navigate. Products often support multiple codecs or "flavors" of the same codec, but that means maintaining multiple algorithms and code tables in the device or software memory. Switching between codecs also creates complexity, delays call setup, and increases the frequency of errors.
Proprietary codecs often require licensing fees and administrative overhead, adding another layer of cost and inconvenience. Some codecs require users to pay royalties whenever they are used. These codecs generally appear in reputable products known for reliability and scalability, but usage can be costly. Royalty-free codecs, such as the recently released SILK codec from Skype, can be integrated into products at negligible cost.
The Scalable Video Coding (SVC) extension to the H.264 standard is helping to eliminate some complexity by allowing transmissions to rely on the original encoded stream, which can then be decoded with minimal software on a wide range of clients, such as PDAs and cell phones. SVC may someday eliminate the need for video conferencing gateways and multipoint control units (MCUs) to bridge multiple users.
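The layered idea behind SVC can be sketched as follows: one encoded stream carries a base layer plus enhancement layers, and each client simply keeps as many layers as its bandwidth allows, with no re-encoding in the middle. The layer names, resolutions, and bit rates below are made up for illustration:

```python
# One scalable stream: a base layer plus enhancement layers, in order.
STREAM = [
    {"layer": "base",      "resolution": "320x180",  "kbps": 200},
    {"layer": "enhance-1", "resolution": "640x360",  "kbps": 600},
    {"layer": "enhance-2", "resolution": "1280x720", "kbps": 1800},
]

def layers_for(client_kbps):
    """Keep the base layer plus every enhancement layer that still fits the
    client's bandwidth budget (layer bit rates are additive)."""
    picked, total = [], 0
    for layer in STREAM:
        if picked and total + layer["kbps"] > client_kbps:
            break
        picked.append(layer)
        total += layer["kbps"]
    return picked

phone = layers_for(300)      # base layer only -> low-resolution video
desktop = layers_for(3000)   # all three layers -> full-resolution video
```

This is why, as the paragraph above suggests, a scalable stream can reduce the need for an MCU: the bridging device (or even the client) only drops layers rather than transcoding between incompatible streams.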
For more background on video conferencing technology in general, visit our video conferencing tutorial.
This was first published in September 2009