Why Are Some Codecs Implemented in Software? by Phil Hippensteel

2/28/2011 4:24:20 PM
By PSN Staff

Dear Professor Phil:

I understand that some video codecs are hardware devices and some are implemented in software. Should I care about the difference, and is there a trend toward one or the other?

Lorie, Everett, WA

Hello Lorie,
You are correct that video and audio codecs are implemented in both hardware and software.  As with other forms of technology, understanding the differences can improve your ability to select the form that you think is best.

Several historical trends have played a role in determining the current state of codecs. Users have gradually demanded higher-quality output. Processor speeds have increased dramatically. Network bandwidth use is scrutinized more carefully. These are some of the reasons we went through MPEG-1 and MPEG-2 and currently use MPEG-4 Part 10 (H.264/AVC). The earlier codecs were generally implemented in hardware because the processors of the day simply weren't capable of handling the computation in real time. It's different today. Steve Jones of Broadcast International explained this to me. In software implementations, such as on Intel Xeon platforms, application calls to routines can be made without the coordination and overhead of the operating system. An example of such a call is the one to the critical discrete cosine transform (DCT) algorithm used as part of the compression process. In such architectures, complex processes can be virtualized and spread across multiple processors, dramatically cutting the time needed to accomplish the tasks.
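To make the DCT's role concrete, here is a minimal sketch of the type-II DCT that MPEG-era codecs apply to small blocks of samples. This is a naive, illustrative implementation, not what any production codec actually ships; real codecs use fast, often hardware-assisted variants, and the sample values below are invented for the example.

```python
import math

def dct_1d(block):
    """Naive type-II DCT of a 1-D block of samples.

    Illustrative only: production codecs use fast 2-D variants
    (and integer approximations in H.264), not this direct sum.
    """
    n = len(block)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(block))
        # Orthonormal scaling so the transform preserves signal energy.
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

# A smooth 8-sample block, like a patch of slowly varying brightness.
samples = [100, 102, 104, 106, 108, 110, 112, 114]
coeffs = dct_1d(samples)
```

For smooth input like this, almost all of the energy lands in the first few coefficients; the compressor can then quantize the small high-frequency coefficients toward zero, which is where the bit-rate savings come from. The heavy inner loop of sums and cosines is also exactly the kind of work that, as described above, benefits from being spread across multiple processor cores.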

One significant advantage of having the codec in software is that it can run on ordinary servers placed very close to the user who will view the program, rather than requiring dedicated hardware at each site. This is particularly important to content delivery networks. For this reason, there is a clear trend toward implementing the codec in software whenever it is possible.

Dr. Phil Hippensteel is an Assistant Professor of Information Systems at Penn State University.
