Latency, the Myths and Realities: Part 1

As control rooms become increasingly agile, potential features increase, as do new technical considerations. In the days of analog video and audio, transmission of a wired signal was as fast as electromagnetic waves on a copper wire. There are variables involved, but in many cases, that equates to more than half the speed of light. In nearly all cases, the transmission is perceptibly instantaneous. Processing in the analog domain is essentially just as fast, only serving to slightly lengthen the signal path.

The digital age changed that speed of transmission, and in the decades since, the world of digital audio and video has been dealing with latency. Simply converting an analog audio signal to digital values, or capturing an image in the digital domain, introduces latency. Converting back to analog takes time, and processing those signals takes time as well. To put it in perspective, each of those operations consumes clock cycles, whether on an individual conversion chip or a more involved DSP. And that only accounts for latency inside a single piece of hardware. There is additional latency in getting digital signals from “box” to “box.” Once the signal is on the wire, the transmission speed can be similar to that of an analog signal on copper, but in the case of Ethernet-based transmission, the signal data must be packetized to function properly, and that packetization adds latency.
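The packetization cost can be illustrated with simple arithmetic: a packet cannot leave a device until its bits have been clocked onto the wire. A minimal sketch (the 1500-byte payload and link rates are standard Ethernet figures; the framing overhead is deliberately simplified):

```python
def serialization_delay_us(packet_bytes: int, link_bps: float) -> float:
    """Time to clock one packet's bits onto the wire, in microseconds."""
    return packet_bytes * 8 / link_bps * 1e6

# A full 1500-byte Ethernet payload on gigabit vs. 10-gigabit links:
gig = serialization_delay_us(1500, 1e9)       # 12 us per packet
ten_gig = serialization_delay_us(1500, 10e9)  # 1.2 us per packet
print(f"1 GbE: {gig:.1f} us, 10 GbE: {ten_gig:.2f} us")
```

Per-packet, these delays are tiny, but they accumulate alongside the buffering a sender needs before each packet can be assembled.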

Latency is unavoidable. No digital system today can perform without it. Manufacturers work diligently to push latency as low as possible, and in some cases it is nearly nonexistent, but there is no way with current technology to build a 100% latency-free digital system. The challenge is to manage latency and be mindful of when and where it matters most.

Justin Kennington, director of strategic and technical marketing, AptoVision, concurs, but offers his own take on the topic. “Of course latency is unavoidable – not only in digital systems but also the entire universe,” Kennington noted. “Light only traverses one foot per nanosecond. But in a video system, there is a huge break between the performance of a matrix switch and that of a compressed AV over IP system. Matrix switch latency is dominated by a few short signal pipelines and signal propagation (at nearly the speed of light). This sums to only a few microseconds. The average AV over IP system experiences all of these same delays plus the delay needed to encode and decode the compressed image. This takes dozens or hundreds of milliseconds – several frames of video. The real-world performance difference is striking.” Kennington explained that new technologies, like SDVoE, for example, offer ways to avoid this heavy compression, and use clock resynchronization to deliver matrix-switch latency (microseconds) on a flexible IP network. “There is no need for compromise,” he said.
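Kennington's microseconds-versus-milliseconds contrast can be sanity-checked with simple arithmetic. A minimal sketch (the velocity factor and the three-frame codec delay are illustrative assumptions, not measured data) comparing propagation delay over a cable run with the encode/decode delay of a compressed AV-over-IP path:

```python
# Light covers roughly one foot per nanosecond in free space; signals on
# copper travel at a large fraction of that speed (the velocity factor).
FEET_PER_NANOSECOND = 1.0

def propagation_delay_us(cable_feet: float, velocity_factor: float = 0.7) -> float:
    """Signal propagation delay in microseconds over a copper run."""
    ns = cable_feet / (FEET_PER_NANOSECOND * velocity_factor)
    return ns / 1000.0

def frames_to_ms(frames: float, fps: float = 60.0) -> float:
    """Convert a delay measured in video frames to milliseconds."""
    return frames * 1000.0 / fps

# A 300-foot cable run: well under a microsecond of propagation delay.
run_us = propagation_delay_us(300)
# A hypothetical 3-frame encode/decode pipeline at 60 fps: 50 ms.
codec_ms = frames_to_ms(3)
print(f"propagation: {run_us:.3f} us, codec path: {codec_ms:.1f} ms")
```

The five-orders-of-magnitude gap between the two numbers is the "huge break" Kennington describes.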

So, while someone watching Netflix at home may have no serious concern about latency, in professional applications latency not only matters, it is critical. Consider an IMAG (pronounced “eye-mag”) application with a presenter on stage: if the images displayed for IMAG are noticeably delayed from the live presenter, it is very distracting. Even more so, in professional control rooms, time precision can be a matter of safety. Off-shore oil rigs have long used remote video monitoring for safety and operational control from on-shore control rooms. To ensure minimal latency, the traditional solution had been analog CCTV, but supporting these legacy systems as they age has become cost prohibitive, and the conversion to digital is on. Having low latency in a setting such as this, in other industrial applications, or in biomedical applications is a must.

There is no set definition of what low latency is, however. It is determined by a bigger question of what the application is and how critical the perception of “instantaneous” is. The human eye and brain are better at detecting very small discrepancies than we might believe. Typically, below 100ms latency is functional for a person viewing an image against a real-time reference. In the case of machines interacting with video in industrial and medical installations, latency requirements may be as low as 10ms or 1ms. Many solutions aim to be less than the time it takes to render a frame. In 60fps applications, that means 16.7ms or less.
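The 16.7ms figure falls directly out of the frame rate: one frame at 60fps lasts one sixtieth of a second. A short sketch of the frame-period budget at common rates (the rates are standard; treating one frame as the target is the rule of thumb described above):

```python
def frame_period_ms(fps: float) -> float:
    """Duration of a single video frame in milliseconds."""
    return 1000.0 / fps

# Sub-frame latency targets at common frame rates:
for fps in (24, 30, 50, 60):
    print(f"{fps} fps -> one frame = {frame_period_ms(fps):.1f} ms")
# At 60 fps, one frame is 16.7 ms, matching the target cited above.
```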

All processing creates latency: compression and decompression, tiling, edge blending, scaling, and so on. Capturing video digitally adds latency, and even once in the digital domain, packetizing the stream for distribution and routing over IP adds more. And while a manufacturer's specifications may show an impressively low latency figure for a given piece of hardware, the network adds latency with each switch hop, and every piece of processing hardware adds its own latency in turn.
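Because each stage contributes its own delay, the end-to-end figure is the sum of every element in the path. A minimal budgeting sketch (every per-stage number here is a hypothetical placeholder, standing in for real datasheet and measured values):

```python
# Hypothetical per-stage latencies in milliseconds; substitute real
# manufacturer specifications and measured network figures.
signal_path_ms = {
    "capture/ADC": 1.0,
    "encoder": 8.0,
    "packetization": 0.5,
    "network (per switch hop)": 0.1,
    "decoder": 8.0,
    "display processing": 5.0,
}

def total_latency_ms(stages, switch_hops=3):
    """Sum every stage, counting the per-hop network delay once per hop."""
    total = 0.0
    for name, ms in stages.items():
        if "per switch hop" in name:
            total += ms * switch_hops
        else:
            total += ms
    return total

print(f"end-to-end budget: {total_latency_ms(signal_path_ms):.1f} ms")
```

The point of the exercise is not the particular numbers but the habit: no single spec sheet describes the system, so the budget has to be built from the whole signal path.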

Paul Zielie, CTS-D,I, manager of Enterprise Solutions for Harman Professional, explained latency this way in the Tech Manager’s Guide to Video Streaming: "Tolerance for delay or latency is perhaps the least understood critical attribute and is among the hardest to quantify. There are large differences in latency tolerance for different use cases ranging from almost no latency to several seconds (or minutes). Latency considerations are typically only a concern in real time streaming applications, especially those that involve interaction with technology or between people on opposite sides of a streaming link. There are many causes of delay along a streaming signal path so latency has to be treated holistically."

In coda, one of the biggest myths about latency is that it can be avoided; in reality, it must be understood holistically. Tech managers should keep in mind that latency adds up and, while it can be nearly nonexistent, it is impossible to eliminate. Therefore, planning for latency based on the application means being proactive and creative rather than reactive or stymied by delay.

Take a deeper dive into application-specific latency tolerances and specs in the free ebook Tech Manager’s Guide to Video Streaming, and in RGB Spectrum's Technology Tutorials and Video Resources. Stay tuned for Part 2 of this special feature on latency.

Justin O'Connor, AV Technology magazine's Technical Advisor, has spent nearly 20 years as a product manager, bringing many hit products to the professional audio world. Over that time he has served the proAV, professional sound reinforcement, permanent install, and music instrument retail markets with passion. He earned his Bachelor’s degree in Music Engineering Technology from the Frost School of Music at The University of Miami. Follow him at @JOCAudioPro.

