In this continuing bi-weekly newsletter, The Agile Control Room, we explore the challenges, products, and topics pertinent to today’s mission-critical spaces. As control rooms become increasingly agile, the features and possibilities expand, and new technical considerations arise.

In the last issue, we examined what latency is, why it occurs, and why a digital system can never be completely “free” of latency. In Part 2, we explore the impact of latency and the use cases in which its effects matter more, or less.

Quantifiable Impact

Latency is measurable. It has a value, in time, that can be quantified. In many cases, however, the impact of latency comes down to perception. When humans interact with video and audio content, their perception of time becomes the critical metric for latency, and that perception is often based on comparison. If the people consuming multimedia content have no base of comparison, the latency is often negligible. Buffers for streaming services can be frustrating as we wait for our content to start, but once it is playing continuously, the time the stream took to traverse the Internet is not critical, or even perceptible. In a video conference, however, two-way latency can be distracting or even detrimental to free-flowing, productive communication.

Harry Ostaffe, Black Box’s director of Product Marketing, Control Room Solutions, concurs. He said that latency can be tolerable for video-only applications such as IP security cameras or status monitors, and greater encoding compression can be used. "User-perceptible latency is not acceptable for mission-critical applications where user interaction is required through input devices such as keyboards, mice, and HMI touch screens,” Ostaffe explained. "Assuming critical computing systems are located in a secure server room, the response seen by users through a KVM system must be the same as if the computer is under the desk. In addition, the latency of switching time between different computing or video sources should also be considered." He added that Black Box offers solutions from as low as single-frame switching for the most time-critical applications.
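The "single-frame switching" Ostaffe mentions ties latency directly to the display's refresh rate: one frame of delay is simply the reciprocal of the frame rate. A minimal sketch (the function name and printed figures are illustrative, not vendor specifications):

```python
# Illustrative only: one frame's worth of latency at common refresh rates.
def frame_time_ms(fps: float) -> float:
    """Duration of a single video frame, in milliseconds."""
    return 1000.0 / fps

for fps in (30, 60, 120):
    # At 60 fps, a single-frame switch adds roughly 16.7 ms.
    print(f"{fps} fps -> {frame_time_ms(fps):.2f} ms per frame")
```

At 60 fps, a single frame lasts about 16.7 ms, which is why single-frame switching is considered effectively imperceptible for interactive KVM use.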

Base of Reference

The perception of latency for viewers is contingent on a base of reference. In the case of video conferencing, our base of reference comes from the way we expect face-to-face conversation to work and from the time it takes for someone to respond to something we have said. In the case of IMAG (Image Magnification), we can compare a person moving and speaking on a stage with their image displayed or projected in the same space. Again, if the latency is high, it can become so distracting that we lose the ability to absorb the content of the speech or presentation.

In Part 1, we focused on capture, digitization, and network packetizing as sources of latency. Other operations, such as compression, scaling, tiling, and blending, also introduce latency. Whether the signal is audio or video, processing a digital signal takes computation cycles. Processing chips, or DSPs, have clock rates that dictate how quickly they can read and evaluate binary data. Processing algorithms programmed into these chips read sets of binary data, perform mathematical operations on them, and send the bits back out. All of this creates processing latency. Individual chip latencies are typically extremely short and rarely matter on their own. What matters is the end-to-end specification for a piece of hardware, based on the processing taking place.
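Because each stage in the chain contributes its own delay, end-to-end latency is the sum of the per-stage contributions. A minimal budget sketch, where the stage names and all millisecond values are purely illustrative placeholders, not measurements from any real product:

```python
# Illustrative end-to-end latency budget; every value here is a placeholder,
# not a measured or vendor-published figure.
stages_ms = {
    "capture":  8.0,   # sensor readout / frame grab
    "encode":   16.7,  # e.g., roughly one frame of compression delay at 60 fps
    "network":  2.0,   # packetization and transit on a LAN
    "decode":   16.7,  # decompression on the receiving end
    "display":  8.0,   # scaling and panel refresh
}

total_ms = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:>8}: {ms:5.1f} ms")
print(f"{'total':>8}: {total_ms:5.1f} ms")
```

The point of such a budget is that no single stage dominates in isolation; the end-to-end figure is what the viewer experiences, which is why hardware should be specified end to end.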


Compression is one of the most common sources of processing latency for video hardware. The cycles required to encode and decode the compression take time and therefore add latency. The trade-off is in bandwidth. Compressing the signal can be helpful, or even critical, in managing the network, and could potentially save a great deal of money on the Ethernet infrastructure required. Uncompressed 4K60 video requires more than 10 Gbps. Many switches on the market can support this, but that support comes at higher cost. Understanding when latency matters and when it doesn't can help maximize cost efficiency. For video streams that need the lowest latency, building up the network to accommodate uncompressed video means latency can be kept to a minimum. Conversely, when latency isn't as big a concern, running compressed signals over smaller bandwidth can save on network infrastructure costs.
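The "more than 10 Gbps" figure for uncompressed 4K60 follows directly from the pixel arithmetic. A quick check, assuming 8 bits per color channel with full 4:4:4 chroma (24 bits per pixel); other bit depths and chroma subsampling schemes will shift the result:

```python
# Bandwidth of an uncompressed video stream: pixels/frame x frames/s x bits/pixel.
def uncompressed_bandwidth_gbps(width: int, height: int,
                                fps: int, bits_per_pixel: int) -> float:
    """Raw video payload rate in gigabits per second (decimal Gb)."""
    return width * height * fps * bits_per_pixel / 1e9

# 4K UHD at 60 fps, 8-bit 4:4:4 (24 bits per pixel).
bw = uncompressed_bandwidth_gbps(3840, 2160, 60, 24)
print(f"Uncompressed 4K60 (8-bit 4:4:4): {bw:.2f} Gbps")  # ~11.94 Gbps
```

At roughly 11.9 Gbps, such a stream will not fit a 10 Gigabit Ethernet link without compression or chroma subsampling, which is exactly the infrastructure trade-off described above.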

"In a mission-critical AV environment, access to low-latency video is vital,” said David Minnix, co-founder of CineMassive. "Whether you're monitoring a crisis in an emergency operations center or a viewing a drone feed in a tactical operations center, you need to know that what you're seeing on your video wall is happening in real-time. One way to achieve this is through dedicated hardware codecs that provide ultra-low latency for streaming video over LAN or WAN.”

Tighter Limits on Latency

In addition to human perception as a determinant of the impact of latency, interaction between humans and machines, or between machines themselves, can impose very tight limits on latency. As mentioned in Part 1, control rooms for industrial, security, or biomedical applications require extremely low latency in order to be effective. In these applications, minimal processing and enough bandwidth to run uncompressed video become essential.

With digital video, there will always be some latency. Understanding the base of comparison, and whether the video is simply being consumed or requires a response, allows IT and AV decision-makers and integrators to choose the solution that best balances critical needs against infrastructure cost.

Justin O'Connor, AV Technology magazine's Technical Advisor, has spent nearly 20 years as a product manager, bringing many hit products to the professional audio world. Over that time he has served the proAV, professional sound reinforcement, permanent install, and music instrument retail markets with passion. He earned his Bachelor’s degree in Music Engineering Technology from the Frost School of Music at The University of Miami. Follow him at @JOCAudioPro.

If you missed Latency: the Myths & Realities, Part 1, check it out here. Subscribe to The Agile Control Room newsletter here.