How a Truly Hardware-Free World is Still Far Away

Ultimately, video over IP is about picking the right I/O—how we connect to the network and how we get off it. “The format and the transport protocol are meaningless once the video is on the network,” said Bob Sharp, SVSi. “It’s just packets of data.” The world of professional AV has seen something of a sea change over recent years with the accelerating adoption of IP networks and cloud-based services. But while the AV and IT industries continue to converge, and the Internet of Things looms large on the horizon, it will likely be many years before we see video free itself from its hardware bonds.

“It’s headed to an all-IP, all-ethernet world,” predicted Justin Kennington, product line manager, DigitalMedia, at Crestron Electronics. Although Kennington has led the DigitalMedia product line for the past six years, his background is in IT. “I came from Google and the data center world before this. So there’s no bigger believer than me that one day, ethernet will do everything. I just don’t think that we’re there yet.”

“The simple answer is that everything will migrate to the ethernet network in some form or another,” agreed Bob Sharp, director of international sales for SVSi, based in Alabama. “Any non-IP-based AV systems will go by the wayside.”

Video over IP has the potential to be disruptive; it’s just a question of the time frame. “I truly believe that when SVSi introduced our audio and video over IP eight years ago at InfoComm, we introduced a disruptive technology to this industry,” continued Sharp. “The problem was, we weren’t big enough to disrupt anything.” That may have changed: SVSi was acquired by Harman International immediately prior to InfoComm 2015 and is now marketed under the AMX by Harman brand.

“You would probably be foolhardy not to come to the conclusion that a great deal of what a corporation or hotel might use may well come from cloud-orientated sources. But I think that time period is still relatively lengthy,” commented Mike Allan, CTO at Exterity in Scotland. The company, which focuses on enterprise IPTV and the distribution of broadcast-quality digital TV and video over IP networks, further expanded into the U.S. market recently, adding a West Coast office.

“But there is absolutely a place for hardware for the foreseeable future,” continued Allan, especially where high quality is demanded. “People are trying to push the boundaries in terms of compression levels in order to fit more stuff down the same pipes. But unfortunately, the amount of computation required to achieve these compression levels seems to be going up exponentially.”

Moore’s Law still applies to chips, but a similar escalation is underway on the content side: 4K is here, 8K is coming, and some content creators are already dabbling with 16K. And while HEVC on an x86 CPU may deliver sufficient quality for some applications, “If you get yourself a dedicated SoC to do that, you can probably do it at higher quality than with these general-purpose chips, and really produce the sort of result that a broadcast company is looking for,” Allan said.

“Today, a normal 4K HDMI signal is about nine gigabits per second [Gbps],” Kennington pointed out. “You can shove that over a 10 Gbps ethernet link—but your office or your home is not a 10 gigabit network; certainly your ISP connection is nothing like that.”

Compression codecs such as H.264 and H.265 are highly efficient, he continued. “You can take that nine Gbps 4K signal on HDMI and knock that down to 30 or 40 Mbps with H.265, with very little loss in quality. But the penalty that you pay for that is latency, in the time it takes to do that encoding.”
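
To put rough numbers on that trade-off, the short Python sketch below computes the raw payload of a couple of common 4K formats and the reduction implied by a 30 to 40 Mbps H.265 stream. The frame rates and chroma formats are assumptions chosen for illustration; real HDMI links also carry blanking and line-coding overhead, which is why Kennington’s ballpark of nine Gbps can sit between the figures computed here.

```python
# Back-of-the-envelope arithmetic: uncompressed 4K video payload versus
# a 30-40 Mbps H.265 stream. Frame rate, bit depth, and chroma format
# are illustrative assumptions; real HDMI links add blanking and
# line-coding overhead on top of the raw pixel payload.

def raw_bitrate_gbps(width, height, fps, bits_per_pixel):
    """Raw video payload in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

for label, bits_per_pixel in [("4K60 4:2:0 8-bit", 12),
                              ("4K60 4:4:4 8-bit", 24)]:
    raw = raw_bitrate_gbps(3840, 2160, 60, bits_per_pixel)
    print(f"{label}: ~{raw:.1f} Gbps uncompressed")

compressed_mbps = 35  # midpoint of the 30-40 Mbps H.265 figure quoted above
raw_gbps = raw_bitrate_gbps(3840, 2160, 60, 24)
ratio = raw_gbps * 1000 / compressed_mbps
print(f"H.265 at {compressed_mbps} Mbps is roughly a {ratio:.0f}:1 reduction")
```

Either way the conclusion holds: a compressed stream in the tens of megabits represents a reduction of two orders of magnitude or more.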

Live intake encoders may need 150 or 200 milliseconds to do their job. “If I wanted to give a PowerPoint presentation to a sales force across the country watching on their mobile phones, they don’t have the reference of the live event, so the latency doesn’t affect them at all,” said Kennington. “But in commercial AV applications, where I’ve got a laptop and I want to use its video on the display on the wall, 200 milliseconds is a huge delay for moving a mouse around and seeing the pointer move. It’s a really frustrating user experience.”
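
A quick latency budget makes the point concrete. In the sketch below, only the encoder figure comes from Kennington’s range; the network and decode numbers are illustrative assumptions rather than measurements of any particular product.

```python
# Illustrative glass-to-glass latency budget for a compressed video path,
# compared with the frame period an uncompressed local path works within.
# Only the encoder figure comes from the article; the rest are assumptions.

FRAME_RATE = 60                        # frames per second (assumption)
frame_period_ms = 1000 / FRAME_RATE    # ~16.7 ms per frame

encode_ms = 175    # live-intake encoder, per the 150-200 ms range cited
network_ms = 20    # LAN/WAN transit, illustrative
decode_ms = 30     # decoder buffering and display, illustrative

total_ms = encode_ms + network_ms + decode_ms
frames_behind = total_ms / frame_period_ms
print(f"Compressed path: ~{total_ms} ms, about {frames_behind:.0f} frames behind live")
print(f"Uncompressed local path: on the order of {frame_period_ms:.0f} ms (a frame or so)")
```

At 60 frames per second, a budget in the neighborhood of 200 milliseconds leaves the displayed image more than a dozen frames behind the presenter’s mouse, which is exactly the sluggishness interactive users notice.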

There is also the issue of reliability to consider, observed Sharp. “There is a security blanket in the chunk of hardware sitting in the corner that is dedicated to teleconferencing.” Even the big names in cloud-based teleconferencing can suffer from reliability problems, something he knows from personal experience, he said: “Half the time it doesn’t work.”

There will always be mission-critical applications, such as military command and control, or Las Vegas bookmakers, Sharp suggested. “They can’t tolerate poor connections. Whether we’re five or 10 years down the road, I don’t see it going totally software- or cloud-based.”

European broadcasters such as Sky and Canal+ may be investing in internet-based services, said Allan, but they still rely on hardware. “These guys have invested in satellites and all the MPEG transport stream stuff to carry this content. This is an investment they can’t afford to throw away overnight.”

Ultimately, said Sharp, “We have to pick the right I/O to the network that is appropriate for what we are trying to do; it’s about how we connect to it and get off it.” The format and the transport protocol are meaningless once the video is on the network, he said. “It’s just packets of data.”

For the AV industry in general, and system integrators in particular, the IT department’s existing infrastructure may be a godsend. “Huddle spaces, small conference rooms, cubicles—I guarantee they already have an ethernet drop. If we can find ways to leverage that for AV, then suddenly the AV integrator has so much greater reach into that facility,” said Kennington.

“One of our core strategies at Crestron is to move ever closer with our technology to the IT side of the world. Why struggle with our tens of millions of dollars in R&D budget when instead I can leverage Cisco’s hundreds of millions or billions of dollars in R&D? Give me what they’re inventing and let me repurpose it for AV.”

Steve Harvey (sharvey.prosound@gmail.com) has been west coast editor for Pro Sound News since 2000 and also contributes to TV Technology, Pro Audio Review, and other NewBay titles. He has over 30 years of hands-on experience with a wide range of audio production technologies.

How are video processing devices evolving to meet a less hardware-centric systems integration model?

Video encoders enable organizations to reduce hardware costs while increasing the number of feeds that can be ingested and delivered to multiple devices, including smartphones, tablets, and smart TVs. New low-latency encoders integrate seamlessly with much of today’s industry-standard IPTV equipment. They give systems integrators full control over all content streamed over the network and make end-point delivery easy to manage. With this, organizations can deploy mobile strategies that extend the reach of video feeds in real time to remote workers and business travelers.

Exterity’s latest AvediaStream Encoder streams beyond the conventional LAN for delivery over WAN, Wi-Fi, and the internet to facilitate video distribution to mobile and desktop clients. The HLS and RTSP streaming protocols allow the encoder to capture and deliver high-quality video content via content delivery network environments, or to stream directly to a wide range of end-point devices. This answers the industry demand for a more robust workflow, more processing power, and solutions that can work across the board.
—Colin Farquhar, CEO, Exterity
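
To illustrate the client side of this model, the sketch below pulls an RTSP stream into a desktop viewer with OpenCV. The stream URL is a hypothetical placeholder and the example assumes an OpenCV build with FFmpeg support; it is a generic illustration, not Exterity-specific code.

```python
# Minimal sketch of a desktop client pulling an RTSP stream from an IP
# encoder using OpenCV. Requires an OpenCV build with FFmpeg support.
# The URL is a hypothetical placeholder for whatever your encoder publishes.
import cv2

STREAM_URL = "rtsp://192.168.1.50:554/stream1"  # hypothetical encoder endpoint

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError(f"Could not open stream at {STREAM_URL}")

while True:
    ok, frame = cap.read()            # grab and decode the next frame
    if not ok:
        break                         # stream ended or connection dropped
    cv2.imshow("IP video feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break                         # press 'q' to stop viewing

cap.release()
cv2.destroyAllWindows()
```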

Video signal distribution technologies, which have historically relied on proprietary switching hardware, are now poised to become IP-centric with standards like AVB and chipset technologies such as AptoVision’s BlueRiver NT. Instead of spending most of their energy figuring out how to move audio and video around a facility with complex, non-interoperable pieces of equipment, integrators can focus on what can be done with audio and video once it is on an IP network.
—Kamran Ahmed, CEO and co-founder, AptoVision

As video products evolve, denser functionality and higher-performance design are being packed into all-in-one devices to provide simplified, integrator-friendly solutions. Video processing systems that once required dedicated boxes or mainframes have evolved into smaller transport solutions such as HDBaseT extenders with built-in scaling. From the integration-model perspective, these advances in video processing mean less hardware is required. Another significant evolution is in the playback realm, where hardware media playback is on a downturn as streaming video with comprehensive codecs handles more and more content.
—Keith Frey, senior product manager, PureLink

The “internet of things” has arrived. We can now operate appliances, televisions, heating and air conditioning, and even lamps with nothing more than our smartphones and tablets. Launch an app, select an icon, and you’re in control. If we’re already doing this at home, we should be able to control an AV system just as easily—and we can, thanks to a new generation of app-based control systems that eliminate the need for costly and proprietary hardware and software. Now, we can store command macros and icons in “the cloud,” ready for programming anywhere, at any time, on any mobile device.
—Clint Hoffman, vice president of marketing, Kramer Electronics US
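
Conceptually, an app-based control macro is just an ordered list of commands delivered to networked devices. The Python sketch below shows that idea in its simplest form; the addresses, ports, and command strings are hypothetical placeholders and do not represent Kramer’s, or any vendor’s, actual control protocol.

```python
# Conceptual sketch of an app-based control "macro": an ordered list of
# commands sent to networked AV devices over TCP. Addresses, ports, and
# command strings are hypothetical placeholders, not any vendor's protocol.
import socket

ROOM_ON_MACRO = [
    ("192.168.1.20", 5000, "DISPLAY POWER ON\r\n"),    # hypothetical display
    ("192.168.1.21", 5000, "INPUT SELECT HDMI1\r\n"),  # hypothetical switcher
    ("192.168.1.22", 5000, "VOLUME 40\r\n"),           # hypothetical amplifier
]

def run_macro(steps, timeout=2.0):
    """Send each command in the macro to its target device."""
    for host, port, command in steps:
        with socket.create_connection((host, port), timeout=timeout) as conn:
            conn.sendall(command.encode("ascii"))

if __name__ == "__main__":
    run_macro(ROOM_ON_MACRO)
```

In the cloud-hosted version of this idea, the macro definition would simply live on a server and be fetched by the mobile app before being executed on site.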

An increasing number of AV systems are now IP-based, which opens up a multitude of new opportunities. These systems are significantly more flexible than conventional one-box video processing systems. With IP-based technologies, users are no longer required to buy a large, expensive video processor; instead, they can buy just the number of IP-based encoders and decoders they need and manage switching and scaling of video through those devices from a low-cost IP-based controller appliance. Some of these systems are also standards-based, meaning they don’t always require separate hardware to encode or decode video: software running on standard PC hardware can decode the video for display on a PC, and a video stream from the internet can easily be integrated into an IP-based video processing system.
—Erik Indresøvde, AV and digital signage global product manager, Black Box

For Gefen, it has been much easier to add IP management and control to AV products, even those that do not use IP to actually distribute signals. Gefen’s new Syner-G software, available on all new products, offers a comprehensive set of tools to manage a variety of video and audio products across a network. These tools allow easy configuration, firmware updates, signal monitoring, EDID management, and even IP address management, enabling AV integrators to be confident about the integrity of the distribution system at all times. Gefen’s Syner-G-compatible products contain a discovery “beacon” that allows the software’s Discovery tool to find and connect to a device even if it is set to an incompatible address, and even if it shares an address with other devices, conditions that would normally prevent it from communicating. This is a tremendous timesaver, especially when configuring a large number of video over IP devices into a virtual video matrix.
—Tony Dowzall, president, Gefen
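
The Syner-G beacon mechanism itself is not documented here, but the general technique it describes, broadcast discovery that reaches devices on the local segment regardless of their configured IP address, can be sketched generically. The port and probe message below are assumptions for illustration and are not the actual Syner-G protocol.

```python
# Generic illustration of broadcast discovery: a UDP datagram sent to the
# local broadcast address can reach devices on the same segment even when
# their configured IP addresses don't match the local subnet. The port and
# message format here are hypothetical, not the actual Syner-G protocol.
import socket

DISCOVERY_PORT = 50000              # hypothetical discovery port
PROBE = b"DISCOVER_AV_DEVICES"      # hypothetical probe message

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.settimeout(2.0)
sock.sendto(PROBE, ("255.255.255.255", DISCOVERY_PORT))

# Collect replies until the timeout expires; each device implementing the
# (hypothetical) beacon answers with an identifying payload.
devices = []
try:
    while True:
        data, addr = sock.recvfrom(1024)
        devices.append((addr[0], data.decode("ascii", errors="replace")))
except socket.timeout:
    pass

for ip, info in devices:
    print(f"Found device at {ip}: {info}")
```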

As technology develops, more IP-enabled smart hardware is becoming available. With increased network connectivity built into AV products, integrators are able to leverage software control for system integration. Our DisplayNet product line is a new concept for AV distribution that uses 10GbE ethernet technology to distribute uncompressed AV signals. DisplayNet can also deliver GbE LAN to each system endpoint and includes powerful control software and an API, so many aspects of system functionality can be easily controlled via software. DisplayNet isn’t just new technology; it’s a new paradigm for AV system integration. Hardware is not going away, but it is getting smarter.
—Joseph Barbier, marketing manager, DVIGear

We are moving away from dedicated hardware devices for signal processing, with this functionality migrating into other parts of the system. A good example is MediorNet. With MediorNet, many real-time processing functions (quad splits, audio embedder/de-embedder, frame store, frame sync, and sample rate conversion) are provided as part of the signal transport network. This provides greater value to our customers and more room in their racks for other, perhaps revenue-generating, equipment.
—Christian Diehl, product manager, Riedel

In the pro AV space, we’re not quite there yet. That transition is certainly just getting started, and in a decade, or maybe five years, we’ll be doing everything over IP, exclusively. But today there are too many holes in the technology.

The technology is available, but in the pro AV space, we’re very sensitive to latency. Today, everything we do with video over IP involves some kind of codec, like H.264 or JPEG 2000, and those codecs all carry a latency penalty, typically over 100 milliseconds. So we’re in a hybrid world, where the integrator needs tools for zero latency and zero compression, and tools for high compression efficiency and a willingness to pay the latency penalty when ethernet is the distribution mechanism that makes sense, like when you want to go super long distances, or to hundreds of thousands of endpoints.
—Justin Kennington, technology manager, DigitalMedia and streaming solutions, Crestron

For a long time, Analog Way video processing devices were based mainly on hardware architectures controlled by relatively little software. Today, even though hardware remains at the core of our products to achieve low latency and high-quality processing, the software side has grown dramatically. The first reason is that our systems integrate much more complex features than before: our graphical user interfaces (GUIs) provide operators with powerful and intuitive controls. A second reason is that video processors are now part of wide, heterogeneous, interconnected environments: they must be able to communicate with other systems, either to control them or to be controlled by them. For our Midra and LiveCore series of AV mixers, Analog Way developed third-party protocols, as well as drivers for automation needs. Along these lines, video processors now tend to communicate more and more with their video sources and their output peripherals, both for configuration purposes (for example, using EDIDs) and for video streaming using IPTV technologies.
—Bruno Bauprey, product manager, Analog Way

In more traditional lossy video compression schemes, transient artifacts are acceptable because the video changes so rapidly. In the KVM space, however, we have to consider crisp pixel differentiation that is true to the original color space, with encoding schemes that create minimal latency. Often, this type of encoding takes a multi-layer approach, adapting in real time to what happens on screen. To achieve this fast encode speed, manufacturers turn to bespoke environments to run the codec. We are starting to see more generic PC platforms capable of running their OS, applications, and communication layers at the same time as encoding on-screen video with a live KVM codec. When placed into a PC or cloud platform, the obvious distribution scheme has to be IP.
—John Halksworth, senior product manager, Adder Technology

At Extron, we do see more software applications and a desire to leverage greater network functionality; however, we still see hardware playing an integral role in making that happen. For example, more conference rooms are utilizing software conferencing codecs. Previously, there was no way to easily integrate those codecs with pro AV mics or high-end cameras and deliver high-quality results. With the launch of our MediaPort 200, an HDMI and audio to USB scaling bridge, integrators now have a hardware solution that bridges that gap and makes it possible to create an enterprise-class video conferencing experience. Combined with the Extron CCI Pro 700, a user interface for conferencing and collaboration, you now have a complete end-to-end conference system.
—Joe da Silva, director of product marketing for Extron

The migration to IT environments has been going on for a while. It is finally at a tipping point, and many video processing devices now focus on converting AV signals to an IP stream. Once that migration is complete, many low- to mid-range AV systems will be based on IT rather than AV infrastructure. High-end AV systems will continue to rely on conventional AV system devices.
—Jack Gershfeld, president, Altinex

The evolution of HD AV signal distribution has led to an environment where the hardware can be specifically tailored to the job at hand with HD over IP. If there is a need for a three-input, 11-output application, for example, an integrator can deploy an HD-over-IP solution sized exactly for it. This type of system allows exact scalability, versus the standard HDMI/HDBaseT matrices that have been the gold standard in the past. With such a system, a source can be added at any time, and the same holds true for the later addition of displays.
—Hal Truax, vice president of sales and marketing, WyreStorm
