Video Processing in a Digital Age

Video processing has evolved beyond hulking processors with expansive card slots. With PCs becoming more powerful and IP integration taking hold, the traditional approach is no longer necessary for most applications. There will always be specialized applications that call for the traditional video processor workflow, but everyday video processing needs far outnumber those cases.

Let's review the current video processing schema. For a video wall, there is typically a video processor that receives all the inputs to be windowed within the wall. Its outputs feed the displays or banks of displays, depending on how the user wants them configured. Picture-in-picture (PIP) and source allocation are then set up within the processor's configuration, and presets or transition elements can be configured and controlled from there. This is how it has been done for years, but it requires careful planning, often-hefty infrastructure, and expensive hardware.
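As a rough illustration of that traditional workflow, the sketch below models a processor configuration as a simple data structure: physical inputs, outputs feeding the wall, and a preset holding a full-screen window plus a PIP. The names, resolutions, and fields are hypothetical and do not correspond to any manufacturer's configuration format.

```python
# Hypothetical sketch of a traditional video wall processor configuration.
# Field names and values are illustrative, not a real manufacturer's API.
from dataclasses import dataclass, field

@dataclass
class Window:
    source: str   # input feeding this window (e.g. "HDMI 1")
    x: int        # position and size in wall-canvas pixels
    y: int
    width: int
    height: int

@dataclass
class Preset:
    name: str
    windows: list = field(default_factory=list)

@dataclass
class ProcessorConfig:
    inputs: list                              # input cards wired to the processor
    outputs: list                             # outputs feeding displays or display banks
    presets: list = field(default_factory=list)

# One preset: a main image across the wall with a PIP in the corner.
quad_wall = ProcessorConfig(
    inputs=["HDMI 1", "HDMI 2", "HDMI 3", "SDI 1"],
    outputs=["Display 1", "Display 2", "Display 3", "Display 4"],
    presets=[
        Preset("Presentation", windows=[
            Window("HDMI 1", 0, 0, 3840, 2160),      # full-wall main source
            Window("SDI 1", 2880, 1620, 960, 540),   # picture-in-picture
        ])
    ],
)
print(f"{len(quad_wall.presets)} preset(s) configured for {len(quad_wall.outputs)} outputs")
```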


The video processing solutions emerging today aim to shift that workflow so the systems are less of a burden on the infrastructure and the client's wallet. The systems aren't necessarily getting simpler; they are shifting from hardware-based solutions to software-dependent ones, which makes knowledge of programming, networking, and digital compression algorithms more important.


IP solutions, such as those from SVSi/AMX, allow the user to share video processing resources over the IP network. This makes the design simple: plug everything into the IP network, ensure the components can all talk to each other, then assign sources to the video processor and route them to the outputs. If the needs or applications change in the future, there is no rewiring; simply adjust the programming and IP routing.
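To make the "adjust the programming, not the wiring" point concrete, here is a minimal, generic sketch of a software routing table. It is not the SVSi/AMX protocol or API; the device names, addresses, and route() helper are assumptions standing in for whatever control interface a given networked AV system actually exposes.

```python
# Illustrative only: a generic routing model for networked AV endpoints.
ENCODERS = {                      # source devices publishing streams on the network
    "laptop":  "239.0.0.10",
    "camera":  "239.0.0.11",
    "signage": "239.0.0.12",
}

DECODERS = {                      # decoders sitting behind each display
    "wall-top-left":  "10.0.1.21",
    "wall-top-right": "10.0.1.22",
}

def route(source: str, destination: str) -> None:
    """Point a decoder at an encoder's stream (placeholder for a real API call)."""
    stream = ENCODERS[source]
    decoder_ip = DECODERS[destination]
    print(f"decoder {decoder_ip}: subscribe to {stream} ({source} -> {destination})")

# Re-purposing the wall later is a programming change, not a rewiring job.
route("laptop", "wall-top-left")
route("camera", "wall-top-right")
```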


Another way to avoid using a video processor is to house it all virtually. For example, signage players from Spinetix can be placed at each display and connected to a network switch. The video layout and matrix are built virtually within the management software, which then sends each piece of the image to the signage players to be assembled. This can be a very useful solution, depending on the complexity of the video wall or processing requirements.
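As an illustration of what that management software is doing, the sketch below splits one wall-sized canvas into the crop region each player would pull and display. The 2x2 geometry and player names are assumptions for the example; this is not Spinetix's actual software.

```python
# Assumed geometry: a 2x2 wall of 1920x1080 displays, one player per display.
DISPLAY_W, DISPLAY_H = 1920, 1080
COLS, ROWS = 2, 2

def player_regions():
    """Yield the crop rectangle each player should pull from the full canvas."""
    for row in range(ROWS):
        for col in range(COLS):
            yield {
                "player": f"player-{row}-{col}",
                "x": col * DISPLAY_W,
                "y": row * DISPLAY_H,
                "width": DISPLAY_W,
                "height": DISPLAY_H,
            }

for region in player_regions():
    print(region)
```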


This trend toward virtualized and networked video processing is one that I hope continues to evolve as we move closer to a fully connected environment. The hulking processors and routers begin to fade away as centralized servers and network switches take their place. Video streams become bits of data that can be easily routed and manipulated through computer code. This opens up creative possibilities when designing systems and allows designers to create unique systems that are efficient and intuitive for the end user.


Jonathan Owens is a multi-disciplinary Consultant at Shen Milsom & Wilke, LLC. Owens has more than 10 years of experience in audio and audiovisual design, engineering, and acoustics for a wide variety of projects, including corporate, commercial, fine arts performance centers, entertainment facilities, higher education, K-12 schools, and healthcare facilities. Owens is also a professional recording/mixing engineer and sound designer.