Immersive, 3D, or spatial audio systems—label them as you wish—are hardly new. But 2018 is beginning to feel like the year that immersive audio will go mainstream, with more than a handful of hardware and software manufacturers rolling out technologies that enable audiences to experience spatialized sound at locations outside the cinema, the home, or a pair of headphones.
One recent example, the ISM Hexadome, a traveling audiovisual installation incorporating video projection and a spatial audio system, featured sound and visual artists, including musicians Brian Eno, Ben Frost, and Thom Yorke, at a museum in Germany throughout April. The mobile structure, created by Germany’s ZKM | Institute for Music and Acoustics, incorporates 52 Meyer Sound speakers driven by a choice of control software (one package by ZKM and two from France’s IRCAM research institute). Intended by the Institute for Sound & Music (ISM) as the first step toward establishing a permanent museum recognizing immersive arts, sound, and electronic music culture, the installation is scheduled to visit locations across Europe and the U.S. throughout 2018.
Art installations, theme park attractions, and live performances are obvious applications for spatial sound technologies. But Sennheiser’s Brian Glasscock, a user-experience researcher who also works on the company’s AMBEO VR mic project and is helping guide the future of immersive audio for broadcast, believes there are everyday applications, too.
“There are a lot of problems in traditional corporate AV installations where 3D audio could improve audio experiences,” he said. “I’m thinking, for example, of conferencing. If we were to enable conference experiences with 3D audio, we could improve intelligibility, improve efficiency, reduce fatigue, and improve productivity.”
Earlier this year, Sennheiser acquired Sonic Emotion, based in Switzerland, which has developed software and hardware tools that render spatial sound and can control virtual acoustics using wave field synthesis (WFS). “We are very interested in how we can present 3D audio experiences outside of headphones,” said Glasscock, who also noted that Sennheiser introduced the AMBEO Soundbar at CES 2018. According to published reports, the soundbar unit houses 13 drivers (six woofers, five tweeters, and two up-firing speakers) to virtualize a 360-degree immersive sound environment.
Kevin O’Connor, in charge of sales and marketing for Sony’s new Sonic Surf VR platform, can also see uses for the technology beyond location-based entertainment experiences. The system, comprising multichannel active loudspeaker modules, a processor, and control software, also uses WFS to position static or moving audio objects within 3D space and will be officially introduced at InfoComm 2018.
Commercial and corporate applications might include foreign language directions in a public space or descriptions in, say, a museum, said O’Connor, who reports that Sony is already building systems for various clients, including an airline. “It’s very directional, so you could have it such that in this three-foot area you’re hearing Spanish. Right next to it, you’re hearing English. And there’s no crossover.”
A demonstration of Sonic Surf VR developed by the Advanced Technology Interactive Group at Universal Parks & Resorts was featured at SXSW 2018 in Austin, TX. “Ghostly Whisper” placed participants around a table at a Victorian-era séance, said O’Connor. “Your head is constantly turning because you feel like there’s somebody talking to you right on your shoulder.”
A second SXSW demo of Sony’s tech, Acoustic Vessel Odyssey, used a 576-speaker setup that enabled attendees to follow sounds as they moved around them.
“Wave field synthesis loudspeaker systems are also used, for example, in live sound,” noted Veronique Larcher, Ph.D., director of Sennheiser’s AMBEO program. “When a singer or performer is moving across the stage, the loudspeaker system can deliver a congruence between what you see on stage and the sound being emitted.”
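The congruence Larcher describes comes down to geometry: in its most stripped-down form, a wave field synthesis system delays and attenuates each loudspeaker’s feed according to that speaker’s distance from the virtual source, so the array radiates a wavefront that appears to originate where the performer stands. The sketch below illustrates only that bare delay-and-gain idea; the function name and the simplified 1/r model are illustrative, not any vendor’s actual WFS driving function.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def wfs_delays(virtual_src, speaker_positions):
    """Per-speaker (delay_seconds, gain) pairs for a virtual point source.

    A simplified sketch of the wave field synthesis idea: each speaker
    is fed the source signal delayed by distance/c and attenuated by 1/r.
    """
    out = []
    for sx, sy in speaker_positions:
        # Euclidean distance from this loudspeaker to the virtual source.
        r = math.hypot(sx - virtual_src[0], sy - virtual_src[1])
        out.append((r / SPEED_OF_SOUND, 1.0 / max(r, 1e-6)))
    return out
```

Because the per-speaker delays track the source position, updating `virtual_src` as a performer tracker reports new coordinates is what lets the rendered wavefront follow the performer across the stage.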
Indeed, several concert sound production technologies offering various spatial audio formats have started to attract attention this year. For example, the L-Acoustics L-ISA system, which generates what the company describes as “immersive hyperrealism,” uses software-driven, MADI-interfaced hardware processing to audibly match the positioning of performers on the stage with individual control of panning, width, depth, and elevation for 64 audio objects. Artists have been performing using L-ISA in Europe for a couple of years, but a high-profile tour by Lorde and shows by Odesza and Deadmau5 have only recently introduced it to US audiences.
The manufacturer has just announced partnerships with Avid, BlackTrax, DiGiCo, and KLANG:technologies that integrate L-ISA control into their respective products. An Open Sound Control (OSC) implementation also allows other external controllers and automation systems to be used.
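OSC is what makes that third-party integration straightforward: per the OpenSoundControl 1.0 specification, a message is just a NUL-padded address pattern, a type-tag string, and big-endian binary arguments, typically sent over UDP. The minimal encoder below follows that wire format; the `/obj/1/azim` address pattern is a hypothetical example, since L-ISA and each other product define their own address spaces.

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC 1.0 message whose arguments are all float32."""
    def pad(b):
        # OSC strings are NUL-terminated and padded to a 4-byte boundary.
        b += b"\x00"
        while len(b) % 4:
            b += b"\x00"
        return b

    msg = pad(address.encode("ascii"))
    # Type-tag string: a comma followed by one 'f' per float argument.
    msg += pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32 per the OSC spec
    return msg
```

A controller would hand the resulting bytes to a UDP socket aimed at the processor’s OSC port; any console or automation system that can emit the same byte layout can drive the renderer.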
At ISE in Amsterdam in February, d&b audiotechnik demonstrated its d&b Soundscape system interoperating via the OSC protocol with QLab and TTA Stagetracker II. At the core of the system is the Dante-enabled DS100 Signal Engine, controlled by two software modules: d&b En-Scene, which handles up to 64 audio objects, and d&b En-Space, which manages the acoustic environment.
Amadeus, a manufacturer of sound reinforcement systems and custom studio monitors, launched its new live sound spatialization hardware processor system, Holophonix, at Prolight + Sound 2018. The processor is based on IRCAM’s Spat engine and, similarly to IRCAM’s Panoramix software, supports a broad range of spatialization options. Holophonix’s algorithms include 2D or 3D VBAP (vector-base amplitude panning), 2D DBAP (distance-based amplitude panning), 2D and 3D Higher-Order Ambisonics, and wave field synthesis. The Holophonix system, which will reportedly start shipping before year’s end, can be custom-configured for MADI, RAVENNA, or AES67 connectivity and works with show control software and DAWs supporting OSC.
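Of the algorithms Holophonix lists, VBAP is the simplest to illustrate: the desired source direction is expressed as a weighted sum of the unit vectors pointing at the nearest loudspeakers, and those weights, once power-normalized, become the channel gains. The two-speaker 2D sketch below shows the idea; the function name and the stereo-pair setup are illustrative, not Holophonix’s implementation.

```python
import math

def vbap_2d(source_deg, spk1_deg, spk2_deg):
    """2-D VBAP gains for a source panned between two loudspeakers.

    Solves g1*l1 + g2*l2 = s for the gain pair, where l1, l2, and s are
    unit vectors toward the speakers and the source, then power-normalizes.
    """
    def unit(deg):
        r = math.radians(deg)
        return (math.cos(r), math.sin(r))

    sx, sy = unit(source_deg)
    l1x, l1y = unit(spk1_deg)
    l2x, l2y = unit(spk2_deg)
    # Cramer's rule on the 2x2 system [l1 l2] * g = s.
    det = l1x * l2y - l2x * l1y
    g1 = (sx * l2y - l2x * sy) / det
    g2 = (l1x * sy - sx * l1y) / det
    # Normalize so g1^2 + g2^2 = 1 (constant perceived loudness).
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm
```

For a source dead center between speakers at plus and minus 30 degrees, both gains come out equal; as the source moves toward one speaker, its gain approaches 1 and the other falls to 0, which is why VBAP localization collapses to the nearest speaker at the edges of each pair.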
The Astro Spatial Audio system, manufactured in the Netherlands, is also object-based. At Prolight + Sound 2018, the company demonstrated its SARA II Premium Rendering Engine, which is MADI- or Dante-enabled and integrated with TTA Stagetracker II performer tracking and QLab show control software. The processor typically supports 32 audio objects, expandable to 64. Current installations include several planetariums and an art gallery.
“We are now at a tipping point where everyone is understanding and promoting the added value of immersive audio for fixed installs,” said Astro Spatial Audio director Bjorn Van Munster. “Immersive audio seems to have been the missing creative link, now bringing light, video, and audio onto an equal creative level, and offering producers and designers an entire new range of possibilities to convey messages to, attract, and blow away the audience.”