Everyday Immersion

Quick Bio

Bob Bonniol, MODE Studios (Image credit: Bob Bonniol)

Name: Bob Bonniol

Position: Partner/Chief Creative Officer

Company: MODE Studios

Overtime: Bonniol has an active practice as a coach and advisor to creative leaders. He is also an avid sailor who cannot be separated from the ocean for long, and he loves interesting, deep questions of all kinds, which feeds his obsession with learning new things.

SCN: Tell us about why you launched MODE Studios.

BOB BONNIOL: Colleen, my wife and partner, and I were working in other parts of the entertainment industry. Colleen was working as a best boy on feature films, and I was a lighting supervisor for Disney Theatrical. We both decided we were interested in learning about CGI—this was back in 1997, and films like Jurassic Park had blown everybody’s minds.

So we took a 14-week class in Softimage 3D, which was the software used for visual effects on Jurassic Park. We finished the course and decided to form the company, at that point with the idea of making animated children’s television series. We were pursuing that and had made three pilots when we got a call from a friend—the late, great lighting designer Rick Belzer [whose credits include Broadway shows such as Cats and touring exhibits such as Titanic: The Artifact Exhibition]—who was designing a tribute tour for the Tejana singer Selena. He said that they had a big video system, lots of projectors and playback, but there was no content … no design. And he said he heard we were now “video designers.” Ha!

We were willing to try. This was in 1998, so the show was using 5,000-lumen LCD projectors, Doremi playback decks, all controlled with Dataton TRAX. We went ahead and figured that out. From a systems point of view, it was familiar territory, given our combined knowledge of lighting systems. And we now knew how to make content, so we started creating the scenic song backdrops. That show led to another, and another. Eventually, by around 2005, we were creating big, video-driven shows for concert tours, opera, theater, and broadcast. Around 2010, we also started getting heavily involved in designing media-driven architectural projects. At this point, most of our work is with large-scale entertainment, brand activations, and interactive architectural projects.

SCN: What is the biggest difference in terms of the audience today versus the audience of the past?

BB: The biggest difference between the expectations of the contemporary audience and those of previous ones is the idea of having no barriers, either of form or of experience. It used to be that you had a stage and a theater full of seats, or a thing to be observed and a separate observer. Now, those walls are removed.

Today’s audience wants very much to be in the show. They want to be part of the picture, and they also want your picture to be part of theirs. Everybody wants to create impressions of their life on a social canvas. So now we are providing backdrops for their lives as much as we are creating for our own purposes.

Also, today’s audience expects to have agency. They want to affect their environments, their experience. To meet these expectations, you need to create immersion and interaction.

SCN: Tell us your definition of merged reality.

BB: The idea of merged reality [MR] goes all the way back to Robert Edmond Jones saying in 1919 that in the combination of the live actor and the motion picture, a whole new art form existed. When we started to combine cinema with on-stage actions, we were dipping our toe into merged realities. We had what was real, and we also had this awesome “mind’s eye” that could impart dreams, emotions, or just more information.

Today we are witnessing technologies like augmented reality [AR] and virtual reality [VR] creating amazing new channels for experience. Now take these things, like we did cinema, and combine them with the live context. Boom. Now we are really delving into merged realities, and they become powerfully compelling. Artists, brand creatives, filmmakers, and architects are now starting to experiment with this.

SCN: Where does merged reality have the greatest potential in the professional audiovisual world?

BB: It is important to understand how ubiquitous it will be. The biggest thing that will drive the development of merged reality will be when augmented reality reduces its emphasis on mobile devices and instead moves into wearables. I think in 24 to 36 months we will see stylish, subtle glasses that will add the AR layer directly to your line of sight. In 10 years there will be an augmented reality layer alongside and on everything: public spaces, retail spaces, entertainment productions, installations, museums, everything.

Just consider what it will take to play back and coordinate all that content alongside what is happening in the “real” world: a whole new layer of show control, playback, and routing, and ultimately AI-powered personalization and articulation of programming.
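To make that coordination layer a little more concrete, here is a minimal sketch (in Python) of the general idea: firing stage cues and AR cues from one shared timeline so the physical and virtual layers stay in sync. The cue names and the dispatch stub are hypothetical; a real system would speak protocols like OSC or sACN to stage gear and push overlay commands to AR devices over a network.

```python
# Minimal sketch: one timeline driving both the physical show layer and
# the AR overlay layer. Cue names and dispatch targets are hypothetical.
from dataclasses import dataclass

@dataclass
class Cue:
    time_s: float   # seconds from showtime
    layer: str      # "stage" (lighting/video) or "ar" (wearable overlay)
    action: str

show = sorted([
    Cue(0.0,  "stage", "house_to_half"),
    Cue(0.0,  "ar",    "load_overlay:intro"),
    Cue(12.5, "stage", "video_roll:opening"),
    Cue(12.5, "ar",    "play_overlay:intro"),
], key=lambda c: c.time_s)

def dispatch(cue: Cue) -> None:
    """Stand-in for routing: a real controller would send OSC/sACN to
    stage equipment and a network message to each AR device."""
    print(f"[{cue.time_s:6.1f}s] {cue.layer:5s} -> {cue.action}")

for cue in show:
    dispatch(cue)
```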

SCN: How do you integrate merged reality into things like brand activations?

BB: From a technical standpoint, MR is great at making experiences feel and appear bigger and deeper. AR can extend what lighting can do, what scenery can do, what the sense of space is. The best merged reality experiences in brand activations are going to lean into making those activations really personal. With several simple queries to a user, individual experiences can be refined and tailored through this potential for AR to add to the live experience, fill gaps, and extend things.

SCN: How can clients use big data to measure the effectiveness of their activations, especially those using merged reality?

BB: Because we can measure things like where people are looking and what they engage with more deeply when we use merged reality toolsets, we can start to give brands a view into the same metrics they enjoy in digital marketing. Merged reality is empowered by tools like spatial scanning with LIDAR, depth sensing for interactivity, and participant interconnection. All of this serves to measure the experience at the same time it empowers it.

I am fond of saying that when you can measure things, you can move them. Brands will start using this ability to measure to iterate and improve experiences in real time, determine what is really compelling to their audience, and derive direct connections between experience and transaction.
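As a rough illustration of what that measurement can look like in code, here is a minimal Python sketch that rolls hypothetical gaze-dwell events up into per-zone engagement metrics. The event fields, zone names, and sample values are invented for illustration; in practice the events would come from lidar and depth-sensor tracking.

```python
# Minimal sketch: aggregate sensed attention into marketing-style
# metrics (total dwell time and unique visitors per zone).
from collections import defaultdict

# (participant_id, zone, dwell_seconds): hypothetical sample events
events = [
    ("p1", "product_wall", 12.0),
    ("p2", "product_wall", 4.5),
    ("p1", "ar_portal", 31.0),
]

def engagement_by_zone(events):
    """Roll raw dwell events up into per-zone engagement metrics."""
    dwell = defaultdict(float)
    visitors = defaultdict(set)
    for pid, zone, secs in events:
        dwell[zone] += secs
        visitors[zone].add(pid)
    return {zone: {"dwell_s": dwell[zone], "unique": len(visitors[zone])}
            for zone in dwell}

print(engagement_by_zone(events))
# {'product_wall': {'dwell_s': 16.5, 'unique': 2}, 'ar_portal': ...}
```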

SCN: How do you use artificial intelligence for content management?

BB: We are currently using AI in two ways. We have built proprietary systems that let machine learning help us metatag content. This allows us to find useful content very quickly from a big brand client’s own deep reserves, organizing it seamlessly for us to then deploy.
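MODE’s tagging system is proprietary, so the sketch below only illustrates the general shape of the idea: run each asset through a tagger, then build an inverted index so content can be retrieved instantly by tag. The tagger here is a deliberately naive stand-in that derives tags from filenames; a production pipeline would substitute real image and video classifiers.

```python
# Minimal sketch of ML-assisted metatagging and retrieval.
from dataclasses import dataclass, field

@dataclass
class Asset:
    path: str
    tags: set = field(default_factory=set)

def tag_asset(asset: Asset) -> Asset:
    """Naive stand-in for an ML tagger: derive tags from the filename.
    A real system would run image/video classifiers instead."""
    stem = asset.path.rsplit("/", 1)[-1].split(".")[0]
    asset.tags = {t.lower() for t in stem.split("_") if t}
    return asset

def build_index(assets):
    """Inverted index (tag -> assets) for near-instant lookup."""
    index = {}
    for a in assets:
        for t in a.tags:
            index.setdefault(t, []).append(a)
    return index

library = [tag_asset(Asset(p)) for p in
           ["brand/launch_night_stage.mov", "brand/logo_loop_blue.mov"]]
index = build_index(library)
print([a.path for a in index.get("stage", [])])
# ['brand/launch_night_stage.mov']
```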

The other way we are using AI is in content sequencing. In 2010, we created software called the MODE Matrix for an installation at Microsoft that would sequence and apply effects to content in real time based on interactive inputs. That was the seed of development that later manifested as our Interactive Content Engine, which we recently deployed at GM World to create a steady stream of new sequences for the 17 screens in that space. We are using this AI-driven technology to offer clients the ability to ensure that installations are constantly evolving and changing autonomously.
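The Interactive Content Engine itself is not public, but the pattern described here, sequencing clips across many screens from live inputs while avoiding repetition, can be sketched roughly as follows. The pool names, clip IDs, and the crowd-activity input are all hypothetical.

```python
# Minimal sketch: choose the next clip for each screen from pools keyed
# by a live input, skipping recently played clips so the installation
# keeps evolving instead of looping.
import random

POOLS = {  # hypothetical clip pools keyed by sensed crowd activity
    "calm":   ["ambient_01", "ambient_02", "ambient_03"],
    "active": ["kinetic_01", "kinetic_02", "kinetic_03"],
}

def next_sequence(activity, screens, recent):
    """Assign one clip per screen, preferring clips not recently played."""
    pool = [c for c in POOLS[activity] if c not in recent] or POOLS[activity]
    return [random.choice(pool) for _ in range(screens)]

recent = set()
sequence = next_sequence("active", screens=17, recent=recent)
recent.update(sequence)
print(sequence[:3])   # e.g. ['kinetic_02', 'kinetic_01', 'kinetic_02']
```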

SCN: What is the largest challenge you face when working with AV integration firms and how do you solve it?

BB: We treasure our relationships with integrators. We place a large value on these partners being able to stretch the idea of “the box” and be comfortable with innovation. Often we are taking very familiar systems designs and adding or embedding very new layers onto them. It requires patience, good humor, flexibility, and more than a little courage.

We promote this kind of atmosphere by making sure our integration partners clearly understand what success looks like before we begin, and then involving them and communicating with them abundantly as decisions are made and plans formulated. We find that when we create the alignment before we jump off the proverbial cliff, the experience can be much saner—and even fun.

SCN: Anything you’d like to add?

BB: The three big takeaways from this lovely chat are AR, AI, and scanning/sensing. For manufacturers, this means a huge new market for devices that utilize these things to create immersive environments everywhere. For service providers, it means grabbing these tools and meeting the expectations of clients who are going to demand all of this. For creatives, it means that we have incredible new tools to create really deep experiences with audiences. This is a big responsibility—it’s a very powerful new channel. Despite its overt technicality, it should be used to create deeper human connection.

Megan A. Dutta

Megan A. Dutta is a pro AV industry journalist, and the former content director for Systems Contractor News (SCN) and Digital Signage Magazine, both Future US publications. Dutta previously served as the marketing communications manager at Peerless-AV, where she led the company’s marketing and communications department. Dutta is the recipient of AVIXA's 2017 Young AV Professional Award and Women in Consumer Technology's 2018 Woman to Watch Award. Dutta is co-founder of Women of Digital Signage, an organization designed to promote networking, mentoring, and personal growth.