Sphere of Influence

In Founding Mersive, Christopher Jaynes Sought to Connect Displays to the Cloud

Quick Bio

NAME: Christopher Jaynes
TITLE: CTO and Co-Founder
COMPANY: Mersive
OVERTIME: Prior to his work at Mersive, Jaynes founded the Metaverse Lab, dedicated to addressing problems in multimedia technologies and to research related to video surveillance, human-computer interaction, and display technologies. In 2000, he helped found the University of Kentucky Center for Visualization and Virtual Environments, where he studied problems related to virtual reality and novel display systems.

SCN: At what age did you first develop an interest in technology?

Christopher Jaynes: At age 13, I was exposed to my first serious computer, the IBM PC XT. Originally, I was mostly interested in the science of technology more than the technology itself. I was more interested in how the computer worked than programming it. I’d spend hours at the local library poring over magazines like PC Magazine and any book I could find related to the theory of computing.

Christopher Jaynes hiking Mount Rainier.

It wasn’t long before I wrote my first few successful programs, including a security/login management system that was eventually deployed at Gowen Air Force Base to protect its new desktop machines, and a game called “Adventure World” that made its way onto the BBS circuit. I was hooked.

Then in high school, I worked at Hewlett-Packard as a developer and became interested in the emerging field of artificial intelligence. That led to my focus on computer vision at the University of Utah, and I later did my Ph.D. work in computer vision at the University of Massachusetts, Amherst.

SCN: When you co-founded Mersive in 2006, what were your goals for the company?

CJ: As an undergraduate at Utah, I was exposed to amazing display technology. Evans & Sutherland was right next door, and virtual reality and ubiquitous computing seemed like they would change the world. But in 2004, high-end displays were still limited to a few corporate centers that could afford the hardware, support, and space to deploy them. While I was a professor, I began to ask, why is that? Why can’t kids in a high school have access to a beautiful high-resolution display wall? Why haven’t our conference rooms fundamentally changed since the 1990s? The goal in founding Mersive was to bring professional-quality displays to the broader commercial market. We decided we could do this by bringing software to the hardware-centric AV market, both for managing displays and for providing users with collaborative access.

SCN: Solstice has been a blockbuster product for Mersive. What did the development process look like? How did the software evolve?

CJ: In 2007, I was standing in the conference room of a customer who had just deployed a 27-million-pixel video wall made up of commodity projectors and powered by our first software product, Sol. He was lamenting the fact that in order to connect several laptops and share their images into his new “pixel landscape,” he’d have to spend an order of magnitude more money on hardware switchers, scalers, and wall controllers.

On the plane ride home, I began to write a proposal to the National Science Foundation that predicted the end of the video cable by 2011 (I was off by a few years). The proposal centered on the need for a new type of software (versus hardware) to allow users to transport media seamlessly from their devices to displays. I met with folks at the NSF and pointed out that the huge number of displays in our conference rooms, classrooms, airports, and even sports bars was the largest untapped computational infrastructure in the world.

Christopher Jaynes on the North Face of Colorado’s Mount Evans.

We began building the architecture and pixel transport protocols that would ultimately become Solstice. Once the core science and R&D work were complete, we began gathering user requirements, which defined the product features needed to deliver the technology to the market. We recruited a stellar team of software developers and officially launched Solstice into the market last March.

SCN: Solstice has been installed in some very high-profile settings, most of which I know you can’t talk about. Was there a single install that you absolutely couldn’t believe you were a part of?

CJ: Yes, one of our key investors is In-Q-Tel, the VC arm of the intelligence community, so you are right that some of the coolest, most creative uses of Solstice are in facilities that we can’t detail. However, I get the biggest kick out of seeing Solstice have an impact outside of the more elite locations. For example, Hotel Zetta in San Francisco has it set up in its showcase business center, and the Wharton School of Management at the University of Pennsylvania is deploying Solstice not only in student study rooms, but as part of its broader digital signage solution. Imagine sitting down between classes with a cup of coffee next to a flat panel that’s being used to display schedules and upcoming events. By opening their laptops and clicking connect, students can transform the display into a collaborative work surface for ad hoc meetings, or even just to share spring break photos. It’s these use cases that most closely align with my original vision for Solstice, which was to break the paradigm that displays are tied to a single source, and to show they can instead be a shared, managed infrastructure for wireless visual collaboration.

SCN: It’s hard not to feel like you’re living in the future when you use Solstice. But Solstice exists in the present. Where do you see ‘media sharing’ heading in the years to come?

CJ: I am really excited about the media sharing and wireless collaboration space in general. Clearly, this is an area ripe for rapid change and additional advancements over the next 24 months and beyond.

I think our customers have realized that by picking Solstice, they are not simply buying a product for today; they are getting a software solution that will update and scale with their business needs as the space continues to take shape. For example, how these advancements in visual collaboration translate into a wide-area network experience, and how the goals we now associate with telepresence get translated into a sense of shared situational awareness centered on the visual data being discussed... these are some of the things very much on my mind as I work on the Solstice roadmap. Based on my work with the VESA standards and the value common protocols have for technological advancement, I also think that several open protocols will converge, including Intel’s WiDi, Miracast, DLNA, and (maybe) AirPlay.

When thinking about the longer-term trend, it’s important to realize that today each of us is already surrounded by a rich set of visual data that lives in the cloud (videos, images, spreadsheets, 3D models, documents), a personal media-sphere that is important to your work, play, and creativity. The ultimate goal of media sharing and collaboration technologies should be to allow each person to leverage accessible shared displays to draw on that media-sphere seamlessly as they go through their day: pull any of these visual elements from the cloud or a local device and, in a simple way, bring it into a conversation. This is true for everything from a simple PowerPoint presentation to several people analyzing, discussing, and editing dozens of different media sources. Collaborative meetings will involve the shared intersection of the participants’ individual media-spheres so that they can quickly gain insight, collaborate, and make decisions.

Chuck Ansbacher is the managing editor of SCN.