Beyond Pixels: Real-Time Tech From RealMotion

[Photo: Dalkian with a rack of RealMotion's commercial products.]

Name: Sevan Dalkian
Position: CTO
Company: RealMotion
Overtime: As a child, Dalkian enjoyed dissecting and reassembling his gadgets, which led to his interest in gaming and real-time technologies.

SCN: When did you realize you were destined for a career in technology?
Sevan Dalkian: Since I was a kid, I was always interested in science, how things are built, and how they function. I would often disassemble devices in the house and try to reconstruct them afterwards, sometimes with more success than others. My parents were very patient, and I’m glad they pushed me to always learn and keep asking questions. But the breakthrough happened when they bought me my first computer for my 14th birthday. I immediately fell in love with this device that appeared to have infinite possibilities; I literally became obsessed with it. I can’t remember how many times I broke it, and then fixed it. For the longest time, I was certain that the computer store near my house would close down because of my countless visits for support and warranty service.

Gaming pushed me to tweak and optimize by overclocking the hardware to gain the maximum performance available out of the box. At the time, we had high refresh rate screens, and it was important that the system could keep up so you could have an edge over the buddies playing against you. At the same time, I also got very interested in graphics software like Photoshop and Bryce3D, the latter a raytracing tool that produced very realistic renderings of landscapes and skies. I loved the idea of being able to set a few variables and adjust settings to create realistic digital worlds similar to what I had imagined. The era of real-time rendering on programmable graphics processors came shortly afterwards, while I was studying computer engineering, and that is when things got very serious. I had found my calling, and all of my energy went into it. As a developer, I could now produce at real-time rates what had previously required offline rendering. The challenge was to optimize the code so that it “looked” almost the same as the offline render, but remained interactive at 60 or more images per second.

I never actually consciously decided to make a career in technology. It just happened naturally as the years passed. When I was young, I thought nothing was impossible and had the support of my friends and family, which really helped me work harder and become really good at what I did. I can say things became clear when I excelled in my computer graphics classes at university. That’s when I realized I had a contagious passion for real-time graphics and rendering.


SCN: After you received your bachelor’s degree in computer engineering from the École Polytechnique de Montréal, you dove directly into a next-generation 3D graphics software developer/engineer role at EA. At that time, what was most compelling about video games for you?
SD: It’s funny because, at that time, I didn’t want to work for a big corporation. I wanted to be on my own and write 3D software that would serve a purpose in the industry. Then I realized that gaining experience in the industry, working with highly talented individuals on big projects, could be a good strategy to get started.

I received a job offer from EA while I was still in school. A few of my friends were already working there, so I decided to give it a try, and I’m glad I did. Some people don’t realize it, but video games are extremely complex, and optimized systems require an enormous amount of work. Almost everything you can possibly learn in computer science or engineering can be put to use in game engines: high-performance parallel computing, real-time physics simulations, realistic lighting equations, multi-threading, timing considerations, AI algorithms, and networking. Apparently, the latest games have a level of complexity similar to sending a rocket into space!

What I’m trying to say is that I always liked challenges, and what was most compelling about video games was the challenge and the learning opportunity inherent to the industry. It also helped that I was part of the rendering team at work (initially composed of only two people), responsible for providing the necessary tools for artists and optimizing state-of-the-art visuals for real-time performance.

SCN: Two years later, you made the transition into the interactive multimedia world as one of the founding members of Float4. How did your expertise in visual effects influence Float4's earliest projects?
SD: The transition to Float4 came very naturally. The project I was working on at EA was called “Army of Two.” During a transition period after the game shipped, I was reassigned to other projects that were, from a technical perspective, not as interesting to me. I wasn’t challenged anymore, and at the same time, my university friends came up with a brilliant idea: using gaming and real-time technology to create large-scale, interactive multimedia experiences that didn’t use any controllers but instead relied on your own body movements. I became very excited by this idea and decided to leave my position at EA and partner with my Float4 friends. I told myself that I was still young, that the timing was right, and that this idea would push my skills to a whole new level. This idea of changing the way we see digital and its different uses gave us the necessary passion and energy to jump-start the company. I felt that my background in visual effects would be very effective in this new industry and would make a real difference in our new company.

As a team, we were lucky to have complementary programming talents. My contribution was real-time visual effects, and we were quickly able to produce organic, lifelike working prototype visuals for various contexts. Our first contract was with Cirque du Soleil for the Aqua Foundation, where we were challenged to create a real-time, fully interactive ocean simulation spanning 10 projection screens driven by five custom-designed PCs. What we didn’t know was that the simulated wave had to propagate through the whole cylindrical screen and across the five servers. Others would have backed away from this challenge because it appeared impossible, but we were the ones who said, “Yeah sure, we’ll do it!”
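
Dalkian doesn’t describe how the team ultimately solved the cross-server propagation, but a common pattern for this kind of problem is to make the simulation a deterministic function of a shared clock, so each machine can independently evaluate just the arc of the cylinder its own screens cover. Here is a minimal sketch of that idea in Python; all names and constants are illustrative assumptions, not Float4’s actual implementation:

```python
import math
import time

NUM_SERVERS = 5           # five custom PCs, per the interview
SCREENS_PER_SERVER = 2    # 10 projection screens total
WAVE_SPEED = 0.6          # radians of arc per second (assumed)
WAVELENGTH = math.pi / 4  # angular wavelength (assumed)

def wave_height(angle: float, t: float) -> float:
    """Wave height at an angular position on the cylinder at time t.
    Deterministic in (angle, t), so adjacent servers compute identical
    values where their screens meet and the seams stay invisible."""
    phase = (angle - WAVE_SPEED * t) * (2 * math.pi / WAVELENGTH)
    return 0.5 * (1.0 + math.sin(phase))

def render_server(server_id: int, t: float) -> list[float]:
    """Each server samples only the arc its own screens cover."""
    arc = 2 * math.pi / NUM_SERVERS
    start = server_id * arc
    return [wave_height(start + (s + 0.5) * arc / SCREENS_PER_SERVER, t)
            for s in range(SCREENS_PER_SERVER)]

# All servers read a synchronized timestamp (NTP or similar in practice).
now = time.time()
for sid in range(NUM_SERVERS):
    print(f"server {sid}: {[round(h, 3) for h in render_server(sid, now)]}")
```

The appeal of this approach is that no pixel data ever crosses the network; the servers only need to agree on the time.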

Eventually, and despite all the technical challenges, we persevered and delivered a successful experience. I think this first project showed us that leveraging our experience and the power of real-time would prove to be a very powerful strategy in the long term. Thereafter, most, if not all, of our projects were highly interactive and powered by various real-time algorithms and simulations. We understood that the main purpose of the digital projects we were exposed to was to allow clients to connect with their audience. Interactivity, in its essence, is very effective at that. By pairing it with impactful and sophisticated content, we made sure that this purpose was well achieved throughout all of our projects.

SCN: Today you are CTO of Float4 and RealMotion, which this year released your content generation, editing, and monitoring platform for wide use. What was it that compelled you to share your software tools with other experience designers and AV integrators?
SD: Over the past decade, we’ve had the chance to work and collaborate with amazing AV partners, design firms, and display manufacturers, which allowed us to break into the wonderful world of digital media, along with its immense possibilities. We quickly saw the potential of our platform beyond the scope of our own projects, and were able to sell a few units back then. We are extremely grateful to those clients who purchased preliminary versions of the platform; they demonstrated to us not only that it had market potential, but also that releasing a new product is a different game. As RealMotion began powering more than 40 projects around the globe, we kept noticing strong interest from numerous AV and design firms. That is when we decided to take the leap of faith: apply for funding and productize RealMotion.

RealMotion then evolved; it became more than a product. It became an idea, a philosophy, a new way of thinking. We wanted to change the world of digital media by providing reliable real-time technology for all, and this later became our vision statement. We had something very cool in our hands. The node-based UI system was extremely powerful and became the foundation of the platform, as it could easily be used and quickly modified by less technical users. The philosophy became fully data-driven. Our commercial release now includes features such as Timeline, Video Playback, Live Pre-viz, Live Feed, scripting, HTML5, Network Audio, DMX, and so on, all built on the capabilities of the real-time approach. It’s like asking an Xbox to play DVD movies versus asking a DVD player to play games.
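
RealMotion’s internals aren’t public, so purely as a generic illustration of the node-based, data-driven pattern Dalkian describes, here is a toy graph in Python where content is a network of small processing nodes that a less technical user could rewire rather than reprogram. All node names here are hypothetical:

```python
from typing import Callable

class Node:
    """One processing node in a data-driven graph (generic sketch,
    not RealMotion's actual architecture)."""
    def __init__(self, name: str, fn: Callable[..., float], *inputs: "Node"):
        self.name, self.fn, self.inputs = name, fn, inputs

    def evaluate(self) -> float:
        # Pull-based evaluation: resolve upstream nodes first, then apply fn.
        return self.fn(*(n.evaluate() for n in self.inputs))

# Wiring a tiny graph: a live sensor value scales a brightness parameter.
sensor = Node("sensor", lambda: 0.8)      # stand-in for a live data feed
gain = Node("gain", lambda: 1.5)          # a user-tweakable parameter node
brightness = Node("brightness", lambda s, g: min(s * g, 1.0), sensor, gain)

print(brightness.evaluate())  # 1.0; changing behavior means rewiring, not recoding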

We received very positive, high-quality feedback from major industry players. This allowed us to focus on important features such as the video playback engine, the timeline, and the remote deployment tools, which, combined with the real-time and node-based approach, provided creative freedom within a professional and robust architecture.

We are extremely happy to have commercialized this powerful platform and made it available to the AV industry and creative minds. We hope it will expand the way we see digital integrations applied in public spaces.

SCN: What is your vision for the integration of digital experiences in physical spaces as more real-time visual effects tools become widely available?
SD: We want to disrupt the world of digital media by providing reliable real-time technology for all.

Within the last few years, our industry has become increasingly innovative. This acceleration is partly driven by the exponential growth of graphics computing technology (GPUs) in capability, speed, and accessibility. Real-time is the ongoing disruption, gaining traction versus traditional video playback. The spread of high-resolution displays at more affordable prices translates into increasingly complex content, and with it, rising costs for producing video assets. That cost pressure has fueled real-time development and its growing popularity, shifting industry paradigms toward generative and dynamic content. Nowadays, you can create impressive experiences that are 100 percent generated in real time, without relying on conventional video playback. The obvious advantages are quick iteration and the ability to adjust content dynamically using sensors or live database feeds.
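
To make that last point concrete, here is a purely hypothetical sketch: a generative scene can simply re-sample its driving data every frame, so the output never repeats and reacts immediately to whatever feed it is attached to.

```python
import math
import random
import time

def read_live_feed() -> float:
    """Stand-in for a real sensor or database feed (assumption)."""
    return random.uniform(0.0, 1.0)

def frame_color(t: float, feed: float) -> tuple[int, int, int]:
    """Generative color: a slow oscillation modulated by the live value."""
    base = 0.5 * (1.0 + math.sin(t))           # 0..1 oscillation over time
    return (int(255 * base * feed), 64, int(255 * (1.0 - base) * feed))

for frame in range(5):                          # a few frames instead of 60 fps
    print(f"frame {frame}: rgb{frame_color(time.time(), read_live_feed())}")
```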

Imagine a world where you don’t need to wait hours for your rendering to finish but instead see the changes live within the physical space. Imagine generative and dynamic content that isn’t pre-rendered but is live and constantly changing, responding to data feeds and interactive sensors of all kinds. Imagine high-resolution displays driven by fewer, more powerful servers. Imagine a world where the digital and the physical are seamlessly integrated. It’s not only about pixels anymore; it goes way beyond that.

Kirsten Nelson is a freelance content producer who translates the expertise and passion of technologists into the vernacular of an audience curious about their creations. Nelson has written about audio and video technology in all its permutations for almost 20 years; she was the editor of SCN for 17 years. Her experience in the commercial AV and acoustics design and integration market has also led her to develop presentation programs and events for AVIXA and SCN, deliver keynote speeches, and moderate and participate in panel discussions. In addition to technology, she also writes about motorcycles—she is a MotoGP super fan.