Cloud Power: Predicting the Future


Selecting the best environment—cloud, on-prem, or a combination of the two—is a major fork in the road for any Pro AV facility. Finding the best financial and operational fit, however, is more often than not a complicated mix of art and science. Let's outline the most important, top-level cost and performance considerations for facility owners and consultants to keep in mind when searching for the wisest upgrade path.

[Cloud-Based Production: Opportunity or Threat for Integrators?]

Predicting the costs of moving an existing plant to the public cloud, and comparing them against an on-premises physical infrastructure, is a complex topic. It is especially difficult when you consider how hard it is to determine the true cost of operation in a completely physical plant. Today, when we do an onsite installation, several costs normally go unanalyzed, including local power consumption, heat dissipation and its effect on HVAC costs, the square footage occupied by equipment, and whether that space could be repurposed for another use.
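To make that concrete, here is a minimal sketch of how those normally invisible line items can be rough-costed per month. Every figure is an assumption for illustration, not a real rate or measurement:

```python
# Rough monthly estimate of the "hidden" on-prem costs described above.
# All figures are illustrative assumptions, not vendor or survey data.

RACK_POWER_KW = 8.0      # assumed continuous draw of the equipment racks
POWER_RATE = 0.15        # assumed $ per kWh
COOLING_OVERHEAD = 0.5   # assumed extra 50% of energy spent removing the heat (HVAC)
FLOOR_SPACE_SQFT = 200   # assumed square footage occupied by racks and clearances
SPACE_RATE = 3.00        # assumed $ per sq ft per month if that space could be repurposed

HOURS_PER_MONTH = 730

power_cost = RACK_POWER_KW * HOURS_PER_MONTH * POWER_RATE
cooling_cost = power_cost * COOLING_OVERHEAD
space_cost = FLOOR_SPACE_SQFT * SPACE_RATE

print(f"Power:  ${power_cost:,.0f}/mo")
print(f"HVAC:   ${cooling_cost:,.0f}/mo")
print(f"Space:  ${space_cost:,.0f}/mo")
print(f"Total hidden on-prem operating cost: ${power_cost + cooling_cost + space_cost:,.0f}/mo")
```

Even with placeholder numbers, the exercise shows these line items add up to real money that rarely appears on a project budget.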

The easiest place to start comparing costs is always a greenfield situation, where you can watch costs emerge from a blank slate. But as we all know, greenfield installations are rare. Usually, we are retrofitting an existing facility, either for greater capability or for a technology uplift because the current equipment is outdated.

In-Cloud Versus On-Prem

When is the best time to analyze the financial ramifications of in-cloud versus on-prem? As a systems integrator, remember to compare the costs of the different methods and locations of operating in the cloud for your client. The cost structure may differ between Google Cloud, AWS, and Azure, or even the client's own private cloud.


All these things must be considered, and the matrix becomes complex. Additionally, you must consider the client's existing technical infrastructure. What is their refresh cycle? How often do they expect to refit their facilities with new equipment and technology? That becomes part of the hard cost calculation as well as the soft cost calculation, since the cloud is so flexible when upgrading technology.

For example, say your client purchased a 2 M/E video switcher and their needs have changed, so they can no longer make do with it. On-premises, that means replacing a large, expensive piece of equipment. In the cloud, it is much more likely a software option or a change to the software package of an item you're paying for by the month. The result is that you are no longer tied to capital depreciation for capability changes. This is one of the more enticing features of software-defined infrastructure in the public cloud.

Looking at the software cost considerations in the cloud, I may change models or even manufacturers without penalty. Hardware costs like compute and storage are amortized over a long period of time because they sit in a multi-tenant, shared environment. However, if I am on-prem with software-defined infrastructure and need greater capability than the current hardware provides, I will still incur additional hardware cost. This all adds up to a very complex calculation. And again, as with many things in our industry, sometimes it's as much art as it is science.
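As an illustration only, here is a simple sketch of that comparison over one refresh cycle. Every figure is an assumed placeholder, not a quote; a real analysis would add egress, support contracts, and labor:

```python
# Comparing an amortized on-prem purchase to a cloud subscription over one
# refresh cycle. All numbers are assumptions for illustration only.

ONPREM_CAPEX = 150_000        # assumed purchase price of the switcher/hardware
ONPREM_MONTHLY_OPEX = 1_500   # assumed power, HVAC, space, and maintenance per month
REFRESH_YEARS = 5             # assumed refresh cycle / amortization period

CLOUD_MONTHLY = 3_500         # assumed subscription plus compute/storage per month

months = REFRESH_YEARS * 12
onprem_total = ONPREM_CAPEX + ONPREM_MONTHLY_OPEX * months
cloud_total = CLOUD_MONTHLY * months

print(f"On-prem over {REFRESH_YEARS} years: ${onprem_total:,}")
print(f"Cloud over {REFRESH_YEARS} years:   ${cloud_total:,}")

# The picture shifts quickly if a capability change mid-cycle forces a
# hardware replacement on-prem but only a plan change in the cloud.
MIDCYCLE_UPGRADE_CAPEX = 80_000   # assumed cost of replacing the switcher early
MIDCYCLE_CLOUD_DELTA = 1_000      # assumed monthly uplift for the larger software tier

onprem_with_change = onprem_total + MIDCYCLE_UPGRADE_CAPEX
cloud_with_change = cloud_total + MIDCYCLE_CLOUD_DELTA * (months // 2)
print(f"On-prem with a mid-cycle capability change: ${onprem_with_change:,}")
print(f"Cloud with a mid-cycle capability change:   ${cloud_with_change:,}")
```

The point of the sketch is not the totals themselves but how sensitive they are to the cost of change, which is exactly where software-defined and cloud infrastructure earn their keep.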

When you do this kind of cost comparison, you must also include a benefits analysis, because cost alone doesn't always give you the full picture. Benefits can often outweigh costs. For example, you may have a client that needs to start in the most cost-effective configuration possible but believes they will have growing needs over a short period of time. That steers the decision toward software-defined and, if possible, cloud-supported infrastructure, because the cost of change is dramatically lower than in an on-premises installation with physical hardware. Some of that can be quantified by building comparison cost tables.

[Cloud Power: Decisions, Decisions]

The same cost comparisons come into play for both on-prem and in-cloud: staff size, design time, project management, and installation/deployment. And, of course, even cloud technologies include a physically grounded component. There's the cost of operating the link between your people and the cloud in both directions. Where you're operating from, and how you're going to amortize those physical costs, must also be factored in.

Hybrid Considerations

Lastly, there is the appropriateness of the technology. Even though you might want 100% of your infrastructure in the cloud, it may not be possible. You need to look at latency, which is governed by how fast electrons and photons move across the various forms of transport. When that is a major point of concern, it will often lead to hybrid systems, where the cost calculations become even more complex.
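For a rough sense of the physics, here is a small sketch using the common rule of thumb of about 5 microseconds per kilometer for light in fiber. The distances are illustrative assumptions, and real routes are rarely straight lines:

```python
# One-way fiber propagation delay for a few assumed distances, before adding
# any encode, network hop, or processing time.

FIBER_DELAY_US_PER_KM = 5.0   # light in fiber travels roughly 200,000 km/s

for label, km in [("Campus data center", 2),
                  ("Metro cloud region", 80),
                  ("Cross-country region", 4_000)]:
    one_way_ms = km * FIBER_DELAY_US_PER_KM / 1000
    round_trip_ms = 2 * one_way_ms
    print(f"{label:22s} {km:>6,} km  one-way ~{one_way_ms:6.2f} ms  round trip ~{round_trip_ms:6.2f} ms")
```

Propagation alone is small for a nearby region, but it is only the floor; everything else in the chain stacks on top of it.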

One clear example of the role latency plays is IFB (interruptible foldback). In many broadcast applications, the talent wants to hear the program feed and their own voice, all in context, in their ear. The amount of time it takes for the program feed to travel to them, if it's within reason, is irrelevant. The amount of time for the director's voice to travel to them, again if within reason, is irrelevant.

But the amount of time it takes for their own voice to travel back to them is very relevant. If the delay is too long, it's disorienting to the talent. This is a great example of the parameters that lead to hybrid systems. We must keep certain signal paths at the location where the material originates to minimize latency for talent monitoring.
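Here is a rough, hypothetical latency budget that shows why that path so often stays local. The stage values and the comfort threshold are assumptions for illustration, not measured figures, and tolerance varies by talent and production:

```python
# Comparing a cloud-routed self-monitoring path to a local one.
# All stage values and the threshold below are assumed for illustration.

COMFORT_THRESHOLD_MS = 15   # assumed point where self-monitoring starts to feel like an echo

cloud_path = {
    "Mic A/D + on-ramp encode": 5,
    "Uplink to cloud region": 20,
    "Cloud mix/processing": 10,
    "Return link": 20,
    "Decode + D/A to earpiece": 5,
}
onprem_path = {
    "Mic A/D": 1,
    "Local mix": 2,
    "D/A to earpiece": 1,
}

for name, path in [("Voice monitored through the cloud", cloud_path),
                   ("Voice monitored on-prem", onprem_path)]:
    total = sum(path.values())
    verdict = "OK" if total <= COMFORT_THRESHOLD_MS else "echo risk -> keep this path local"
    print(f"{name}: {total} ms ({verdict})")
```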

The most common challenge that creates hybrid deployments usually lies in the audio part of the stream. Latency in audio is extremely difficult for humans to work with. When I speak, I hear myself in the context of the environment around me, within a certain window of time. That context is determined by the reflectivity of the walls around me, which return my own voice to my ears.
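As a quick illustration of that acoustic context, the sketch below estimates how soon a room's first reflections return your voice to your ears. The wall distances are assumed; 343 m/s is the speed of sound at room temperature:

```python
# Time for a first wall reflection to return to the listener, for a few
# assumed room sizes.

SPEED_OF_SOUND_M_S = 343.0

for label, wall_distance_m in [("Small voice booth", 1.0),
                               ("Typical control room", 3.0),
                               ("Large studio floor", 10.0)]:
    reflection_ms = (2 * wall_distance_m / SPEED_OF_SOUND_M_S) * 1000
    print(f"{label:22s} wall at {wall_distance_m:4.1f} m -> first reflection ~{reflection_ms:5.1f} ms")
```

In a small room, your ears are calibrated to reflections arriving within a few milliseconds, which is the reference any monitored return of your own voice is judged against.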

[NAB 2023: Dave Van Hoy on VR Production, Cloud Technology]

When the monitored latency does not match what you perceive in your physical environment, it becomes very disorienting. So, when developing a system where the talent hears themselves monitored back, we must keep the latency to a minimum, or the talent will hear themselves as an echo in their own ears. That is a very difficult way to perform. When factors like latency are critical to the working environment, performance becomes a determining factor that overrides cost projections.

Determining where a client's needs will be over time is where clients rely most on a systems integrator's experience. It's the knowledge gained from experience that points them to the optimal place to be at any given moment. It remains a little bit of art, a little bit of science. In my next column, I'll share examples of cost tables and analysis methods, including how to compare against differing equipment amortization rates.

Dave Van Hoy

Dave Van Hoy is the president of Advanced Systems Group, LLC.