
Next Generation Live Production in the Cloud

February 23, 2023

Interesting innovations and methodologies related to live media production in the public cloud are abundant these days, thanks in part to the Covid-19 pandemic, which served as a catalyst for live remote broadcasting. According to Mike Coleman, a veteran broadcast engineer and co-founder of Portland, Oregon-based Port 9 Labs, the problem is that most current cloud-based live production solutions are hybrid, relying heavily on desktop apps and compressed media rather than cloud-native methodologies. Port 9 is working to develop new architectures for live remote video broadcasting in an attempt to flip that paradigm. Coleman admits it is still early in the company’s development of its own cloud-native architecture for production in the cloud, and that the industry will, out of necessity, be slow to evolve in that direction generally, but he nevertheless believes such a transition is inevitable.

“Right now, we have lots of lift-and-shift going on,” he explains. “That means people are moving existing ground-based solutions into the cloud. Since the [pandemic], people have been under a lot of pressure to take what they already have on the ground and incrementally change it to somehow make it work in the cloud. But they are starting to realize their limitations, and the industry is starting to understand it needs to adapt. So, if you examine how a cloud-native service would be built, it would be radically different than the architectures you are seeing for these lift-and-shift kinds of things. In other words, for now, there is a big disconnect.”

Coleman and his colleagues recently penned a SMPTE Motion Imaging Journal article on this topic, and also gave the first public demonstration of their company’s proposed cloud-based live switching technology at a recent PDX Video Tech Meetup event in Portland. The approach they describe in the article and demonstrated at the PDX event departs from the typical method of using a cloud-based switcher app via remote desktop. Instead, it splits the work in two: a cloud-based switcher operates on high-quality, uncompressed video in the cloud, while ground-based switchers operate on low-bitrate proxy media sent down from the cloud. Time-stamped commands from the proxy switchers are then applied to the cloud switcher in a frame-accurate manner with minimal latency.

“In other words, you get the quality of uncompressed video in your broadcast, even though you are switching compressed proxies on the ground,” Coleman adds.
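As a rough illustration of the time-stamped command idea described above, consider the following sketch. The class and method names here are invented for illustration and are not Port 9’s actual software: a ground-based proxy switcher stamps each cut command with the timestamp of the proxy frame it was issued against, and the cloud switcher applies the command when the full-quality frame bearing that same timestamp arrives, making the cut frame-accurate.

```python
from dataclasses import dataclass

@dataclass
class SwitchCommand:
    timestamp: int   # frame timestamp taken from the proxy stream
    source: str      # input to cut to

class CloudSwitcher:
    """Hypothetical sketch: applies ground-issued commands to cloud frames."""

    def __init__(self):
        self.active_source = "cam1"
        self.pending = []  # commands waiting for their frame to arrive

    def on_command(self, cmd: SwitchCommand):
        # Commands arrive from the ground over a low-latency control channel.
        self.pending.append(cmd)

    def on_frame(self, timestamp: int, frames: dict):
        """frames maps source name -> uncompressed frame for this timestamp."""
        # Frame-accurate switching works because proxy and full-quality
        # frames carry the same timestamps.
        due = [c for c in self.pending if c.timestamp <= timestamp]
        for cmd in sorted(due, key=lambda c: c.timestamp):
            self.active_source = cmd.source
            self.pending.remove(cmd)
        return frames[self.active_source]
```

The operator on the ground never sees the uncompressed video, yet the program output in the cloud is cut from it.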

He emphasizes this process still has a way to go, and even further to travel before it would ever become ubiquitous in the wider industry. For now, he says the primary initial goal is to simply “sensitize people so that they can become aware that something like this is possible.”

What is it that is “possible,” in Coleman’s view? He believes it is now possible to build IP-based media systems that can be used via public cloud services and says his company has had success moving uncompressed video on multiple public cloud systems using multi-flow UDP (User Datagram Protocol). As described in the SMPTE Journal article, “Cloud IP media services would be managed as SaaS [software as a service]. Broadcasters would control the programming from anywhere they choose, but the underlying service will be maintained by the service provider.”

“It’s definitely an over-the-horizon thing and will likely take many years to get there,” he adds. “But, in our opinion, cloud architecture, if done correctly, would be totally different from how things are done on the ground, since the whole point obviously would be to leverage the strengths of the cloud.”

Coleman says that while many large broadcast companies would still want and be able to afford to own and operate sophisticated IP-based studios, this cloud-based process would be an important supplemental approach for special events, disaster coverage, and so on. And smaller broadcasters could dive fully into IP-based live production without investing in sophisticated ground-based IP media studios.

Central to that quest is the challenge of how best to raise the quality of video produced and managed in the cloud to match the uncompressed workflows routinely used in ground-based studios. Coleman says good approaches already exist, but they still rely on compressing the video.

In particular, he points to the NDI (Network Device Interface) video over IP transmission protocol now owned by Vizrt, which enables users to share lightly compressed video, audio, and metadata across existing IP networks—a solution that has been heavily used by corporate video content creators in recent years. Now, broadcasters are starting to utilize it for live sports and other events.

“Basically, NDI was the first way developed for getting IP video running in the cloud like a corporate studio without requiring a standard like SMPTE-2110, which is what a typical ground-based production facility would use,” Coleman relates. “Generally, it is pretty good for its purpose and pretty easy to move up into the cloud, but even so, the video quality isn’t quite up to modern broadcast standards since it still requires 8-bit compressed video. Studios typically would prefer to compose video in the highest possible quality and then use compression later only for the transport phase. 

“However, because of the pandemic, people did start using NDI in the cloud, compressed for TV news and sports, because it was the pragmatic thing to do. For the most part, users and the people doing the actual video switching are all pretty happy being able to use that approach in the cloud from a desktop connection. But that is still more or less a lift-and-shift type method in terms of trying to treat the cloud and the ground the same way.” 

According to Coleman, closer to a cloud-native approach is Grass Valley’s Agile Media Processing Platform (AMPP), which was designed to run in a public cloud.  

“That’s what I consider to be an ‘adapting’ system, which is more cloud-native than the lift-and-shift approach,” he says. “But it is still kind of in the middle between lift-and-shift and where we think it has to go in the sense that the customer is still involved in handling the orchestration of what they need, and then the platform handles some of the back end for them. They still want to preserve their on-premises assets, which means this method is sort of trying to make ground and cloud operate the same way.”

Coleman says one key to creating a true cloud-native architecture for broadcasters to use when producing live content involves approaching the concept of an IP-based workflow differently by taking an “asynchronous rather than a synchronous approach.” 

“Today, in an IP-based studio, like with most IP-based things, you need extremely tight timing,” he explains. “Everything has to be synchronous using PTP [Precision Time Protocol] to [synchronize all devices on a computer network]. What is particularly interesting, though, is that in the cloud, that is really hard to do, and we have begun to realize you don’t need to do it, because you typically have a huge amount of bandwidth and tons of CPU available in the cloud [when using a major cloud provider]. So, instead, we want to work in an asynchronous model, only synchronized on the edge if you need it to be. We are working on an architecture that works without being synchronous because everything is time-stamped. We call this having a looser time model so that we can work on uncompressed video in the cloud.”
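One way to picture this looser time model (a sketch under assumptions; none of the names below come from Port 9 or any standard): frames arriving over multiple UDP flows may land out of order, so instead of PTP-locking every device, each frame carries a timestamp and a small reordering buffer re-serializes the stream. Synchronization, if needed at all, happens only at the edge.

```python
import heapq

class ReorderBuffer:
    """Illustrative timestamp-ordered buffer for asynchronously arriving frames."""

    def __init__(self, depth: int = 4):
        self.depth = depth  # frames to hold back before releasing, absorbing jitter
        self.heap = []      # min-heap keyed by timestamp

    def push(self, timestamp: int, frame):
        # Frames may arrive in any order from any flow.
        heapq.heappush(self.heap, (timestamp, frame))

    def pop_ready(self):
        """Release frames in timestamp order once the buffer exceeds its depth."""
        out = []
        while len(self.heap) > self.depth:
            out.append(heapq.heappop(self.heap))
        return out
```

The buffer depth trades a few frames of latency for tolerance of out-of-order arrival, which is cheap in the cloud where, as Coleman notes, bandwidth and CPU are plentiful.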

On the issue of compression, Coleman highlights that functionally speaking, when dealing with major cloud providers, “there is really no problem dealing with uncompressed video in the cloud. We have done tests with different public clouds, and it works fine. There are tons of bandwidth up there. The problem, of course, is egress—what it costs. Cloud providers will charge you a lot of money in terms of data transfer fees. Therefore, typically, you do not want to send your uncompressed video back down to your facility on the ground. Our solution for that is to send only proxies down to the ground—that’s where we would use compression. Broadcasters are already very familiar with using proxies in their workflows.” 
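Some back-of-envelope arithmetic shows why egress drives the proxy design. The numbers here are illustrative assumptions, not figures from the article or any provider’s price list: 1080p60 4:2:2 10-bit video averages about 20 bits per pixel, and egress is taken at a hypothetical $0.09 per GB.

```python
def hourly_gb(bits_per_second: float) -> float:
    """Gigabytes transferred per hour at a given bitrate."""
    return bits_per_second * 3600 / 8 / 1e9

# 1080p60, 4:2:2 10-bit ~ 20 bits/pixel average -> roughly 2.49 Gbps
uncompressed_bps = 1920 * 1080 * 20 * 60
proxy_bps = 10e6  # an assumed 10 Mbps compressed proxy stream

EGRESS_PER_GB = 0.09  # assumed illustrative egress rate, USD

for name, bps in [("uncompressed", uncompressed_bps), ("proxy", proxy_bps)]:
    gb = hourly_gb(bps)
    print(f"{name}: {gb:,.1f} GB/hour -> ${gb * EGRESS_PER_GB:,.2f}/hour")
```

Under these assumptions, sending the uncompressed program to the ground would cost on the order of a hundred dollars per hour per stream, while the proxy costs well under a dollar, which is why compression is reserved for the downlink.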

Coleman emphasizes industry standards work will need to be done in order to ensure multi-vendor interoperability of different applications and tools in the cloud. He says that the existing standard for moving media content over IP networks at broadcast facilities—SMPTE ST-2110—“is simply too tight in terms of timing” to work as a formal standard for live media production in the cloud, but adds that the Video Services Forum (VSF) has gotten a jump on that work with its Cloud to Ground/Ground to Cloud (CGGC) working group, which launched in 2020.

Among other things, that working group is examining whether a common set of requirements for ground-to-cloud and cloud-to-ground data transfers is necessary, and how best to establish a common technical approach for such transfers. Coleman adds that the working group “is embracing the idea of the timing model being looser in the cloud, so in my opinion, they are moving in the right direction by focusing on the data plane, or data transfer area.”

Coleman elaborates that the general CGGC approach includes figuring out a way to allow every primary cloud provider “to have their own way of transferring media, but to establish a common API that allows them each to do so.” He says Amazon Web Services (AWS) has been the first major cloud provider to implement “something along those lines in terms of a common API, called Cloud Digital Interface [CDI], and I expect others to follow suit.” 

“CDI is really an API that says, ‘here is some video we got from another machine,’ or ‘send this video over to another machine,’ and you don’t have to think about the network,” he says. “To me, that’s a really good idea, where you hand a bunch of responsibility to the cloud provider and make it an API thing. The idea is for cloud providers to simply get data from one machine to another the fastest way possible, however they choose to do it.”
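The shape of such an API can be sketched as follows. To be clear, this is not the actual AWS CDI SDK (which is a C library); the class and method names are invented solely to illustrate the idea Coleman describes: the application says “send this frame” or “give me the next frame” and never touches sockets, routing, or transport.

```python
class MediaChannel:
    """Illustrative stand-in for a provider-managed machine-to-machine link."""

    def __init__(self):
        # In a real service, the provider chooses the transport underneath;
        # a simple queue stands in for it here.
        self._queue = []

    def send_frame(self, frame: bytes) -> None:
        # "Send this video over to another machine" -- the application hands
        # off the frame; pacing, routing, and recovery are the provider's job.
        self._queue.append(frame)

    def receive_frame(self) -> bytes:
        # "Here is some video we got from another machine."
        return self._queue.pop(0)
```

The design choice is the one Coleman praises: responsibility for moving data fast migrates from the broadcaster’s application into the cloud provider’s service, behind a stable API.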

In any case, no matter the tools, APIs, or standards developed in the near future, Coleman emphasizes “it will take quite a while” for the industry to pivot with gusto to cloud-based live production.

“Broadcasters are still in this process of continuing to try incremental changes to their workflows in order to keep working as they move into the cloud,” he says. “What I’m saying is that an incremental approach won’t ever get you where you want to go. So, at some point, you have to make a big break. Before they can make that big break, they have to understand how it could work using [a cloud-native process]. Coming to grips with that will probably take a few years. I expect there may be a transition period of about five years before broadcasters are really using the cloud the way it ought to be used for live production. But I do think it is inevitable that it will happen.”


Michael Goldman
