<img height="1" width="1" src="https://www.facebook.com/tr?id=414634002484912&amp;ev=PageView%20&amp;noscript=1">
Donate
Mark Your Calendar! April 26th booth selection meeting for MTS2024
Donate

Compression’s Complex Evolution

May 31, 2022

When Ian Trow last spoke with Newswatch in 2020, he pointed out that the COVID-19 pandemic was starting to have a radical effect on media workflows, and consequently, cloud-hosted solutions were rapidly being adopted. Almost two years later, Trow—an industry consultant who is SMPTE’s UK Section Manager and author/instructor of SMPTE’s upcoming Essentials of Video and Audio Coding (EVAC) virtual course, as well as its current course, Understanding Media in the Cloud 3.0—feels even more strongly about these pandemic-prompted workflow changes. “Cloud usage is now being refined and made permanent in light of this experience,” he points out. He adds that the slew of transitions the compression industry has undergone since the pandemic began has caused a complete sea-change in approach for all those responsible for the development, deployment, and operation of media infrastructure. More broadly, he says the entire video compression landscape is currently undergoing a fundamental transition in terms of scope, options, available codecs, workflows, and publishing tools.

“What happened was a lot of content aggregators and broadcasters suddenly started using technologies, applications, and workflows they had never even considered using, forced by the pandemic to do so,” Trow explains. “This, in turn, has accelerated people’s thoughts on remote production. During this time, people have run extended proofs of concept on workflows, very quickly realized what works and what doesn’t, and also gotten a good idea of how expensive hosting solutions can be when applications run on enterprise or commercial off-the-shelf infrastructure. I think this had two effects. First, the experience caused a lot of people to look very carefully at how they can extract the best practices and technology for solutions going forward. And second, I think where there previously was some hesitancy toward virtualizing infrastructures, people now feel better equipped to do so. I think this has greatly accelerated the trend for media applications to progress and for legacy workflows to be adapted to facilitate virtualization.”

However, he adds, much of that work during the pandemic to figure out how to move media content around remotely using cloud-based techniques revolved around “making content exist for episodic and cinematic applications. And so, we learned a whole lot about hosting workflows through the pandemic, but that was mainly for episodic and cinematic file-based content. Now, we need to learn how to use all those good applications, but in a more interactive and live environment. People now are trying to figure out how to carve up their existing workflows to learn how to host content either in the cloud or on virtualized on-premises infrastructure. They like the flexibility and scaling this kind of hosting brings, and now they want to apply it to more than just episodic and cinematic uses. In a traditional legacy workflow, it used to be about modifying content as it moved serially through a workflow. The big thing about the cloud now, as discussed in the MovieLabs 2030 Vision white paper, is that people want to bring media applications to the content, which means there needs to be a fundamental change in the way workflows are implemented. If you bring the application to the content, there are a whole range of business, operational, and workflow benefits to embracing virtualization and cloud hosting, but you have to be prepared to adapt and learn!”

The need to efficiently compress, package, stream, and publish video content on multiple platforms for multiple applications in such a virtual world is the foundation of any such workflow, and so Trow emphasizes that new ways of looking at compression specifically are crucial. As a result, as part of his work developing his upcoming SMPTE course, Trow recently engaged in a project to evaluate existing compression codecs and “take a 3,000-ft. view of the entire codec market, the applications, and the hosting. This forms one of the major goals of the up-and-coming SMPTE Essentials of Video and Audio Coding (EVAC) course—essentially summarizing what is the state of play now in compression and speculating what the future of the video codec industry might be.”

During development of the EVAC course, one general conclusion Trow quickly came to that helped him contextualize many of his other conclusions is the notion that video compression schemes are no longer mainly about video quality (VQ), per se—there are many other factors to consider. Another is that Artificial Intelligence (AI)/Machine Learning (ML) will play crucial roles going forward, both in optimizing existing schemes and in creating and implementing new ones. And a third general conclusion is that Trow believes we have left behind the dream of implementing and relying primarily on a single, universal compression scheme, despite the ongoing relevance of the widely established H.264 (AVC) codec. He is firmly convinced that an ever-evolving menu of codecs with different strengths and areas of focus, matched to the application a particular video stream is intended for, will be the way forward.

“The growth of compression lies in the growth of applications,” he explains. “Consumers are clearly moving away from Direct-to-Home [DTH] broadcast and are seeking to consume material in different ways on different devices, with increased emphasis on widening the availability of media content, basically addressing the need to scale distribution.” 

There are five particular codec applications, he suggests, that represent how the live market for distributing video is expanding. These are DTH scheduled for live presentation channels, OTT scheduled for live channels, live social media redistribution, live sports syndication, and live outside broadcast news.
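
As a rough illustration of why those five markets pull codec choices in different directions, the sketch below pairs each application with hypothetical transport and latency constraints. The figures and transport labels are illustrative assumptions for the sake of the example, not values taken from Trow.

```python
# Illustrative sketch: the five live codec applications, paired with
# assumed transport paths and latency targets to show why no single
# codec profile serves them all. All figures are hypothetical.
LIVE_APPLICATIONS = {
    "DTH scheduled live channels":      {"transport": "satellite/cable",  "latency_target_s": 5},
    "OTT scheduled live channels":      {"transport": "CDN (HLS/DASH)",   "latency_target_s": 30},
    "Live social media redistribution": {"transport": "RTMP/SRT ingest",  "latency_target_s": 10},
    "Live sports syndication":          {"transport": "managed IP/fiber", "latency_target_s": 1},
    "Live outside broadcast news":      {"transport": "bonded cellular",  "latency_target_s": 3},
}

for app, reqs in LIVE_APPLICATIONS.items():
    print(f"{app}: {reqs['transport']}, ~{reqs['latency_target_s']} s glass-to-glass")
```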

“This reality has two important effects. First, there is a huge demand to improve the quality of experience and quality of encoding for streaming and social media. And second, for broadcasters, their content is being used in more diverse ways in order to fully monetize it. So, things like metadata and content tagging are far more important to automate in order to speed the use of content on alternative hosting platforms. Codec hosting will largely be cloud-based, rather than running on specialist hardware. Right now, we are somewhere in between software running on commercial off-the-shelf hardware and being fully virtualized, but it is going in that direction. So, codec applications and codec hosting will be crucial to making the right compression choices going forward.”
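
As a minimal sketch of what automating that tagging might look like, the snippet below attaches machine-generated labels to an asset record so downstream platforms can reuse the content without manual re-cataloguing. The MediaAsset fields and the auto_tag helper are hypothetical illustrations, not a real broadcaster schema.

```python
from dataclasses import dataclass, field

# Hypothetical asset record: the point is that tags and technical
# metadata travel with the content, so every downstream hosting
# platform sees them. Field names are illustrative only.
@dataclass
class MediaAsset:
    asset_id: str
    source_codec: str
    tags: list[str] = field(default_factory=list)

def auto_tag(asset: MediaAsset, detected_labels: list[str]) -> MediaAsset:
    """Merge machine-generated labels (e.g., from an ML classifier)
    into the asset's tag list, deduplicated and sorted."""
    asset.tags = sorted(set(asset.tags) | set(detected_labels))
    return asset

clip = MediaAsset("match-2022-05-31", source_codec="H.264")
auto_tag(clip, ["football", "goal", "highlight"])
print(clip.tags)  # ['football', 'goal', 'highlight']
```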

Thus, he elaborates, as video applications and platforms grow, the universe of codec standards is vigorously growing with it. Indeed, he adds, a veritable potpourri of standards exists right now. 

“H.264 [MPEG-4/AVC] remains the vanilla standard that comes closest to being universally applied, but subsequent ITU/MPEG standards have seen success in specific applications, thereby fragmenting the codec applications addressed. In the fullness of time, successor standards like HEVC [H.265, High Efficiency Video Coding] could well take over for H.264,” he explains. “But to date, these standards have achieved limited success—an example being HEVC, which heralded the introduction of format handling up to and including 8K, as well as specific support for HDR. Still, I expect H.264 to remain significant in the industry for at least the next five years, requiring further development and optimization to enable it to be used in a virtualized, cloud-hosted environment. VP9 came along from Google as an open-source, royalty-free alternative approach for streaming/hosting/browsers and players. VVC [Versatile Video Coding] was developed by the ITU as a successor to HEVC for large or complex data transmission. And now, we have AV1, also open-source and royalty-free for Internet broadcasting—a response to the royalty debacles surrounding other compression standards.”
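
One way to see how these codecs coexist in practice is to transcode the same source into several of them with FFmpeg. The sketch below assumes an FFmpeg build with the libx264, libx265, libvpx-vp9, and libaom-av1 encoders enabled; the CRF values are common starting points, not tuned recommendations, and a VVC encoder is omitted since it is available only in some recent builds.

```python
import subprocess

# One source, several delivery codecs, via typical FFmpeg encoder flags.
# -crf selects constant-quality mode; VP9 and AV1 additionally need
# "-b:v 0" in FFmpeg to enable pure constant-quality encoding.
ENCODES = {
    "h264": (["-c:v", "libx264", "-crf", "23"], "mp4"),
    "hevc": (["-c:v", "libx265", "-crf", "28"], "mp4"),
    "vp9":  (["-c:v", "libvpx-vp9", "-b:v", "0", "-crf", "31"], "webm"),
    "av1":  (["-c:v", "libaom-av1", "-b:v", "0", "-crf", "30"], "mkv"),
}

def transcode_all(source: str) -> None:
    """Encode the same mezzanine source with each delivery codec."""
    for name, (codec_args, ext) in ENCODES.items():
        subprocess.run(
            ["ffmpeg", "-y", "-i", source, *codec_args, f"clip_{name}.{ext}"],
            check=True,
        )

transcode_all("mezzanine.mov")  # hypothetical input file
```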

But Trow emphasizes that this group represents only the tip of the full compression iceberg, because there is also a set of specialty or mezzanine standards used in production and for distributing content around a facility—or, as he refers to them, “the more interesting standards.”

This category now includes a smorgasbord of options, including JPEG-2000 (a wavelet-based scheme optimized for efficiency); TICO (for low-latency work); JPEG-XS (visually lossless, ultra-low-latency mezzanine compression); and LCEVC (Low Complexity Enhancement Video Coding, an enhancement scheme used in combination with a base codec), among others. Each has particular strengths and focus, depending on how it is being used. In particular, Trow points to JPEG-XS as “a game-changer in terms of realizing the dream of direct ingest of compressed content to LAN/WAN networks, such as on the back of cameras or servers, for example, for production. Development focused on optimizing for a small-footprint codec, which relaxed the demands of compression efficiency to reflect that studio networks can now more readily handle high-data-rate feeds.”
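
A back-of-the-envelope calculation suggests why JPEG-XS makes that direct-ingest dream plausible; the 10:1 compression ratio below is an assumed typical figure for visually lossless operation, not a quoted specification.

```python
# Back-of-the-envelope mezzanine bandwidth for 1080p60 10-bit 4:2:2
# video (20 bits per pixel) with an assumed ~10:1 JPEG-XS ratio;
# real ratios depend on profile and content.
width, height, fps = 1920, 1080, 60
bits_per_pixel = 20                # 10-bit 4:2:2 sampling
raw_bps = width * height * fps * bits_per_pixel
jpeg_xs_ratio = 10                 # assumed visually lossless ratio
compressed_bps = raw_bps / jpeg_xs_ratio

print(f"Uncompressed: {raw_bps / 1e9:.2f} Gb/s")                    # ~2.49 Gb/s
print(f"JPEG-XS ~{jpeg_xs_ratio}:1: {compressed_bps / 1e6:.0f} Mb/s")  # ~249 Mb/s
```

At roughly 250 Mb/s, a visually lossless 1080p60 feed sits comfortably within a 10 GbE studio LAN, which is the trade-off Trow describes: relaxed compression efficiency in exchange for a small-footprint, low-latency codec.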

No doubt, more codecs are on the way, and further optimization of existing codecs will be ongoing for a wide range of applications, including some not yet even conceptualized. But based on his work for the upcoming EVAC course, Trow emphasizes that there are some common conclusions to absorb about compression generally on the modern landscape. First, there is not, and likely never will be, a single codec to fit all applications. Second, the traditional ITU/MPEG path to codec development based solely on compression efficiency now has strong competition from a growing list of alternatives aimed at additional priorities, such as royalty-free and license-free options, streaming- and file-optimized options, and facilitating the search and tagging of content. Third, codec-based workflows now have to demonstrate compatibility with common browsers, players, and other devices. Fourth, AI and ML automation capabilities will be central to future development, optimization, and monetization of content. Fifth, most legacy codecs and standards will still need to be supported or improved for the foreseeable future. And sixth, virtually all compression codecs need to be optimized for cloud hosting going forward to remain relevant.

Indeed, at the end of the day, Trow emphasizes that streaming will be “where the action is,” if it isn’t already, when it comes to development of compression schemes, and that represents the biggest pivot of all away from historical development patterns. 

“What we’ll see is far more widespread usage of different hosting techniques, be it private cloud, private cloud on premises, or full-blown cloud hosting. Instead of development being oriented toward DTH workflows, as has been the case in the past, streaming and particularly social-media hosting will be the primary focus, reflecting consumption patterns for a generation of viewers considerably younger than me!”

Tags: Featured, News, compression

Michael Goldman
