
Compression’s Complex Evolution, Part 2

May 24, 2023

About a year ago, Ian Trow, a longtime industry consultant and SMPTE’s UK Section Manager, detailed for Newswatch readers how media workflows were moving at an accelerated pace into the cloud, largely because the Covid-19 pandemic had radically altered the landscape. Consequently, he emphasized, the development and improvement of various kinds of compression schemes had grown more crucial to the media industry than ever before. Now, a year later, Trow points out that with those changes proceeding apace, the challenges of sorting out business models and choosing among wide-ranging ancillary compression options in a multi-platform world have become more important as well.

“Things have settled back now in the sense that a lot of people are now comfortable with the cloud,” Trow explains. “But now, other areas are more of a challenge for cloud providers. Should you deploy fully on the public cloud, or are there bits of functionality that you need to hold back for the private cloud, or to keep on premises? Everyone has sort of rushed through to the ability to deploy cloud-based technologies, but now, they are beginning to get a sense of reality in terms of commercial cases and also technical limitations.

“So now, what we are seeing is that the appetite for raw compression efficiency is diminishing, and what is increasing are the sidebar features to actually enable the content to be used. That refers to things like indexing, packaging, and so forth that all facilitate the need to make that content exist on a huge variety of platforms. I think that issue is more prominent now.”

Since, as Trow puts it, “the future is streaming for sure,” the question of image quality, and thus of how much and what kind of compression to apply to image files, is a thorny one: content creators and distributors face increased demand for streamed content across a constantly growing and widely varying range of platforms, formats, and displays.

“If you look at what is acceptable quality for streaming, there is a distinct shift between availability and quality,” he says. “That used to be only apparent with news. People would have a lower threshold of criticism for low-quality newsfeeds if it was a here-and-now event. But what we are now finding for media distribution is that [standard definition] can be acceptable for some types of streaming, but high definition is the knee point where people are saying that they must preserve HD quality and guarantee that, even if the material is sourced in UHD/HDR, the HD asset is of premium quality.

“Then, you step up to the level of UHD/HDR, and that is where you get the critical eyes now. It’s not only about the format, but also about color processing, high dynamic range, and many other factors. As resolutions and formats have increased, so people have shifted their attention. Most of the critical focus is on UHD/HDR material, but the underlying fact remains that only a [small number of consumers] are enabled for that format, so most attention remains on the question of how to retain HD picture quality when streaming.”
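
Trow’s point about HD as the quality “knee point” can be pictured with a small sketch of a streaming ladder that always guarantees a premium HD rendition, even when the source is UHD/HDR. The rungs, bitrates, and the quality floor below are hypothetical illustrations, not figures from the article.

```python
# Hypothetical ABR ladder builder: guarantee a premium HD rung even when the
# source is UHD/HDR. All rungs, bitrates, and the quality floor are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Rung:
    height: int         # vertical resolution
    bitrate_kbps: int   # target bitrate
    min_quality: float  # quality floor (a VMAF-like score, 0-100)

def build_ladder(source_height: int) -> list[Rung]:
    """Return a simple ladder; the HD rung always carries a hard quality floor."""
    ladder = [
        Rung(height=432, bitrate_kbps=1200, min_quality=70.0),   # SD: acceptable for some streams
        Rung(height=1080, bitrate_kbps=6000, min_quality=93.0),  # HD: the "knee point", premium floor
    ]
    if source_height >= 2160:
        # Only a minority of viewers are UHD/HDR-enabled, so the top rung is optional.
        ladder.append(Rung(height=2160, bitrate_kbps=16000, min_quality=90.0))
    return ladder

if __name__ == "__main__":
    for rung in build_ladder(source_height=2160):
        print(rung)
```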

Trow emphasizes that the H.264 (AVC) codec remains the most widely used compression scheme for streaming, broadcast, and VOD work. But he adds that there is an increasing push toward what he calls “a fragmentation of compression standards,” including the growing popularity of the open-source, royalty-free AV1 codec in this area. This “fragmentation” also includes increased reliance on so-called mezzanine schemes for specific niche requirements. Popular schemes in this category include HEVC (H.265, High Efficiency Video Coding), which Trow says is highly effective for UHD transmissions but is still grappling with licensing issues.

Others include VVC (Versatile Video Coding) for complex transmissions of high-resolution material; the wavelet-based JPEG 2000; JPEG XS (visually lossless, ultra-low-latency mezzanine compression); and LCEVC (Low-Complexity Enhancement Video Coding), designed to be used in combination with other codecs, with even more on the way.

In other words, Trow suggests that one important ongoing compression trend is the simple fact that the fragmentation of the compression world across both production and distribution formats has led to HEVC losing what he calls its “ubiquity” across all formats and delivery platforms.

 “Five to 10 years ago, we were seeing in the market a convergence toward HEVC, but now, with the proliferation of video over many different platforms, which accounts for about 70 percent of all Internet traffic, people can consider specific compression schemes and particular packaging and streaming platforms,” he elaborates. “This has enabled the viability of many distinct compression packaging and delivery protocols that are particularly attuned toward particular markets.”
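
One rough way to picture that fragmentation is a distributor routing each delivery target to a different compression scheme. The mapping below is a hypothetical sketch drawn from the codecs Trow names above; the target names and choices are illustrative assumptions, not recommendations.

```python
# Hypothetical codec-selection helper reflecting the "fragmentation" Trow
# describes: different delivery targets map to different compression schemes.
# The target keys and the mapping itself are illustrative assumptions.

CODEC_BY_TARGET = {
    "broadcast_and_vod_hd": "H.264/AVC",  # still the most widely deployed
    "streaming_uhd_hdr":    "HEVC",       # effective for UHD, but licensing is a factor
    "royalty_free_web":     "AV1",        # open-source, royalty-free
    "mezzanine_contrib":    "JPEG XS",    # visually lossless, ultra-low latency
    "enhancement_layer":    "LCEVC",      # used in combination with a base codec
}

def pick_codec(target: str) -> str:
    """Look up a codec for a delivery target, defaulting to the broadest option."""
    return CODEC_BY_TARGET.get(target, "H.264/AVC")

print(pick_codec("streaming_uhd_hdr"))  # -> HEVC
```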

The biggest frontier for cloud-based workflows to conquer, however, lies in the realm of live production, particularly for sporting events. This reality connects directly to the business model issue. As Trow puts it, until recently, “cloud providers were orientated toward egress—selling CPU, storage, and network capacity to allow clients to actually process video in the cloud. But now, they are setting their sights on live scenarios.”

This requires the establishment of an “ingress model,” as Trow calls it, rather than the more traditional “egress model” for cloud providers. 

“A lot of cloud providers are now concentrating on how we change our business models and our technologies in order to deal with a more balanced approach to ingress and egress,” he says. “Previously, a lot of cloud technologies were only focused on egress, where an asset is uploaded to the cloud, you post-produce quite a lot of it within the cloud environment, and then you basically hand it off as a distribution asset.

“But now, we are talking much more about the economics surrounding ingressing live events into the cloud, and that poses a challenge in terms of on-prem-to-cloud bandwidth, as well as the real-time aspects of handling live events. A lot of people are concentrating on that kind of environment now.”

And in that environment, historically, bringing full-bandwidth video into the cloud for processing “has always been a problem,” Trow emphasizes. However, as discussed in Newswatch earlier this year, live production in the cloud is now becoming increasingly viable thanks to new techniques such as editing on the ground with proxy media and using time-stamping to apply those decisions, frame accurately, to the high-quality, uncompressed video held in the cloud.
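
One way to picture that proxy-plus-timestamp workflow: edit decisions made against low-resolution proxies on the ground carry timecodes that are later conformed, frame accurately, against the uncompressed masters in the cloud. The sketch below is a simplified assumption of such a conform step; the frame rate, the edit-decision structure, and the function names are hypothetical.

```python
# Hypothetical frame-accurate conform: decisions cut on proxy media are applied
# to the full-resolution cloud asset by matching timestamps. The frame rate,
# the edit-decision structure, and all names here are illustrative assumptions.

from dataclasses import dataclass

FPS = 25  # assumed frame rate shared by the proxy and the full-resolution master

@dataclass
class EditDecision:
    source_in: float   # seconds, measured on the proxy timeline
    source_out: float  # seconds

def to_frames(seconds: float) -> int:
    """Convert a timestamp to a frame index so the cut lands on a frame boundary."""
    return round(seconds * FPS)

def conform(decisions: list[EditDecision]) -> list[tuple[int, int]]:
    """Translate proxy-timeline decisions into frame ranges on the master asset.

    Because proxy and master share the same timecode, frame indices transfer
    directly; a real system might also apply a timecode offset to the master.
    """
    return [(to_frames(d.source_in), to_frames(d.source_out)) for d in decisions]

if __name__ == "__main__":
    edl = [EditDecision(10.0, 12.48), EditDecision(60.2, 75.0)]
    print(conform(edl))  # frame ranges to pull from the uncompressed master
```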

Trow says these various mezzanine compression schemes now make it easier to edit in the cloud on less-than-full-resolution video and then apply the results to the original, uncompressed high-resolution video. A wide range of specialized, niche techniques will thus play an important role in this new paradigm, he adds. For instance, Trow says that essence-based processing is becoming increasingly popular as a way to reduce a project’s compression needs: video, audio, and metadata signals are separated and then put back together following a time-stamped map, meaning “you can process, search, and retrieve content based purely on the metadata, without having to de-multiplex all associated video, audio, and sub-titles.”
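
A minimal sketch of what such essence-based processing might look like, assuming the video, audio, and metadata essences are stored separately and tied together by timestamps. The structures and search function are hypothetical, not a description of any specific product.

```python
# Hypothetical essence-based index: video, audio, and metadata are stored as
# separate tracks keyed by timestamp, so content can be searched on metadata
# alone without de-multiplexing the audio/video essence. All structures here
# are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class MetadataEvent:
    timestamp: float  # seconds from start of programme
    tags: set[str]    # e.g. {"goal", "replay", "interview"}

@dataclass
class EssenceIndex:
    video_uri: str    # untouched video essence in object storage
    audio_uri: str    # untouched audio essence
    events: list[MetadataEvent] = field(default_factory=list)

    def search(self, tag: str) -> list[float]:
        """Return timestamps matching a tag, without touching video or audio."""
        return [e.timestamp for e in self.events if tag in e.tags]

if __name__ == "__main__":
    index = EssenceIndex(
        video_uri="s3://bucket/match.video",
        audio_uri="s3://bucket/match.audio",
        events=[MetadataEvent(302.0, {"goal"}), MetadataEvent(318.5, {"replay", "goal"})],
    )
    print(index.search("goal"))  # metadata-only retrieval: [302.0, 318.5]
```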

He also says that artificial intelligence/machine learning are “already a central factor” in improving compression options and making them more flexible.  

“With each successive compression scheme, inertia slows down the ability to actually switch to a new compression scheme,” he says. “That is what resulted in H.264 becoming so dominant. But AI and machine learning are dealing at a far more holistic level with all the content attributes that are actually available for particular video content. That means you have to work much harder to get that same asset encoded using pre-existing compression schemes without AI. AI can analyze video assets at a higher level, whereas most compression schemes are looking, at best, at a group-of-pictures structure. AI lets you look holistically at the entire asset, and that allows you to make global decisions that can achieve efficiencies that individual compression schemes do not have the visibility to match. There is now a whole industry associated with AI and machine learning that can do content conditioning in a way that is over and above what was possible within existing compression schemes. And people are using AI to essentially tune high-availability assets to increase their scalability on the Internet. That is a big addition, and it is well underway.”
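
The “holistic” analysis Trow describes is broadly in the spirit of content-adaptive (per-title) encoding, where a whole-asset analysis drives global encoding decisions rather than per-GOP ones. The sketch below is a loose illustration under that assumption; the complexity metric, thresholds, and bitrates are invented for the example.

```python
# Loose illustration of content-adaptive encoding: a whole-asset complexity
# analysis (here, a crude per-scene motion score) drives a single global
# bitrate decision, instead of each group of pictures being optimised in
# isolation. The metric, thresholds, and bitrates are invented for this example.

from statistics import mean

def asset_complexity(scene_motion_scores: list[float]) -> float:
    """Summarise the whole asset; a real analyser might use ML-derived features."""
    return mean(scene_motion_scores)

def choose_bitrate_kbps(complexity: float) -> int:
    """Map whole-asset complexity to a global HD target bitrate (illustrative)."""
    if complexity < 0.3:
        return 3500   # talking heads, static graphics
    if complexity < 0.7:
        return 5000   # general entertainment
    return 8000       # fast sport, high motion

if __name__ == "__main__":
    scores = [0.2, 0.4, 0.9, 0.8]  # pretend per-scene motion analysis
    print(choose_bitrate_kbps(asset_complexity(scores)))
```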

Trow hastens to add, however, that the business side of the compression industry remains committed to steering users toward particular schemes or techniques that require licensing and royalty payments, among other factors that leave the more flexible future of compression far from a fait accompli.

“A lot of manufacturers or consortiums back certain proprietary approaches to compression and also metadata, automation—the whole expanse of features that are needed to create a full solution,” he says. “In fact, generally, there are two schools of thought. One is the standardized approach, where standards are produced for network discovery, packaging, and compression, and then you form a solution based on those standards. The second is that there are proprietary mechanisms supported by specific entities, consortiums, or manufacturers. They claim it is easier because they provide essentially one-stop shopping to get to an overall solution. From the standardized point of view, you need a huge amount of knowledge to resolve all the solution gaps existing between various standards. The proprietary solutions are generally targeting specific markets that you either align with or you don’t, and then you accept certain compromises.

 “However, what we are seeing within the marketplace is that some people feel the proprietary way is so narrowly focused that they do not want to adopt it, so they try the standards-based approach, and vice versa. So, we are still in a state of play here where nobody knows for sure how to move forward to [a more ubiquitous flexible menu of choices], and so far, there is no comprehensive winner in that marketplace. People are still very indecisive in this area.”

In any case, Trow emphasizes that the need to strategically educate the industry, and those about to join it, is “a huge issue” in getting all this sorted out. He points out that “an aging demographic” drawn from traditional broadcast is largely the group currently grappling with these issues, even as “new school IP-streaming” rises to the fore.

“That’s an issue most broadcasters are challenged with addressing, particularly in terms of bringing on people with new skills to understand and launch applications and solutions that address this area,” he states. “As I’ve noted, the future is streaming, and that means the future will be associated with content that can be quickly ingested, tagged, and made available to other content aggregators to re-use that content. And this means broadening the appeal from thinking just about traditional broadcasting to social media.

“So, we really need education and skillset training. That is currently a big problem within the industry. With all that is going on, we have a problem bringing in new blood. I think it is critical that organizations like SMPTE and others actually spend time concentrating on how we are going to impart this information to the younger generation, so that it is not just people of an [older] demographic who are trying to implement cutting-edge streaming and compression applications. People who are making decisions within big broadcast or content aggregator organizations really need to be multi-skilled and able to make proportionate decisions based on a huge range of criteria. It's no longer simply about compression or streaming standards—the devil is in the details.”


Michael Goldman
