Despite all the turmoil and limitations caused by the pandemic over the last year-plus, one notable industry technology sector proceeded at full speed—standardization work on a smorgasbord of new video compression codecs. This work, according to longtime industry veteran Matthew Goldman, a SMPTE Fellow and past SMPTE President, is allowing the long-held view of what constitutes a useful and efficient video compression scheme to evolve to keep up with new streaming and broadcasting requirements, capabilities, and approaches.
“Real-time video delivery requires a lot of bandwidth, which is a limited resource in almost all applications today,” Goldman explains. “The whole point behind video compression is to greatly reduce the bandwidth needed by removing redundancies, and to do it in a way that the bitrate drops but the human visual system can’t see the difference. On the wider landscape of compression options, those that have traditionally risen to the top and been useful are those that achieve maximum compression efficiency compatible with widespread use. It was never perfect. Historically, you were greatly limited by technology—such as processing power, memory size, bandwidth, and other factors—so standards were designed around what you could realistically implement on a hardware decoding chip. Thus, MPEG-2 Video did everything it could do to fit onto a chip back in 1994; AVC [Advanced Video Coding, MPEG-4 Part 10/H.264], about nine years later, did the same thing; and then another 10 years after that, HEVC [High Efficiency Video Coding, MPEG-H Part 2/H.265] did the same thing.
“But HEVC is now about eight years old. You can see we have been roughly on the path of developing a new codec every ten years or so that has about twice the coding efficiency of the previous one, based on technology advancements. So, when you have a big change in video coding, that is not by accident. Today, hardware processing power is no longer the single limiting factor, because a lot of things are now being done using software-defined architectures. We have moved forward with technology from silos of fixed industry-specific hardware implementations to media-processing software running on data servers, leveraging the IT industry. So, we are shifting in that sense, even if coding technology is conceptually the same thing, whether executed in a hardware or a software infrastructure. Although technology advancements mean many more algorithmic techniques—some of which we have known about for a very long time—can now be implemented, the focus has moved beyond a straightforward tradeoff between the best possible coding efficiency and what can be implemented economically in consumer electronics. It now includes other considerations, such as commercial deployment needs, which may be a higher concern depending on the specific market segment."
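The redundancy-removal idea Goldman opens with can be illustrated with a toy sketch—this is illustrative Python only, not how any real codec works (real codecs use block-based motion compensation, transforms, and entropy coding). When two consecutive frames differ in only a small region, coding the residual between them takes far fewer symbols than coding the new frame from scratch:

```python
# Toy illustration of temporal redundancy removal (not a real codec).
# Two consecutive "frames" of a textured scene differ in only a few
# pixels, so coding the residual (frame2 - frame1) needs far fewer
# run-length symbols than coding frame2 outright.

def run_length_encode(samples):
    """Collapse runs of identical values into (value, count) pairs."""
    encoded = []
    for s in samples:
        if encoded and encoded[-1][0] == s:
            encoded[-1] = (s, encoded[-1][1] + 1)
        else:
            encoded.append((s, 1))
    return encoded

# A textured background (no runs, so it resists direct run-length coding).
frame1 = [(i * 7) % 50 for i in range(128)]

# Next frame: identical except a small bright object appears.
frame2 = list(frame1)
frame2[60:64] = [200] * 4

residual = [b - a for a, b in zip(frame1, frame2)]

raw_symbols = len(run_length_encode(frame2))        # intra-style coding
residual_symbols = len(run_length_encode(residual))  # inter-style coding

print(raw_symbols, residual_symbols)  # prints: 125 6
```

The residual is mostly zeros, which compresses trivially—this is the redundancy a real encoder exploits, with quantization then discarding the detail the human visual system cannot see.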
Consequently, in roughly the last six months, three new video coding options, largely designed to provide significant advantages for different specific use cases, have been formally published as international standards.
Those three codecs are Essential Video Coding (EVC, MPEG-5 Part 1), Versatile Video Coding (VVC, MPEG-I Part 3/H.266), and Low Complexity Enhancement Video Coding (LCEVC, MPEG-5 Part 2). As with MPEG-2 Video, AVC, and HEVC, VVC was developed through a collaboration between the ITU-T’s Video Coding Experts Group (VCEG) and the ISO/IEC’s Moving Picture Experts Group (MPEG). Goldman hastens to add that EVC and LCEVC were developed entirely within MPEG, rather than through a collaboration with ITU-T, and that all three new codecs are independent of one another in terms of their algorithmic tools and their use-case focus.
Each has specific advantages and focus for different kinds of applications. Viewed jointly as a trend, they illustrate, according to Goldman, that the highest possible coding efficiency is no longer the single most important factor in determining what makes a useful video encoding scheme—they have essentially been designed to address more specific needs on a landscape with seemingly unending video delivery and viewing platforms.
For example, Goldman points out that the big breakthrough with the EVC codec is not its ultimate bitrate efficiency, but rather the fact that its Baseline Profile is more bitrate efficient than AVC while being royalty free, and its Main Profile is more bitrate efficient than HEVC but with far simpler licensing requirements.
“One of EVC’s targeted use cases is really as an alternative video codec for streaming and over-the-top (OTT) use,” he explains. “The idea is to provide streaming performance at least equivalent to the workhorse existing codec [AVC or HEVC, depending on the application], but without all the intellectual property issues that [have interfered with the broad rollout and use of HEVC, as detailed in the August 2020 issue of Newswatch]. EVC was intentionally developed to have the best coding efficiency possible using algorithmic tools whose patents had either expired or were offered by the rights owner as royalty free [Baseline Profile] or with simple, straightforward licensing [Main Profile]. For instance, if you have the bandwidth necessary for HEVC but want to avoid the complicated commercial factors that come with that technology, EVC comes into play.”
VVC and LCEVC, by contrast, offer advantages in other specific categories that are significant on today’s broad video landscape.
“VVC’s target use cases are the ultimate in compression efficiency,” he says. “Think 8K UltraHD TV, 360-degree video, augmented and virtual reality, and beyond. When AVC came out in 2003, the ultimate target use case for that codec was high-definition television. When HEVC came out a decade later, its ultimate target use case was 4K UHD and high dynamic range [HDR]. But VVC is about twice as bitrate efficient as HEVC, and that makes it easier to do those advanced applications. It includes a variety of specialized tools, such as screen content coding for applications like Web-based collaboration, screen sharing, or gaming and graphics, which have traditionally been anathema to [compression encoding schemes] that focused on natural imagery from the real world and not computer-generated imagery; reference picture resampling, which improves adaptive bitrate encoding schemes that are used by all major video streaming services; and independent sub-pictures, which improves the coding of 360-degree video.
“LCEVC, on the other hand, is a very different animal. It’s a multi-resolution video codec where the base layer is whatever existing video codec you are using—AVC or HEVC, or whatever. What LCEVC does is process the original video and add enhancement sublayers on top of it. So, you essentially encode the original picture at half resolution and then you encode the low-complexity enhancement layers on top of that. The end result is that the overall bitrate is lower than the original full-resolution version using the original video codec, while maintaining or even improving the picture quality. So LCEVC is like a steroid for [older] codecs, such as AVC and HEVC.”
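The layering Goldman describes can be sketched conceptually—this is an illustrative toy in Python, not the actual LCEVC specification, which defines its own transforms, quantization, and entropy coding. The base layer is the source downsampled to half resolution (carried by the legacy codec), and the enhancement layer is the residual the decoder adds back to restore full-resolution detail:

```python
# Conceptual sketch of LCEVC-style layering (illustrative only; real
# LCEVC specifies its own transforms and entropy coding).

def downsample(pixels):
    """Halve resolution by averaging neighboring pixel pairs."""
    return [(pixels[i] + pixels[i + 1]) // 2 for i in range(0, len(pixels), 2)]

def upsample(pixels):
    """Double resolution by pixel repetition (a crude predictor)."""
    return [p for p in pixels for _ in range(2)]

source = [12, 14, 40, 44, 90, 92, 30, 28]  # a tiny 1-D "picture"

base = downsample(source)            # what the legacy codec (AVC/HEVC) carries
prediction = upsample(base)          # decoder's half-res picture, upscaled
enhancement = [s - p for s, p in zip(source, prediction)]  # residual layer

# Decoder side: legacy base decode plus lightweight enhancement add-back.
reconstructed = [p + e for p, e in zip(prediction, enhancement)]
assert reconstructed == source
```

In this uncompressed toy the reconstruction is exact; the real scheme quantizes and entropy-codes the small residuals, which is why the total bitrate can come in below a full-resolution encode with the base codec alone.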
Parenthetically, Goldman mentions that recent years have also seen the development of an open, royalty-free video coding format called AOMedia Video 1, or AV1, from the Alliance for Open Media. Goldman says AV1 is essentially “a high-performing codec that was developed as a claimed royalty-free alternative to international standard codecs for web-based applications. Its coding efficiency is similar to the EVC Main Profile.”
For the three new internationally standardized codecs, however, Goldman cautions that “it is still early” relative to their rollout, but that their potential to widely impact the video playing field is huge.
“After standards are released, we always do verification testing,” he says. “This is followed by the release of reference software and conformance streams, which give developers a reference implementation and testable bit streams for building and testing products. In the case of VVC, verification testing was completed last October, and we expect reference software and conformance streams to be available by this October. EVC and LCEVC are going through similar processes. So, these standards are out there, and people are already starting to build software solutions and equipment for them. But obviously, to make fully functional products widely available to the industry and offer such products and solutions in commercial services will take a year or two, maybe longer.”
Goldman adds that all three of the new international compression standards will be compatible with emerging data transmission mechanisms, including the much-ballyhooed 5G cellular network.
“These codecs are independent of the transmission media,” he relates. “From a codec point of view, 5G is just another ubiquitous way of delivering the data, and it has the advantage of providing higher bandwidth than previous cellular technologies, with lower latency, so 5G also has a lot of use cases associated with it. Transport mechanisms are also evolving. While it is still ubiquitous today, the venerable MPEG-2 Transport Stream—which enabled the delivery of real-time video streams for digital TV applications over 25 years ago—is being replaced by adaptive bitrate streaming techniques that leverage Internet protocols [IP]. These include Dynamic Adaptive Streaming over HTTP [MPEG-DASH] and the Common Media Application Format [CMAF], among others.”
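The adaptive bitrate idea behind MPEG-DASH and CMAF can be sketched with a simplified throughput rule—this is an illustration of the general principle, not the selection algorithm of any particular player, and the bitrate ladder shown is hypothetical. The player measures recent download throughput and requests the highest-bitrate rendition that still leaves headroom:

```python
# Illustrative ABR rendition selection (a simplified throughput rule,
# not any specific DASH/CMAF player's algorithm).

RENDITIONS_KBPS = [400, 1200, 2500, 5000, 8000]  # hypothetical encoding ladder

def pick_rendition(measured_kbps, ladder=RENDITIONS_KBPS, headroom=0.8):
    """Return the highest ladder bitrate <= headroom * measured throughput."""
    budget = measured_kbps * headroom
    candidates = [r for r in ladder if r <= budget]
    return candidates[-1] if candidates else ladder[0]

print(pick_rendition(7000))  # -> 5000
print(pick_rendition(900))   # -> 400
print(pick_rendition(300))   # -> 400 (floor: always serve the lowest rung)
```

Because each segment is fetched independently over HTTP, the player can step up or down the ladder segment by segment as network conditions change—which is also where VVC's reference picture resampling helps, by easing resolution switches between renditions.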
He elaborates that these compression-related developments will eventually be central to the practical utilization of new forms of high-volume data capture, such as light-field capture technology. A future version of VVC, for example, might make it simpler and more efficient to compress light-field imagery so that it could be widely transported for production and viewing, Goldman suggests.
“You want to take the data you have and compress it, but to do that, you might have to organize it in a better way,” he says. “So, they are trying to find simpler ways of representing all the many images that are captured in a light field in order to compress them better. That work is very active right now.”
And speaking of new technology developments that can impact the compression universe, Goldman adds that forms of machine learning are already making their way into the development and use of compression schemes. The conceptual ability is already there, he emphasizes, but many techniques are not yet practical due to the volume of processing power required. However, he says the industry “is actively looking into deep neural network video coding” as we speak.
“It could possibly be part of [a future version of] VVC,” he suggests. “In addition, some codec implementations are starting to use machine learning to improve the way they prepare and encode video. A couple of them have made strides in using machine learning to improve filtering at the front end of a video coder, or to do up-conversions from HD to 4K or 4K to 8K, and so on. With up-conversion, you are essentially ‘inventing resolution,’ because you are making something artificial that doesn’t already exist, and machine learning can do that really well. However, all these things take gobs and gobs of processing power to accomplish, so we are in the early days of taking advantage of machine-learning techniques.”
Goldman hints that there are other exciting compression developments brewing, as well, but his primary point is to emphasize that “we now finally have a series of additional video coding options that are likely to provide significant advantages for several important use cases, as we have discussed. Now, I can have coding that is good enough for my application, but not have to worry about restrictive commercial models. Or I can encode 360-degree video more efficiently or screen content more efficiently. In the LCEVC case, for instance, I can extend the use of existing codecs to lower the bitrate while maintaining picture quality.”
“So, my belief is that AVC and HEVC will continue to be workhorses for the foreseeable future, and LCEVC might provide an opportunity to extend them even further,” he adds. “And now, we will have VVC as the ultimate option [for very data-intensive UHD, 360-degree video, or virtual reality] content. EVC, due to its commercial use benefits, could eventually replace HEVC. But the most important thing is that we continue to have a particularly strong partnership between MPEG and VCEG. Those two bodies working together will drive VVC’s future versions, or even beyond VVC, to make it possible to do some of the future things we have been talking about. Because, after all, the key thing is that we are continuing to expand our universe of knowledge, and to do that, we will continually have to capture large amounts of data. At the same time, we can expect bandwidth to have its limits. So, we can presume that compression will continue to be massively important, no matter what technology is developed next.”
IBC Pushed to December
TV Technology recently reported that the in-person 2021 IBC show has once again been pushed back, but will, in fact, happen. The report suggests the original plan to hold the show at the RAI Amsterdam Convention Centre in September was a bit optimistic in terms of recalibrating the show’s format and logistics in order to comfortably comply with health-and-safety protocols. Instead, IBC 2021 will be held 3-6 December this year. The article quotes IBC CEO Michael Crimp as saying “safety and readiness to engage” were the main reasons for the additional postponement.
IFA Berlin 2021 Scuttled
Meanwhile, an uneven global recovery from the COVID-19 pandemic continues to play havoc with the technology world’s in-person events. According to a recent report on the Pro Video Coalition site, for instance, a desire to “avoid taking risks” has caused organizers of the IFA 2021 consumer electronics show to cancel their planned IFA Berlin 2021 event, originally planned for September, and replace it with a virtual version. The article says the rise of COVID-19 variants in Germany and elsewhere was the primary reason for the decision. It states that organizers expressed concern over “new global health uncertainties,” suggesting that despite global vaccination success overall, things were not moving “as fast in the right direction as hoped for.”
ASC Re-elects Lighthill as President
The American Society of Cinematographers (ASC) Board of Governors announced recently that the organization had re-elected Stephen Lighthill, ASC, to serve a second consecutive one-year term as ASC president. The ASC re-elected Lighthill during a vote for its 2021-22 slate of organization officers. He will serve along with ASC vice presidents Amy Vincent, Steven Fierberg, and John Simmons. Lighthill is a well-known documentary cinematographer with a long list of television drama credits as well, and was the recipient of the 2017 ASC President’s Award.