
Why cloud technology is driving better video quality

January 5, 2021

New media applications such as virtual reality (VR) and 8K are introducing computing challenges from a resolution perspective and from a codec perspective. Combine that with the increasing ubiquity of cloud computing, and you have the perfect conditions for the evolution of better video technologies.

The different types of encoding resources

Efficient storage and broadcasting of HD and HDR video require hardware and software to manage video encoding. These encoding solutions can be appliance-based, software-based, cloud-based, or a mixture of the three.

  • Appliance-based encoding uses fixed-function computer resources (CPU/GPU/FPGA/ASIC) and is limited by hardware capability.
  • Software-based encoding is run on various programmable resources (CPUs) and is also limited by available hardware resources.
  • Cloud-based encoding takes advantage of elastic computing resources (CPU/GPU) and is only limited by node availability.

As resolutions increase and as more complex codecs are developed, the way encoding resources are handled will need to change.

The challenges of complex video codecs

At the FIFA World Cup in 2018, matches were encoded in UHD-1 in HEVC using an appliance-based solution. At the time, the fastest available appliance was based on a high-end Intel dual Xeon processor and a very powerful graphics processing unit (GPU) card.

But now, there is a range of new and more complex codecs available for broadcasters, including:

  • AVC, which is based on the Harmonic PURE compression engine.
  • HEVC, which is also based on Harmonic PURE but offers 50% better compression than AVC.
  • AV1, developed by the Alliance for Open Media, which is on par with or even better than HEVC.
  • VVC, which is still under development with the aim of reaching 50% better compression than HEVC.

These new codecs enable broadcasters and service providers to deliver high-quality video, including 4K, 8K, and virtual reality (VR). But because adaptive streaming requires multi-resolution synchronized encoding, the delivery of next-generation video codecs has become even more challenging.

The impact of resolution

But it's not just codecs that are driving the development of more efficient video technologies. In many ways, video resolution has much more impact than codec complexity. The broadcasting market today is split between high definition (HD), the established resolution standard, and ultra-high definition (UHD-1, or 4K), which is considered the upcoming format in developed countries.

UHD-1 is about 18 times as complex as HD, but the technologies required to cope with UHD-2 (8K) broadcasting and VR could be as much as 384 times as complex as HD.
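
To see where such multipliers come from, a back-of-the-envelope sketch of raw pixel rates helps. The frame rates below are illustrative assumptions; the article's larger factors also fold in codec complexity, bit depth, and (for VR) multi-view processing.

```python
# Back-of-the-envelope pixel-rate comparison. Frame rates are
# illustrative assumptions, not figures from the article.

def pixel_rate(width, height, fps):
    """Raw pixels that must be processed per second."""
    return width * height * fps

hd   = pixel_rate(1920, 1080, 30)    # HD baseline
uhd1 = pixel_rate(3840, 2160, 60)    # UHD-1 (4K) at a higher frame rate
uhd2 = pixel_rate(7680, 4320, 120)   # UHD-2 (8K), assumed high frame rate

print(f"UHD-1 vs HD pixel rate: {uhd1 / hd:.0f}x")   # 8x
print(f"UHD-2 vs HD pixel rate: {uhd2 / hd:.0f}x")   # 64x
# The article's 18x and 384x complexity factors are larger because they
# also account for codec complexity and format overheads, not raw
# pixel count alone.
```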

Realistically, UHD-2 is not yet a good fit in terms of processing-power requirements. The TV and cinema industries will have to wait several semiconductor cycles (of roughly 18 months each) to bridge the computing gap and bring UHD-2 into the realm of a more affordable technology.

Cloud computation and encoding requirements

To encode an adaptive bitrate (ABR) stack in UHD-1 HEVC, tight synchronization is required to keep all the different resolutions IDR-aligned, meaning their instantaneous decoder refresh frames land at the same instants. This enables seamless switching on the player side.
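
As a minimal sketch of what IDR alignment means in practice: every rendition forces an IDR frame at the same timestamps, so the player can switch at any segment boundary. The ladder, frame rates, and 2-second segment duration below are assumptions.

```python
# Minimal sketch: compute aligned IDR (keyframe) positions for an ABR
# ladder. The renditions, frame rates, and 2 s segments are assumptions.

SEGMENT_SECONDS = 2.0

ladder = [
    {"name": "2160p", "fps": 60},
    {"name": "1080p", "fps": 60},
    {"name": "720p",  "fps": 30},
]

def idr_frame_indices(fps, duration_s, segment_s=SEGMENT_SECONDS):
    """Frame indices that must be IDR frames so that segment boundaries
    land on identical timestamps in every rendition."""
    frames_per_segment = round(fps * segment_s)
    total_frames = round(fps * duration_s)
    return list(range(0, total_frames, frames_per_segment))

for r in ladder:
    indices = idr_frame_indices(r["fps"], duration_s=10)
    times = [i / r["fps"] for i in indices]
    print(r["name"], "IDR frames at seconds:", times)
# Every rendition reports IDR frames at t = 0, 2, 4, 6, 8 s even though
# the frame indices differ; this shared timeline is what allows the
# player to switch renditions seamlessly.
```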

From a practical standpoint, it is possible to encode UHD-1 using AV1. At the 2019 NAB Show, Harmonic showed Intel SVT-AV1-based live encoding at 1080p60 running in real time on a dual Xeon Skylake processor.

For a UHD-1 encoder, the video needs to be split into 10 tiles, each mapped to one node. This creates some overhead on both the computation and compression-efficiency sides, estimated at about 15% versus single-frame encoding.
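
Below is a hypothetical sketch of that split. The 5x2 tile grid is an assumption; the article only gives the tile count and the ~15% overhead figure.

```python
# Hypothetical sketch: split a UHD-1 frame into 10 tiles (assumed 5x2
# grid) mapped to one node each, and estimate the cost of distributed
# encoding versus single-frame encoding.

FRAME_W, FRAME_H = 3840, 2160
COLS, ROWS = 5, 2          # 10 tiles, one per node (grid is an assumption)
OVERHEAD = 0.15            # ~15% overhead, per the article

tile_w, tile_h = FRAME_W // COLS, FRAME_H // ROWS
tiles = [(col * tile_w, row * tile_h, tile_w, tile_h)
         for row in range(ROWS) for col in range(COLS)]

print(f"{len(tiles)} tiles of {tile_w}x{tile_h} px, one per node")
effective_pixels = FRAME_W * FRAME_H * (1 + OVERHEAD)
print(f"Effective pixel cost vs single-frame encoding: "
      f"{effective_pixels:,.0f} (+{OVERHEAD:.0%})")
```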

Though UHD-1 AV1 encoding is possible, it is not yet practical for live applications. For VOD, it takes about 10 hours to encode one hour of content in single bitrate (SBR) mode and 30 hours in ABR mode.

The benefits of cloud computing in broadcasting

The complexity of live applications is unmanageable with current on-premises CPU and GPU technologies. Computation-hungry codecs such as AV1 need a twenty-fold increase in compute versus AVC, which is considered a legacy technology today. And ABR encoding must process roughly three times as many pixels as an SBR scheme.
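
For intuition, here is a small sketch of the ABR-versus-SBR pixel math. The ladder is an assumption, and the exact multiplier depends on the renditions chosen; the article quotes roughly three times for its own ladder.

```python
# Sketch: pixels an ABR ladder must process relative to a single
# bitrate (SBR) encode of the top rendition. The ladder is an
# assumption; the multiplier grows with every rendition added.

ladder = [(3840, 2160), (2560, 1440), (1920, 1080),
          (1280, 720), (960, 540), (640, 360)]

top_pixels = ladder[0][0] * ladder[0][1]
total_pixels = sum(w * h for w, h in ladder)

print(f"ABR/SBR pixel multiplier: {total_pixels / top_pixels:.2f}x")
```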

For this reason, the key to improving encoding speed may well be a cloud-based solution, either in part or in full.

By moving the encoding effort to the cloud, encoding can be spread across different nodes as required. Existing appliance-based and software-based encoding can still be used for base-level requirements, with cloud-based encoding resources available to handle peak loads when needed.
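
A minimal sketch of this base-plus-burst pattern, with all capacities and loads as illustrative assumptions:

```python
# Minimal sketch: on-premises encoders cover the steady-state load, and
# cloud nodes are requested only for the overflow. All capacities and
# loads are illustrative assumptions.

ON_PREM_CAPACITY = 40     # channel-units the installed encoders can handle
CLOUD_NODE_CAPACITY = 4   # channel-units one cloud node can absorb

def cloud_nodes_needed(current_load):
    """Cloud nodes to spin up once on-prem capacity is exhausted."""
    overflow = max(0, current_load - ON_PREM_CAPACITY)
    return -(-overflow // CLOUD_NODE_CAPACITY)  # ceiling division

for load in (30, 40, 52, 75):  # e.g., a primetime spike at 75
    print(f"load={load:>3} -> extra cloud nodes: {cloud_nodes_needed(load)}")
```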

Using content-aware encoding (CAE), as specified by the Ultra HD Forum, an additional 40% bandwidth saving can be realized over a wide range of content. These savings were confirmed by several companies on an Ultra HD Forum regatta test sequence, using both live video (encoded by Harmonic) and VOD content (encoded by Beamr and Brightcove), during public demonstrations of CAE at IBC 2018.
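
Purely as an illustration of the CAE idea, the sketch below scales a fixed-rate ceiling by a per-scene complexity score. The scores and the ceiling are invented; only the ~40% average saving figure comes from the article.

```python
# Illustrative CAE sketch: easy content is encoded well below the
# fixed-rate ceiling. Complexity scores and the ceiling are invented.

FIXED_BITRATE_MBPS = 20.0                     # fixed-rate ceiling (assumption)
scene_complexity = [0.4, 0.5, 0.9, 0.5, 0.7]  # hypothetical scores in [0, 1]

cae_bitrates = [FIXED_BITRATE_MBPS * s for s in scene_complexity]
avg = sum(cae_bitrates) / len(cae_bitrates)
print(f"Average CAE bitrate: {avg:.1f} Mbps "
      f"({1 - avg / FIXED_BITRATE_MBPS:.0%} saving vs fixed rate)")
```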

Elastic encoding

Cloud encoding can be used to adapt processing to the complexity of the business need. If a broadcaster needs better encoding during a primetime event, it can assign more cloud-based CPU resources within a node to a specific channel. Or, more drastically, additional nodes can be allocated to encode the channel.

This is known as the "cycle stat mux" (CSM) method: when multiple channels are encoded on the same node, the most challenging content preempts more CPU cycles, while lower-complexity channels get by with fewer. The process is, of course, dynamic, changing as content complexity evolves.
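
A hedged sketch of that idea, with invented channel names and complexity scores:

```python
# Sketch of the CSM idea: channels sharing one node receive CPU cycles
# in proportion to the current complexity of their content, re-balanced
# as complexity evolves. All names and scores are invented.

NODE_CYCLES = 100.0  # total cycle budget of the node (arbitrary units)

def allocate_cycles(channel_complexity):
    """Share the node's cycles proportionally to content complexity."""
    total = sum(channel_complexity.values())
    return {ch: NODE_CYCLES * cx / total
            for ch, cx in channel_complexity.items()}

# A sports channel hits a hard-to-encode sequence while a news channel
# shows a static studio shot:
complexity = {"sports": 0.8, "news": 0.2, "movies": 0.5}
for channel, cycles in allocate_cycles(complexity).items():
    print(f"{channel:>7}: {cycles:.1f} cycle units")
# Re-running the allocation as scores change preempts cycles for the
# hardest channel, mirroring the dynamic behavior described above.
```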

Moving forward

There are limits to using appliance-based solutions for UHD-1 in HEVC, and even more so for UHD-1 and UHD-2 with next-generation codecs. Cloud encoding naturally delivers higher quality for UHD-1 and UHD-2, as more resources can be dedicated per profile. UHD-1 is already economical in a cloud environment, where it is deployed today, whereas UHD-2 requires 48 times more CPU resources.

While UHD-2 HEVC doesn't offer the 50% bitrate saving of VVC, it increases complexity only by a factor of 8 to 16. This is not just more economical; it also provides interoperability with deployed 8K TV sets. In addition, it enables the use of new and promising techniques such as CAE and CSM, which respectively reduce the bitrate and increase the video quality.

But no matter what new codecs are developed or what resolution a broadcaster uses, it's clear that distributed encoding using cloud services will become increasingly common across all forms of broadcasting.
