<img height="1" width="1" src="https://www.facebook.com/tr?id=414634002484912&amp;ev=PageView%20&amp;noscript=1">
Donate
Check out SMPTE at NABSHOW24
Donate

Using open cloud infrastructure for live production

November 17, 2020

As cloud computing and cloud services become ubiquitous for business, it's no surprise that broadcasters are also looking to use the cloud for live production. However, current cloud connections are not up to the quality standards required for low-latency live streaming. Therefore, research into quality-enhancing technologies such as retransmission, or Automatic Repeat reQuest (ARQ), is crucial to improving the network infrastructure required to deliver broadcast-quality transmissions.
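As a rough illustration of the retransmission idea, the sketch below shows, in Python, how a receiver can detect gaps in a sequence-numbered stream and build a list of packets to request again (a NACK list). The sequence numbering and the example values are illustrative assumptions, not the wire format of RIST, SRT, or any other real protocol.

    # Minimal sketch of NACK-based retransmission (ARQ) on the receiver side.
    # Sequence numbering and example values are illustrative assumptions only.

    def detect_losses(received_seq_numbers):
        """Return the sequence numbers that should be NACKed.

        A gap between consecutively received sequence numbers indicates
        packets lost in transit that must be requested again before the
        playout buffer drains.
        """
        missing = []
        expected = received_seq_numbers[0]
        for seq in received_seq_numbers:
            while expected < seq:
                missing.append(expected)   # gap -> candidate for a NACK
                expected += 1
            expected = seq + 1
        return missing


    if __name__ == "__main__":
        # Packets 3, 4 and 7 never arrived; the receiver asks for them again.
        arrived = [0, 1, 2, 5, 6, 8, 9]
        print("NACK list:", detect_losses(arrived))   # -> [3, 4, 7]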

The potential use of the cloud in live production

Historically, broadcasters allocated resources for both production and networking based on peak utilization. This was financially challenging because it left most equipment unused for considerable periods. 

 

On-premises resources can be allocated for average use, with cloud resources absorbing peak demand, which provides a more elastic cost model. The advantages of using cloud services are plentiful: a more flexible cost model, a higher degree of reusability, and greater resilience.

Cloud resources

While broadcasters can invest in their own cloud infrastructure, with the rise of public cloud environments it often makes more sense for them to use services such as Amazon Web Services (AWS) or Microsoft Azure Media Services. However, public cloud environments come with their own set of challenges. For example, a video stream often traverses several hops outside the data center and outside the provider's operational responsibility.

Cloud ingesting of live video

Getting content to the cloud may sound simple. After all, most broadcasters have business connections to the internet, connecting to massive internet core routers, which are generally overprovisioned. But, much like the challenges that SMPTE ST 2110 is solving for wide-area networks (WANs), the problem is not necessarily capacity; it is loss, jitter, and latency, where even a fraction of a percentage point can have a massive visual impact.

Consequently, major cloud providers are now introducing support for retransmission protocols to carry video into their media cloud services. This goes hand in hand with falling bandwidth costs: price points are finally reaching levels where large-scale distribution starts to make sense, in addition to processing and contribution.

Low latency streaming

Two fundamental properties of compressed video streams determine how well retransmission technology performs.

  • Loss sensitivity: Largely determined by the properties of the embedded compression codec, packet loss can be challenging for streaming media. In an MPEG stream, for example, losing an I-frame can mean, in the worst case, several seconds of outage, depending on the length of the group of pictures (GOP).
  • Latency sensitivity: While this depends on the application itself, latency could be a deal-breaker for using retransmission technology, although in most cases the codec delay is the dominant factor in the end-to-end chain (see the sketch after this list).
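To make the two sensitivities concrete, here is a back-of-the-envelope sketch in Python. The GOP length, frame rate, and RTT values are illustrative assumptions rather than figures from the study.

    # Rough arithmetic behind loss sensitivity and latency sensitivity.
    # All input values are illustrative assumptions.

    def worst_case_outage_s(gop_length_frames, frame_rate_fps):
        """Losing an I-frame can leave the decoder without a clean reference
        until the next I-frame, i.e. up to one full GOP of video."""
        return gop_length_frames / frame_rate_fps


    def arq_latency_budget_ms(rtt_ms, retransmission_rounds):
        """Each retransmission attempt costs roughly one round trip, so the
        receive buffer must hold at least rounds * RTT of media."""
        return rtt_ms * retransmission_rounds


    if __name__ == "__main__":
        # A 2-second GOP at 50 fps: an unrecovered I-frame loss can disturb
        # up to 2 seconds of video.
        print(worst_case_outage_s(gop_length_frames=100, frame_rate_fps=50))  # 2.0 s

        # With a 50 ms RTT and four retransmission rounds, ARQ alone adds
        # about 200 ms of buffering latency.
        print(arq_latency_budget_ms(rtt_ms=50, retransmission_rounds=4))      # 200 ms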

Types of open platforms

Two solutions have emerged as front-runners in the quest for the next de facto standard for broadcast retransmission: Reliable Internet Stream Transport (RIST) and Secure Reliable Transport (SRT). To explore these technologies, engineers at Net Insight in Sweden undertook a series of studies to test each and determine which is the most effective.

Comparison and testing

Each experiment consisted of running one protocol and measuring the residual drop rate at stepwise increasing line drop rates. The residual drop rate is defined, for the content stream carried over the protocol, as the number of unrecovered (permanently lost) packets in proportion to the total number of content packets. Each sample point was run four times, at 15 min each, for consistency.
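As a minimal sketch of the metric, the Python snippet below computes the residual drop rate from two counters. The packet counts are hypothetical, chosen only to show the order of magnitude for a 15-minute run of a 7 Mb/s stream.

    # Residual drop rate: the share of content packets the retransmission
    # protocol could not recover. The counters below are hypothetical.

    def residual_drop_rate(total_content_packets, unrecovered_packets):
        """Unrecovered (permanently lost) packets divided by all content packets."""
        return unrecovered_packets / total_content_packets


    if __name__ == "__main__":
        # A 15-minute run of a 7 Mb/s stream with ~1300-byte payloads carries
        # on the order of 600,000 packets; assume 12 were never recovered.
        total = 600_000
        lost_for_good = 12
        print(f"residual drop rate: {residual_drop_rate(total, lost_for_good):.2e}")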

Two different available link bandwidths were investigated:

  • 10 Mb/s, corresponding to a 42.9% available overhead over the 7 Mb/s stream, and
  • 8 Mb/s, corresponding to a 14.3% available overhead.

The retransmission buffer in each protocol was set to 200 ms, or four times the round-trip time (RTT); the sketch below reproduces these figures.
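The arithmetic behind the test setup is simple; this sketch reproduces it, assuming the 50 ms RTT implied by a 200 ms buffer at four times the RTT. These are the numbers quoted in the text, not new measurements.

    # Spare retransmission overhead on each link, plus the buffer/RTT relation.
    # The 50 ms RTT is implied by 200 ms = 4 x RTT; nothing here is measured.

    STREAM_BW_MBPS = 7.0   # the content stream under test


    def overhead_pct(link_bw_mbps, stream_bw_mbps=STREAM_BW_MBPS):
        """Spare capacity available for retransmissions, relative to the stream rate."""
        return (link_bw_mbps - stream_bw_mbps) / stream_bw_mbps * 100


    if __name__ == "__main__":
        print(f"10 Mb/s link: {overhead_pct(10):.1f}% overhead")        # 42.9%
        print(f" 8 Mb/s link: {overhead_pct(8):.1f}% overhead")         # 14.3%
        print(f"retransmission buffer: {4 * 50} ms for a 50 ms RTT")    # 200 ms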

Results

The engineers found that, while the reference implementation is both mature and field-proven, both RIST and SRT come close to it and, in some cases, even surpass it in performance, at least when looking at ARQ alone.

Ultimately, for the most common applications, such as a 10 Mb/s high-efficiency video coding (HEVC) signal carried across the internet over a reasonably lossy network (<3% packet loss), most of these options will do an exceptional job of delivering the service.

What is evident from the benchmark is that where bandwidth is not the limiting factor, SRT may be the better option, at the expense of higher protocol overhead. Equally, if bandwidth is the limiting factor, RIST could be seen as the better option. For high-end services, RIST also has a superior feature set in terms of redundancy.

RIST and SRT both have backing from a large number of vendors and broadcasters. But they enter a market that is already a decade in the making, and technically sound alternatives that exist today are likely to remain in use for many years, augmented by these new challengers.

This change is, in fact, already happening, with many large and established vendors adding support for RIST, SRT, or both to their existing products so they can bridge and exchange content with other domains.

Future plans

But that's not the end of the story. There are still many technologies that have yet to be implemented or tested.

  • Jitter characteristics: We can study the jitter characteristics that different implementations add. Ideally, the pattern of interpacket gaps on the sender's input could be recreated exactly on the receiver's output (see the sketch after this list).
  • End-to-end latency: Another exciting area to study is the full end-to-end latency where each protocol has an identically sized latency buffer.
  • Congestion control: We've only started looking at the limits regarding congestion control, which should be explored further.
  • Quality of experience: Studying the quality of experience instead of pure packet drop statistics is also required. Consumers have come to accept lossy audio compression such as MP3 without feeling they are missing anything. The acceptable limits of cloud-based broadcasting could also be explored.
  • High-bandwidth ARQ transport: As the performance of cloud services improves, similar benchmarks at much greater bandwidths, such as a 3 Gb/s JPEG XS-compressed ultrahigh-definition (UHD) signal, would be of interest. As bandwidths grow well into the tens of Gb/s, there are also performance aspects of the platform itself. For example, from a CPU/memory perspective, is there a tradeoff between ARQ and forward error correction (FEC) purely in terms of execution at these high speeds? Only time will tell.
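As one possible starting point for the jitter study mentioned above, the sketch below compares the interpacket gap statistics of a sender-side capture with those of a receiver-side capture. The timestamp lists are hypothetical, not real traffic, and real tooling would of course capture far more packets.

    # Compare interpacket gap statistics on the sender's input and the
    # receiver's output. The timestamp lists are hypothetical captures.

    from statistics import mean, pstdev


    def interpacket_gaps_ms(arrival_times_ms):
        """Gaps between consecutive packet timestamps, in milliseconds."""
        return [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]


    def gap_stats(arrival_times_ms):
        gaps = interpacket_gaps_ms(arrival_times_ms)
        return mean(gaps), pstdev(gaps)


    if __name__ == "__main__":
        sender_input = [0.0, 1.5, 3.0, 4.5, 6.0, 7.5]      # evenly paced source
        receiver_output = [0.0, 1.4, 3.3, 4.4, 6.2, 7.5]   # pacing disturbed in transit

        for label, capture in (("sender input   ", sender_input),
                               ("receiver output", receiver_output)):
            m, s = gap_stats(capture)
            print(f"{label}: mean gap {m:.2f} ms, jitter (std dev) {s:.2f} ms")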