As cloud computing and cloud services become ubiquitous for business, it's no surprise that broadcasters are also looking to use the cloud for live production. However, current cloud connections are not up to the quality standards required for low latency live streaming media. Therefore, research into quality-enhancing technologies, such as retransmission or Automatic Repeat reQuest (ARQ), is crucial to improving the network infrastructure required to deliver broadcast-quality transmissions.
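At its core, ARQ works by having the receiver watch sequence numbers and, on seeing a gap, send a negative acknowledgement (NACK) asking the sender to retransmit the missing packets. A minimal sketch of that idea follows; the function and field names are illustrative, not taken from any specific protocol:

```python
# Minimal sketch of NACK-based ARQ, the retransmission idea behind
# protocols such as RIST and SRT. Names here are illustrative only.

def receive(packet_seq, state):
    """Process an incoming sequence number; return the seqs to NACK."""
    nacks = []
    expected = state["next_expected"]
    if packet_seq > expected:
        # A gap: every missing sequence number is requested again.
        nacks = list(range(expected, packet_seq))
    state["next_expected"] = max(expected, packet_seq + 1)
    return nacks

state = {"next_expected": 0}
assert receive(0, state) == []      # in order, nothing missing
assert receive(1, state) == []
assert receive(4, state) == [2, 3]  # gap detected -> NACK seqs 2 and 3
```

A real implementation would also bound how long it keeps requesting a packet, since a retransmission that arrives after its playout deadline is useless.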
Historically, broadcasters allocated resources for both production and networking based on peak utilization. This was financially challenging because it left most equipment unused for considerable periods.
On-premises resources can instead be allocated based on average use, with cloud resources covering peak demand, yielding a more elastic cost model. The advantages of cloud services are plentiful: a more flexible cost model and a higher degree of reusability and resilience.
While broadcasters can invest in their own cloud infrastructure, with the rise of public cloud environments, it makes more sense for them to use services such as Amazon Web Services (AWS) or Microsoft Azure Media Services. However, the use of public cloud environments comes with its own set of challenges. For example, with public cloud environments, a video stream often traverses several hops outside of the data center and beyond the provider's operational responsibility.
Getting content to the cloud may sound simple. After all, most broadcasters have business connections to the internet, connecting to massive internet core routers, which are generally overprovisioned. But, much like the challenges that SMPTE ST 2110 is addressing for wide-area networks (WANs), the problem is not necessarily capacity. It is loss, jitter, and latency, where even a fraction of a percentage point of packet loss leads to massive visual impact.
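A back-of-the-envelope calculation shows why even a tiny loss rate is visually significant. Assuming, purely for illustration, a 10 Mb/s stream packetized into 1,316-byte payloads (seven MPEG-TS packets per IP packet; these figures are assumptions, not from the study):

```python
# Back-of-the-envelope: why even a tiny loss rate matters for video.
# Assumptions (illustrative): a 10 Mb/s stream carried in packets of
# 1,316 payload bytes (7 x 188-byte MPEG-TS packets each).

bitrate_bps = 10_000_000
payload_bits = 1_316 * 8
loss_rate = 0.001            # "only" 0.1 % loss

packets_per_second = bitrate_bps / payload_bits
lost_per_minute = packets_per_second * loss_rate * 60
print(round(packets_per_second))  # ~950 packets every second
print(round(lost_per_minute))     # ~57 lost packets per minute
```

Without recovery, tens of lost packets per minute means a visible glitch every second or two, which is why retransmission is essential even on links that look clean on paper.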
Consequently, major cloud providers are now introducing support for retransmission protocols to carry video into their media cloud services. This also goes hand in hand with a reduction in costs for bandwidth. Price points are finally coming down to where it starts to make sense for large-scale distribution, in addition to just processing and contribution.
Two fundamental properties are specifically applicable to retransmission technology and video compression.
Two solutions have emerged as front runners in the quest for the next de facto standard for broadcast retransmission: Reliable Internet Stream Transport (RIST) and Secure Reliable Transport (SRT). To explore these technologies, engineers at Net Insight in Sweden undertook a series of studies testing each to determine which is the most effective.
Each experiment consisted of running one protocol and measuring the residual drop rate at stepwise increasing line drop rates. The residual drop rate is defined as the number of unrecovered (permanently lost) packets as a proportion of the total number of content packets carried over the protocol. Each sample point was run four times, at 15 min each, for consistency.
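Under that definition, the residual drop rate is a simple ratio; a minimal sketch follows (the packet counts are made up for illustration):

```python
def residual_drop_rate(total_content_packets, unrecovered_packets):
    """Residual drop rate: packets permanently lost (unrecovered even
    after retransmission) as a fraction of all content packets sent."""
    return unrecovered_packets / total_content_packets

# e.g. 12 packets never recovered out of 1,000,000 sent:
rate = residual_drop_rate(1_000_000, 12)
print(rate)  # 1.2e-05
```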
Two different available link bandwidths were investigated:
The retransmission buffer in each protocol was set to 200 ms, or four times the RTT.
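Since each retransmission attempt costs roughly one round trip (NACK out, retransmitted packet back), a 200 ms buffer over a 50 ms RTT bounds the number of attempts to about four. This is a rough model that ignores processing and queuing delay:

```python
# Rough model (assumption, not from the study): each retransmission
# attempt costs about one RTT, so the buffer bounds how many attempts
# fit before the packet's playout deadline passes.

BUFFER_MS = 200   # retransmission buffer used for each protocol
RTT_MS = 50       # implied round-trip time (200 ms / 4)

max_attempts = BUFFER_MS // RTT_MS
print(max_attempts)  # 4
```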
The engineers found that, while the reference implementation is both mature and field-proven, both RIST and SRT come close to it and, in some cases, even surpass it in performance, at least when looking at ARQ alone.
Ultimately, for the most common applications, such as a 10 Mb/s high-efficiency video coding (HEVC) signal across the internet in a reasonably lossy network (<3%), most of these options will do an exceptional job delivering these services.
What is evident from the benchmark is that where bandwidth is not the limiting factor, SRT may be the better option, at the expense of higher protocol overhead. Conversely, where bandwidth is indeed limiting, RIST could be seen as the better option. For high-end services, RIST also has a superior feature set in terms of redundancy.
RIST and SRT both have backing from a large number of vendors and broadcasters. But this is a market already a decade in the making: technically sound incumbent solutions are likely to remain in use for many years, gradually augmented by these new challengers.
This change is, in fact, already happening, with many large and established vendors adding support for RIST, SRT, or both, to their existing products to be able to bridge and exchange content with other domains.
But that's not the end of the story. There are still many technologies that have yet to be implemented or tested.