Hot Button Discussion
Compression Down the Chain
By Michael Goldman
The topic of compression has never been a sexy one, but inside the increasingly complex production ecosystem that serves the modern, multi-platform, high-resolution broadcasting industry, few subjects are more important. That’s the view of Jean-Baptiste Lorent, product and marketing manager of Belgium-based IntoPIX, an independent image technology company deeply involved in various initiatives related to new and evolving compression schemes. As Lorent points out, with more data and larger files involved with capturing, processing, and transmitting broadcast image files, “we have a lot of challenges to solve in the full chain—all the way from initial image capture to the final display of those images, plus storage, bandwidth, and interface issues.”
In some categories, reliable codecs are firmly in place to handle compression needs at certain links in the chain. The ITU’s H.265 [HEVC] standard, for example, is designed to significantly reduce data rates to just a few megabits per second, saving bandwidth while delivering 4K images with minimal data loss or artifacts visible on the viewer’s monitor. For specific applications, various mezzanine compression schemes have been serving the industry well for some time. Lorent emphasizes, however, that when moving to use-case requirements for compression in the live production workflow in particular, an area growing in importance daily and with even more pressing needs looming, it is important to contemplate compression from the aforementioned “full chain” point of view.
“For the full chain, from acquisition to the display itself, what we are doing in terms of image processing is pretty amazing,” he says. Lorent explains that in the camera, de-Bayering creates three times more pixels than the sensor acquires, and frame buffer compression may already be used there. For connectivity, compression is sometimes used to get onto a 1-GbE or 10-GbE network, depending on the video format and infrastructure. The next step is editing, which also requires compression [so that editors can work efficiently using proxy images]. The broadcaster will eventually use heavy compression, such as HEVC, sometimes down-converting to 4:2:0 or 8-bit, to get content to a device today, whether a set-top box, phone, tablet, or smart TV. [The device] will also need decompression. In the future, it might be necessary to drive 8K images from the set-top box to the TV, which may also require compression, because the bandwidth will be so high that it will consume too much power, or the cable costs will be too high. In the TV itself, larger resolutions and higher frame rates will require reducing the complexity of the frame buffer. This also means more scalers, because content often won’t be in the TV’s native resolution. “So it all depends on the use case scenarios—what problems are we trying to solve?” he says.
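The data volumes Lorent describes are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is an illustration, not a figure from the article: the format, frame rate, and bit depth are assumptions chosen for the example. It computes uncompressed data rates and shows how de-Bayering triples the sensor's data:

```python
# Back-of-the-envelope data rates for the stages Lorent describes.
# All format figures here are illustrative assumptions.

def raw_rate_gbps(width, height, fps, bit_depth, components=3):
    """Uncompressed video data rate in gigabits per second."""
    return width * height * fps * bit_depth * components / 1e9

# A Bayer sensor reads out one color sample per photosite; de-Bayering
# interpolates the other two components, tripling the pixel data.
sensor = raw_rate_gbps(3840, 2160, 60, 10, components=1)
debayered = raw_rate_gbps(3840, 2160, 60, 10, components=3)

print(f"sensor readout : {sensor:.2f} Gbit/s")   # ~4.98 Gbit/s
print(f"after de-Bayer : {debayered:.2f} Gbit/s")  # 3x the sensor rate
```

Rates of this magnitude, well beyond a 10-GbE link for a single 4K 60p stream, are why compression appears at nearly every link in the chain.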
Lorent emphasizes that the goal “is to be able to optimize the visual experience on multiple devices, on the whole chain, to reduce the complexity, the cost of implementation, and the power consumption. For all those stages, you need compression.”
While he allows that an uncompressed signal is always best in a perfect world, the ultra-high-resolution broadcast realm simply makes that less and less of an option at most steps along the way. Therefore, “some new applications in the market will now require very low compression to replace the idea of uncompressed,” Lorent states. “This is a new trend in the production workflow. We are speaking more about lightweight compression [now] than ever before.” There is no clear definition for that term, he says. The best definition is simply the closest you can get to uncompressed: lightweight compression means very low latency, like uncompressed; very high quality, similar to uncompressed; and very low complexity, like uncompressed. “You do not want to add too much complexity into the workflow with lightweight compression,” he says.
Lorent points to frame buffer compression as an example of this trend now that high-resolution cameras are becoming ubiquitous. “If you designed a camera in the past for 60p HD, and then suddenly we moved to 120p HD, one way to keep consuming the same power on the battery would be to add frame buffer compression with 2:1 compression,” he says. “That would keep the same amount of embedded memory within the camera. This is a simple example, but you can apply the same idea if you move from HD to 4K or 4K to 8K in the camera—you would just have a factor of 2:1 or 4:1 in the data rate. That is the kind of value that can be extracted from lightweight compression, while still keeping very low latency, just a few lines, and very low complexity while avoiding consuming too much power. That is just a small example for internal memory solutions.”
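Lorent's frame-buffer arithmetic can be sketched the same way. In this illustrative example (the HD frame geometry and 10-bit depth are assumptions), 2:1 frame-buffer compression halves the memory per frame, offsetting the doubling in frame rate:

```python
# Lorent's frame-buffer example: moving from 60p to 120p doubles the frame
# data handled per second, but 2:1 frame-buffer compression keeps the
# embedded-memory footprint where it was. Sizes are illustrative.

def buffer_mb(width, height, bit_depth, components=3, ratio=1):
    """Size of one frame buffer in megabytes after ratio:1 compression."""
    bits = width * height * bit_depth * components / ratio
    return bits / 8 / 1e6

hd_frame = buffer_mb(1920, 1080, 10)             # one uncompressed HD frame
hd_compressed = buffer_mb(1920, 1080, 10, ratio=2)

# Two compressed 120p frames occupy the memory of one uncompressed 60p frame.
print(f"uncompressed frame : {hd_frame:.2f} MB")
print(f"2:1 compressed     : {hd_compressed:.2f} MB")
```

The same factor-of-2 or factor-of-4 logic applies to the HD-to-4K and 4K-to-8K jumps Lorent mentions.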
Lorent says lightweight compression schemes are also essential for other applications, including real-time interface solutions for SDI and IP connectivity. But the live production workflow arena is where “we are seeing some interesting activities right now,” he says. “We are looking to replace SDI eventually. We are looking for a codec that brings the same values as uncompressed—low latency; very few lines of pixels; visually lossless quality; able to support 4:2:2, 4:4:4, and 8-bit, 10-bit, and even 12-bit, yet also compliant with [high dynamic range]. So we need the same values that we can bring into the broader production workflow.”
Lorent points out that, with today’s industry landscape, “the most deployed and implemented compression technology overall for ultra-low-latency” is wavelet-based JPEG 2000. He adds that JPEG 2000 is not lightweight compression and is fairly complex. It was originally designed for grueling applications, such as digital cinema, archiving, storage, and contribution, among other things. In recent years, new profiles of JPEG 2000 have been developed, including a low-latency mezzanine compression version that has been widely used for live studio over IP applications in the broadcast world. However, that profile is not ideal lightweight compression, because it targets high compression ratios, up to as much as 20:1, according to Lorent. This still makes JPEG 2000 useful for remote production in the UHD world, where requirements would typically necessitate up to 10:1 to 20:1 compression, due to lower bandwidth requirements and considerations outside the studio, he says.
“Obviously, the more bandwidth you try to reduce, the higher coding complexity you have in the algorithm,” Lorent says. “JPEG 2000 is complex, compared to what could be needed in the studio, but far less complex than HEVC. HEVC is far more complex in its silicon footprint, external DDR memory and latency, but brings significantly higher compression with it.”
The industry has developed TICO, a wavelet-based, lightweight mezzanine compression scheme for such applications—an initiative that Lorent’s company, IntoPIX, and others have been intimately involved with through the TICO Alliance. He says TICO was essentially an initiative “to address a lightweight implementation of JPEG 2000, and now [some] manufacturers have adopted it as they moved from SDI to IP. Thus, [broadcasters] can carry HD signals over 1-GbE, or 4K over 10-GbE, or even 4K over existing 3G SDI cable. We wanted a codec that could have low latency like JPEG 2000, but also be more lightweight, and run fast in software in a CPU in a [virtual production environment].”
In 2015, TICO was submitted as a SMPTE Registered Disclosure Document (RDD-35), and JPEG 2000 and TICO are now widely deployed across the industry. Lorent points out, however, that there are others, including the BBC’s VC-2 low-delay video codec, which has been defined as SMPTE ST-2042, and Sony’s proprietary network media protocol, Sony LLVC (low-latency video codec), a video-over-IP interface that is part of Sony’s interoperable IP live-production initiative and has been approved as a Registered Disclosure Document (RDD-34). Also, in late 2015, NewTek announced an open protocol for IP production called Network Device Interface (NDI) that includes a compression scheme as well.
The newest development, according to Lorent, is the JPEG-XS initiative, emanating out of the JPEG Committee, with ‘XS’ standing for “eXtra Speed.” He says the idea behind JPEG-XS is that “it is intended for live production workflows, but with low-latency and lightweight compression.”
“They wanted a mezzanine compression codec that could run fast in the field without [overly taxing bandwidth], essentially for broadcast and pro-AV applications,” he explains. “Right now, they are just at the starting block, having recently made a call for technology to begin the standardization process.”
In particular, initiatives like these are important because broadcasters are transitioning to an “eventual virtual model” for their broadcast plants, Lorent notes. This means “the broadcast studio will become more of a data center,” he says. Whether the industry will coalesce around a single compression scheme for such facilities is not something Lorent expects to see any time soon, largely because every facility and institution will make the transition at its own pace, with different resources and agendas.
“The compression scheme they might choose depends on the speed of a facility’s network,” he says. “So not everybody will be able to put an infrastructure together for 10-GbE, and then tomorrow for 25-GbE or 40-GbE or 100-GbE. The adoption will occur at different times for each facility. If you started today, you might build your facility to pick up a 10-GbE network infrastructure. If you have 10-GbE, you will probably be able to do HD uncompressed everywhere. But when you start to address formats like 4K or 8K, or higher frame rates, you will probably need lightweight compression.”
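Lorent's rule of thumb here, that HD fits a 10-GbE network uncompressed while 4K does not without lightweight compression, can be checked with a simple feasibility function. This is a sketch over payload-only rates, ignoring IP encapsulation overhead, and the formats are illustrative assumptions:

```python
# Does a given format fit a given link, uncompressed or with a
# lightweight compression ratio? Payload rates only; real networks
# need headroom for encapsulation overhead.

def fits(width, height, fps, bit_depth, link_gbps, ratio=1, components=3):
    """True if the video payload fits within the link capacity."""
    gbps = width * height * fps * bit_depth * components / ratio / 1e9
    return gbps <= link_gbps

# HD 60p 10-bit (~3.7 Gbit/s) fits a 10-GbE link uncompressed...
print(fits(1920, 1080, 60, 10, link_gbps=10))           # True
# ...but 4K 60p 10-bit (~14.9 Gbit/s) does not...
print(fits(3840, 2160, 60, 10, link_gbps=10))           # False
# ...until roughly 2:1 lightweight compression is applied.
print(fits(3840, 2160, 60, 10, link_gbps=10, ratio=2))  # True
```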
By contrast, for remote production outside a studio, Lorent points out that “a 1-GbE network will probably be the most affordable approach, reducing payment to a Telco operator to get that bandwidth. As the cost of the bandwidth is much higher than what it really costs within your facility, you will look at a compression scheme that can keep the latency low, but still address a compression ratio more around 10:1 to 20:1. That is exactly what the ultra-low-latency profile of JPEG 2000 can do. For that, today, we are seeing adoption of TICO within such facilities, because you have 10-GbE or 3G SDI [in the facility]. If you have remote productions outside the facility, you will need 1-GbE, 4K abilities eventually, or, today, you may need HD at 200 Mbits/sec or multiple HD channels to carry below the 1-GbE network. In such cases, both low latency and lossless quality will be critical. Thus you will need a higher compression ratio, and might decide to use JPEG 2000, even if it has a higher complexity cost. So in that sense, the cost of JPEG 2000 would be nothing compared to the benefit it would bring to the application when you are outside the studio in a contribution link. It would all just depend on your particular needs.”
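The remote-production numbers in this passage are mutually consistent. A quick calculation (assuming 4:2:2 sampling, which averages two components' worth of data per pixel) shows that squeezing HD down to 200 Mbits/sec implies a compression ratio inside the 10:1 to 20:1 range Lorent cites for JPEG 2000:

```python
# Compression ratio needed to fit a format into a target contribution
# bit rate. The HD 60p 10-bit 4:2:2 figures are illustrative assumptions.

def required_ratio(width, height, fps, bit_depth, target_mbps, components=2):
    """Compression ratio needed to reach a target bit rate (4:2:2 default)."""
    raw_mbps = width * height * fps * bit_depth * components / 1e6
    return raw_mbps / target_mbps

# ~2488 Mbit/s raw, squeezed to 200 Mbit/s for a 1-GbE contribution link:
ratio = required_ratio(1920, 1080, 60, 10, target_mbps=200)
print(f"HD 60p 10-bit 4:2:2 at 200 Mbits/sec needs about {ratio:.1f}:1")
```

A ratio in this range also leaves room on a 1-GbE link for multiple HD channels, as Lorent describes.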
In any case, as the industry moves forward with maturing and implementing these schemes and future ones, Lorent states that in addition to lightweight compression capabilities, a good compression codec should offer good error resilience; typically, JPEG 2000 and TICO behave very well in that regard. “You don’t want to end up with much inter-frame compression. If you lose some information on a key frame, you can end up with big artifacts,” he says.
“Another big concern when using compression in production is to keep a visually lossless quality after multiple encoding and decoding generations. You do not want to lose quality at each encoding, so it is very important to be robust for multiple encodings/decodings.”
Possibly most important, Lorent says, the industry is seeking interoperability in new compression schemes. He emphasizes that standards bodies and manufacturers are working hard to make sure there is some form of synergy in this area, particularly where IP transport in studios and mezzanine compression generally are concerned. SMPTE, the JPEG Committee, the Video Services Forum (VSF), the ITU’s Joint Collaborative Team on Video Coding (JCT-VC), the Video Electronics Standards Association (VESA), and others are “definitely on top of these issues,” he says.
“They are all doing a great job, with much complementary work. The VSF has important activities going on right now for contribution links and remote productions outside the studio. They are working on the existing VSF TR-01, which encapsulates JPEG 2000 over SMPTE ST 2022-12; originally built for HD, it is now being extended for 4K and 8K, and also to address ultra-low latency with JPEG 2000 for remote production. The SMPTE 10E Group worked on several compression specifications that were made either as an RDD or as a standard, and JPEG—with several liaisons in place with VSF and SMPTE—is working hard on JPEG 2000 profiles for ultra-low-latency, IMF, the next generation of digital cinema decoders, and of course, JPEG-XS lightweight low-latency coding.”
Consumers Notice 4K, IP, HDR
Some recent business reports by Deborah McAdams on the TV Technology site illustrate the rapid influence that new broadcast technology improvements are having on consumer trends, even in less than perfect economic times. The first report states that adding high dynamic range features to 4K TVs is having an immediate payoff for manufacturers. According to McAdams’ article, a mid-year study by the Consumer Technology Association (CTA) suggests that 2016 will end up as “a flagship year for 4K UHDTVs, driven in part by the market introduction of next-generation technologies like HDR.” The CTA report suggests that shipments of 4K UHD TVs could reach 15 million units, which represents a 105 percent increase, and revenue of almost $13 billion. Meanwhile, another piece from McAdams says that a report titled “Broadcasting Equipment Market: Asia Pacific Industry Analysis and Forecast, 2016-2024” indicates that the broadcasting equipment market in the Asia-Pacific (APAC) region is expected to double within nine years, as a direct result of 4K over IP broadcasting infiltrating the APAC market. The report suggests that, over the next few years, 4K services will become widely available on IP networks in the region.
VCR Officially Dead
A revolutionary technology passed away at the end of July when the Funai Corporation of Japan stopped making VCRs. A recent feature story in the New York Times reported on the Funai announcement about this development, and reviewed the importance of the VCR and how it fundamentally changed the world of consumer entertainment since the Ampex Company introduced its first “practical videotape recorder” in 1956, according to the article, which also details the first demos of the Ampex recorder and the reactions of the executives in the room at the time. The article states that the first consumer VCRs followed in 1960 and became a must-have technology for households by the 1980s before the advent of DVDs brought about the technology’s eventual demise. It also states that Funai officials declared they sold 750,000 VCRs in 2015, believe it or not, but that they are now having “difficulty finding parts.”
Facebook Tests Aquila Drone
A recent story in Wired details the development and launch in June of a potentially important new concept in Internet communications—Facebook’s Aquila flying drone. According to the article, Aquila was designed by Facebook for the express purpose of “providing Internet access in remote parts of the world.” It will eventually do so as a carbon-fiber, 140-ft drone that flies over such regions on autopilot, equipped with solar panels, high-altitude batteries, Internet antennas, and a range of other technologies that will make it possible, according to the article, for Aquila to beam a wireless Internet signal to people who currently have no Web access. The June test followed previous tests by Facebook, Google, and others with smaller prototypes and high-altitude balloons. It lacked the web-broadcasting gear the drone will eventually carry, did not achieve the heights Facebook officials claim it will eventually attain, and it only flew for about 96 minutes. But, even with a variety of issues to address before such solutions become practical, the article suggests Facebook, Google, and other Web giants will keep working until all the issues are addressed. Eventually, they might even give away blueprints and cooperate with governments and Internet service providers on launching their own Web drones for the simple reason that doing so will greatly extend the range of their own Internet platforms.
SMPTE Centennial Update
As SMPTE prepares to culminate its Centennial celebration at the Centennial Gala, the Society expresses gratitude to its hard-working and dedicated Gala Committee Members: Naida Albright, Wendy Aylsworth, Dan Burnett, Angelo D’Alessio, Pat Griffis, Charlie Jablonski, Patricia Keighley, Pete Ludé, and Peter Wharton. Plans are under way for a spectacular event, and all are welcome to join!
Over the course of the past few months, SMPTE has been engaged in various initiatives to honor the past and present and look confidently and enthusiastically to the future. These initiatives include the establishment of a SMPTE Centennial website, the production of an oral history and video documentary, the writing and editing of a commemorative book as well as a chronicle highlighting Honorary Members, and the integration of a SMPTE historical exhibit as part of the SMPTE 2016 Annual Technical Conference & Exhibition. Fundraising activities to support The Next Century Fund and invest in our three pillars of Standards, Education and Membership continue, and we are grateful to all those who have made generous contributions to support that effort.
Throughout a long and illustrious history, the Society has remained the leader in the development of standards for the motion-imaging arts, sciences, and industry. Credit goes to the thousands of members who have made reaching this important milestone possible. Thank you to all those who have provided the innovative thinking, leadership, and knowledge to keep SMPTE strong and relevant through challenging times and a constantly evolving landscape.
For more information about the Centennial, please contact Mary Vinton, Director of Philanthropy.