Hot Button Discussion
Better Dynamic Range All Around
By Michael Goldman
With terms like ultra-high-definition (UHD), 4K, and “next-generation television” flying furiously through broadcast industry conversations these days, an obvious question is often overlooked: What is the core image improvement that consumers are most likely to notice, pay for, and, at the end of the day, care about? Many people generically say that “better resolution” is what next-generation television is all about. But, as Lars Borg, principal scientist in Adobe’s Digital Video and Audio Engineering Group, emphasizes, resolution is a subtle concept that depends on a host of factors related not only to how content is created and mastered, but also to how it is transmitted and viewed, on what device, in what room, and under what conditions. So while UHD has evolved into an umbrella term for an all-around improved television viewing experience, Borg and many others suggest that adding more vivid and realistic color to the content consumers view, and upgrading the dynamic range capabilities of both that content and the viewing infrastructure, are the improvements consumers are most likely to perceive and value.
The industry, of course, has addressed this issue with the 2012 recommendation from the International Telecommunication Union (ITU) known as BT.2020, which widens the color space to allow for more saturated, lifelike colors and uses more bits per sample. However, as Borg points out, that development arrives at a time when television viewers are experiencing content in a wider range of formats and platforms than ever before, in all the different flavors of standard definition, HD, and UHD. Thus, he says, the industry still faces the question of how to convert older content from HD to UHD with the same quality it has long achieved when converting between standard-definition content and HD. Because of this ongoing conversion requirement, in which older content must integrate seamlessly with UHD material, and vice versa, while retaining the luminance and other color characteristics its creators intended, Borg says the industry needs to ensure a consistent experience across the board, not just for brand-new content and devices.
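The color-volume mismatch Borg describes is easiest to see in the arithmetic of gamut conversion. The following Python sketch applies the widely published 3×3 matrix for converting linear-light BT.709 RGB to linear-light BT.2020 RGB (the coefficients, rounded to four decimals, are those given in ITU-R BT.2087); the function name and structure here are illustrative, not taken from any standard.

```python
# Sketch: converting linear-light BT.709 RGB to linear-light BT.2020 RGB.
# The 3x3 matrix is the commonly published BT.709-to-BT.2020 conversion
# (see ITU-R BT.2087); values are rounded to four decimal places.

M_709_TO_2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0113],
    [0.0164, 0.0880, 0.8956],
]

def rgb709_to_rgb2020(rgb):
    """Convert one linear BT.709 RGB triple to linear BT.2020 RGB."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in M_709_TO_2020)

# Reference white survives the conversion (each matrix row sums to ~1.0),
# while a pure BT.709 primary lands well inside the wider BT.2020 gamut.
white = rgb709_to_rgb2020((1.0, 1.0, 1.0))
red = rgb709_to_rgb2020((1.0, 0.0, 0.0))
```

Note that white maps to white, while pure BT.709 red lands at roughly (0.63, 0.07, 0.02) in BT.2020 coordinates, well inside the wider gamut; this is also why simply reinterpreting legacy BT.709 code values as BT.2020, without converting, oversaturates the picture.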
“UHD offers a wider color gamut and more saturated colors, but the dynamic range is no different than HD,” Borg says. “I presented a paper at the [SMPTE 2014 Annual Technical Conference] about the challenges of doing HD to Ultra HD conversions, because the wider color space is a problem—the color volumes do not match. We need a new awareness regarding the conversions that will have to be done for the foreseeable future. Many U.S. households still do not have HD televisions, some channels are not in HD yet, and many viewers pay a premium for HD channels. Given that fact, and that the HD standard is 20 years old, even though we are already [selling UHD televisions to consumers], I have to assume that for many more years, we will still be distributing SD versions, HD versions, and higher dynamic range versions of the same program, in parallel, and that will be painful for everyone involved. So we need to nail down how best to convert this content to [the wider color space] needed for viewing [on UHD televisions].”
In his SMPTE Technical Conference presentation, Borg proposed that the ITU’s BT.1886 flat-panel display recommendation is the proper standard to use when converting between television color volumes, such as from HD to UHD. He says it yields more accurate color conversions of HD material to UHD formats, so that the material can be properly displayed and observed by viewers without further intervention by a colorist to obtain the desired color match.
“Until now, everything has been in the Rec. 709 color space for television conversion boxes because SD to HD are all in the same color volume,” Borg explains. “Therefore, a lot of engineers, including me for a long time, didn’t realize that the Rec. 709 tone curve is really only for the camera—actually a fictitious camera at that—and does not apply after the camera has done its job. After that, there is no appropriate use for that tone curve. It was OK to convert in HD and SD using that tone curve in the past, because the color volumes and encodings were the same for HD and SD. Now that we are getting to UHD, using the right tone curve becomes critical when going from a narrow gamut to a wider gamut. For the longest time, people only knew about the Rec. 709 tone curve, so that was the one they used. This doesn’t work any longer.
“In 2011, once CRTs were gone, and providers wanted to be able to view things the way they looked when we had CRTs, BT.1886 was published. BT.1886 is the reference that tells you how displays behaved in the CRT era. As we started doing conversions between different color volumes, and needed to match displays of different volumes, it was recognized that the Rec. 709 tone curve was not applicable, and that BT.1886 was the better choice. If we decode such content with the camera curve [Rec. 709] instead of the display curve [BT.1886], the decoded content can look washed out or discolored. This was a big reason why I made my presentation at the SMPTE Annual Technical Conference—to share the knowledge of making displays match when we do color conversions with new color volumes.”
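To make the camera-curve-versus-display-curve distinction concrete, here is a minimal Python sketch, assuming a display with black at 0 and reference white at 1 so that the BT.1886 EOTF reduces to a pure 2.4-power gamma. The function names are illustrative; the constants come from the published Rec. 709 and BT.1886 formulas.

```python
# Sketch contrasting the Rec. 709 camera curve (an OETF, applied at capture)
# with the BT.1886 display curve (an EOTF, applied at the screen).
# Assumes display black = 0 and reference white = 1, so the BT.1886 EOTF
# reduces to a pure 2.4 gamma.

def oetf_rec709(L):
    """Rec. 709 opto-electronic transfer function (scene light -> signal)."""
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

def eotf_bt1886(V):
    """BT.1886 electro-optical transfer function (signal -> display light)."""
    return V ** 2.4

def inverse_oetf_rec709(V):
    """Invert the camera curve -- the wrong way to decode for display."""
    return V / 4.5 if V < 4.5 * 0.018 else ((V + 0.099) / 1.099) ** (1 / 0.45)

# A mid-grey scene value of 0.18 encodes to a signal of about 0.41.
# The screen (BT.1886) renders that signal at about 0.12, while inverting
# the camera curve returns the original 0.18 -- the two curves are not
# inverses, so decoding with the wrong one distorts converted content.
signal = oetf_rec709(0.18)
via_display_curve = eotf_bt1886(signal)        # what the screen actually shows
via_camera_curve = inverse_oetf_rec709(signal)  # the original scene value
```

The gap between the two decoded values is the washed-out or discolored look Borg describes when the camera curve is used in place of the display curve during color-volume conversion.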
Beyond Borg’s belief that BT.1886 should be used with SMPTE’s HDR mastering reference display standard, ST 2084:2014, however, lies the reality that the multiformat/multiplatform universe has made the versioning process a necessary, but often laborious, evil for content creators. And so, Borg reminds us, the industry wants to streamline that process, including managing colorimetry for different platforms and formats. A key to making that happen lies in the world of metadata—another initiative he is intimately involved with as Chairman of SMPTE’s 10E Dynamic Metadata Drafting Group on the subject of Color Transforms of HDR and WCG [Wide Color Gamut] Images.
That project’s goal is to ensure that content captured and mastered specifically for HDR television carries with it the metadata necessary to preserve the content’s original creative intent when the content is fitted to other color volumes for different formats, displays, and distribution platforms.
“This project is very interesting for color conversions, because if you do one conversion where [color] is controlled by a colorist, he can add metadata from that conversion so that you can put that metadata into your HDR master, which you can then edit later, replacing scenes, cutting scenes, or cutting frames to change timing and so on,” he says. “And as you do that, you still have your HDR to HD grade embedded in there. So, from a management point of view, you need only one master. You might use [the Interoperable Master Format, or IMF] to create multiple versions of it, but you still have only one master to deal with as a single item, which makes everything so much simpler. So one goal with the Dynamic Metadata Project is to reduce the number of masters being circulated for a production. Carrying the dynamic metadata from the HDR version to the HD conversion or other television versions will simplify the versioning pipeline because you will not always have to color grade after you have graded the HDR version. You can color grade once and have that grade carry with you after you make all the different HD versions. It would also be a benefit for archiving—you would only need to archive one version.”
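The one-master workflow Borg describes can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the dictionary layout, the per-scene gain-based “trim,” and the function names are mine for illustration only, and do not represent any real metadata format or the draft standard itself.

```python
# Hypothetical sketch of the "one master plus dynamic metadata" idea:
# the HDR master carries per-scene trim metadata from the colorist's
# HDR-to-HD pass, so an HD version can be derived without regrading.
# The structure and the simple gain-based trim are illustrative only.

hdr_master = {
    "scenes": [
        {"name": "scene_01",
         "pixels": [0.9, 1.4, 2.0],        # linear HDR values (can exceed 1.0)
         "hd_trim": {"gain": 0.5}},         # colorist's embedded trim decision
        {"name": "scene_02",
         "pixels": [0.1, 0.3, 0.6],
         "hd_trim": {"gain": 1.0}},
    ],
}

def derive_hd_version(master):
    """Apply each scene's embedded trim metadata to produce the HD grade."""
    return [
        {"name": s["name"],
         "pixels": [min(p * s["hd_trim"]["gain"], 1.0) for p in s["pixels"]]}
        for s in master["scenes"]
    ]
```

Because the trim decisions travel inside the master, re-editing the master (replacing or retiming scenes) automatically carries the HD grade along, which is the archiving and versioning simplification Borg points to.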
Borg hopes the Dynamic Metadata standard will be ready for publication by the end of 2015, but whenever it is ready, his main point is that strict metadata consistency holds the key to improved HDR and color-gamut quality in content as it travels from its original, pristine master to its various broadcast destinations across ever-expanding platforms. Such a protocol could, in theory, also be future-proofed to work with later broadcast formats, such as UHD-2, 8K, and beyond.
“The framework we are putting into place in this project should not be limited to any specific broadcast or television format,” Borg says. “2K, 4K, 8K, 16K, should you want to go there, fine. Right now, we have 8-bit, 10-bit, 12-bit encoding, but if you want to have floating-point encoding or 16-bit, you should be able to do that too. It should be very flexible for all kinds of mastered content, and not tied to any specific content standard. You can also convert to any possible output format, as well. It would be interesting to complement it later on with, for example, pan-and-scan if you go from 8K down to 2K, and you wanted to have pan-and-scan instead of just scaling an entire image—we could accommodate that, as well, in the same framework. The concept seems future-oriented and should not run out of steam anytime in the foreseeable future.”
Another interesting outgrowth of this larger initiative, according to Borg, is that the drive to reduce the number of masters, and to color grade and finish content for the best possible HDR presentation, has the potential to bring color-grading and other mastering tools for UHD broadcast content closer to the world of cinema. That market is making its own transition in this area, thanks to the recent launch of the Academy Color Encoding System (ACES), another project Borg has been involved with in recent years. Philosophically, ACES is designed to accomplish in the filmmaking world the same type of goal the Dynamic Metadata project hopes to achieve in the broadcast world: to preserve the creative intent of content creators within metadata that the content carries to every format and platform.
“ACES aims to change the workflow for cinematic content, but it also brings [filmmaking] closer to what we need for HDR TV,” Borg suggests. “So we could see some synergy between the ACES system that primarily targets cinema, and production systems that target television. Whether a movie is made for TV or made for the big screen, that line is getting blurred. So whether you are color grading for cinema or color grading for HDR TV, the grading experience is going to be similar.”
Hollywood's HDR Plans
Speaking of higher dynamic range, how Hollywood studios can best move forward with implementing HDR for movie titles on both the large and small screens was a major topic of discussion at NAB 2015 this month. In NAB coverage from The Hollywood Reporter, 20th Century Fox CTO Hanno Basse, president of the new studio and manufacturer industry coalition known as the UHD Alliance, said during an NAB panel that he fully expects Fox, and the rest of the industry, to implement HDR color-graded versions of all their new films “pretty quickly.” The article also quotes Basse as saying that an interoperable HDR standard for the motion-picture industry could realistically be created later this year, in time to coincide with the Blu-ray Disc Association’s expected introduction of UHD Blu-rays with HDR support. A more complicated issue, according to the article, is how to monetize HDR through existing studio film libraries, because studios will need to decide how far into their catalogs to go in converting and grading older content to optimize it for UHD/HDR presentation. One studio official suggests in the article that a good place to start would be with every movie already scheduled for remastering, but many questions remain in this area.
Moore's Law Turns 50
A recent article in the San Jose Mercury News examined the importance of the 50th anniversary of an article published in Electronics magazine on April 19, 1965, written by a then-unknown Silicon Valley engineer named Gordon Moore, in which he postulated that computer chips would double in capacity every year over the course of the next decade at little or no cost to the people making them—a prediction he amended 10 years later to suggest that chip capacity would continue doubling at least every two years indefinitely. What seemed to be nothing more than one man’s opinion at the time turned into a hardcore axiom that, as the article points out, essentially dominates technology development cycles in Silicon Valley to this day—an axiom that was soon named “Moore’s Law” by a Caltech professor. The article analyzes the relevance of Moore’s Law to the technology sector over the last half century, and features comments from the now-retired, 86-year-old Moore, who went on to co-found Intel. Among other things, the article points out that the prediction’s accuracy forced companies to keep pace with it or risk losing market share, thus holding down the costs of consumer electronics products. It also points out that revolutionary medicines and many other types of breakthroughs were made possible by constantly evolving computer chips, which were necessary to model and design those breakthroughs in the first place. One expert quoted in the article, however, does suggest that the doubling of chip power will eventually slow from every two years to, more likely, every four or five years.
Life Without SMPTE
At NAB’s Technology Summit on Cinema, which SMPTE co-produces in partnership with NAB, SMPTE announced a new campaign designed to increase public awareness of the organization’s contributions to the media and entertainment industry. The campaign, called “Life Without SMPTE,” debuted at the NAB event with a video short demonstrating the important role that SMPTE Time Code plays in the media world. That video was designed to be the first in a series of short clips to be released periodically as part of the campaign on a special website. Over time, the initiative will attempt to promote and explain the benefits of SMPTE standards to the creators and users of everyday media. A special hashtag, #LIFEWITHOUTSMPTE, is also part of the campaign. It is being used as a call to action to encourage media professionals to share photos, video clips, and social media posts illustrating what their lives would be like without SMPTE standards.