
Hot Button Discussion

Why Dynamic Metadata Matters
By Michael Goldman  

In the year and a half since he last spoke with SMPTE Newswatch about efforts to standardize how content color information is managed and converted when mastering high dynamic range (HDR) content for a wide range of platforms and formats, Lars Borg, principal scientist in Adobe’s Digital Video and Audio Engineering Group and Chair of one important SMPTE workgroup in this arena, suggests the ball has moved not only forward but also far and wide, particularly where dynamic metadata is concerned. He concedes, though, that many people find this a difficult issue to understand, in the sense that they don’t know exactly how to define dynamic metadata.

“Dynamic metadata is a new concept because it is something we didn’t need to worry about in the past,” Borg explains. “If you think about regular HDTV in the past and what we call the color volume—what colors could be applied [to content], the chromaticity, the brightness—that color volume for media was the same as the color volume for the display. The display could show any color [present] in the media, and the media could carry any color that could be [visible] on a display. There was a one-to-one mapping between the media encoding and the display, but that is no longer the case in the world of ultra-high-definition. With [BT.2020] wide color gamut, the BT.2020 media container can carry more saturated colors than can be displayed on the current set of wide color gamut displays. Current wide color displays can show colors up to the P3 color gamut, but that is still smaller than the BT.2020 color gamut. So the media container is bigger than the display color volume, and if you add HDR to that, it is even more.

“For example, take the HDR-10 format or [Dolby Vision’s] PQ curve, which go up to 10,000 nits of brightness. There are no [consumer] displays available that go to 10,000 nits. The brightest displays will do between 1,000 and 2,000 nits at peak. Again, you have media containers with a brightness range that exceeds the capability of a display. This is a new problem—one we never had before. We are talking about a media container that can probably hold 10 to 30 times more colors than can be represented on the screen. Thus, we had to figure out how to apply some methods—a way to do some processing to reduce the media to fit the color volume of each display.”
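To make that mismatch concrete, the short sketch below evaluates the SMPTE ST 2084 (PQ) electro-optical transfer function, which maps a normalized code value to an absolute luminance of up to 10,000 nits. The constants are the published ST 2084 values; the sample signal levels are arbitrary illustrations, not figures drawn from Borg’s remarks.

```python
# Sketch: SMPTE ST 2084 (PQ) EOTF, mapping a normalized non-linear signal
# (0.0-1.0) to absolute luminance in nits (cd/m^2), up to 10,000 nits.
# Constants are the published ST 2084 values; sample inputs are arbitrary.

m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(signal: float) -> float:
    """Convert a normalized PQ signal value to luminance in nits."""
    p = signal ** (1 / m2)
    luminance_norm = (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)
    return 10000.0 * luminance_norm

for signal in (0.25, 0.5, 0.75, 1.0):
    print(f"PQ signal {signal:.2f} -> {pq_eotf(signal):8.1f} nits")
# A full-scale PQ signal decodes to 10,000 nits, far beyond the
# 1,000- to 2,000-nit peaks of today's consumer HDR displays.
```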
 
“Dynamic metadata says, ‘I’m not even using the full range of the mastering display. Instead, for this clip, map these colors onto the target display, and do it this way.’ When you are trying to put a big plug through a small hole, you need to know exactly what part of the big plug to preserve and what part to shave off.” Borg states that dynamic metadata determines which parts of the original mastered image do not need to be used, while still making sure the image produced is a good rendering of the original. “It is how we optimize the preservation of the creative intent from the master onto the target display,” he says.
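The principle Borg describes can be illustrated with a toy example. The sketch below is not one of the standardized ST-2094 transforms; it simply shows how a hypothetical per-scene metadata record (here, just the scene’s actual peak luminance) lets a renderer map the range the scene really uses onto the target display, rather than assuming the full range of the mastering container.

```python
# Illustration only: a toy per-scene tone map guided by dynamic metadata.
# This is NOT an ST-2094 transform; it just shows the principle that the
# metadata reports the scene's real luminance range, so only that range
# (not the full 10,000-nit container) needs to fit the display.

from dataclasses import dataclass

@dataclass
class SceneMetadata:           # hypothetical per-scene record
    peak_nits: float           # brightest pixel actually used in this scene

def tone_map(pixel_nits: float, scene: SceneMetadata,
             display_peak_nits: float) -> float:
    """Map a linear-light pixel value to the target display's range."""
    if scene.peak_nits <= display_peak_nits:
        return pixel_nits                      # scene already fits: pass through
    scale = display_peak_nits / scene.peak_nits
    return min(pixel_nits * scale, display_peak_nits)

# A dim 400-nit-peak scene passes through untouched on a 1,000-nit TV,
# while a 4,000-nit-peak scene is compressed to fit; a single static value
# for the whole program would have dimmed both.
dark_scene = SceneMetadata(peak_nits=400.0)
bright_scene = SceneMetadata(peak_nits=4000.0)
print(tone_map(350.0, dark_scene, 1000.0))     # -> 350.0
print(tone_map(3500.0, bright_scene, 1000.0))  # -> 875.0
```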

As part of SMPTE’s efforts to advance dynamic metadata standards, Borg serves as Chair of the SMPTE TC-10E DG regarding Dynamic Metadata for Color Transforms of HDR and WCG [Wide Color Gamut] Images and TC-31FS for Dynamic Metadata for Color Volume Transformations and Encodings. He says that 10E addresses “the semantics and syntax” of the problem, or “what we call the essence” of dynamic metadata, while 31FS addresses its application specifically to file systems, particularly MXF.
 
The core dynamic metadata project launched by the 10E committee in 2014, ST-2094, is well on its way to having its final section published by the end of 2016 or shortly thereafter, according to Borg.
 
“That project is more about mastering than distribution,” he elaborates. “The initial project was to standardize contributions from Dolby, Philips, and Technicolor. Their contributed technologies were already available in some limited form in the industry infrastructure, but they had to be documented and developed into a standard with interoperability and the richer feature set necessary for full deployment. Then, in 2015, Samsung joined the project with a contribution of the technology they developed. Our first document was published in July of 2016, other parts followed in August and September, and we still have the final one to publish; we are in the final stages of that right now.

“It is a suite of documents covering common concepts and core components. One part is the common concepts used across all the different methods. Then we have four different application documents specifying four different dynamic metadata methods. The last part, which we are in the process of completing, determines how to encode all this into MXF and KLV [Key Length Value, as defined in SMPTE ST-336] files. KLV is the encoding format that is used in MXF files for carrying metadata.”
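For readers unfamiliar with KLV, the sketch below shows the general shape of a Key-Length-Value triplet in the style of SMPTE ST-336: a 16-byte Universal Label key, a BER-encoded length, then the value bytes. The key used here is a placeholder that begins with the standard SMPTE UL prefix; it is not an actual registered label for ST-2094 metadata.

```python
# Sketch: wrapping a metadata payload as a KLV triplet in the style of
# SMPTE ST-336 -- a 16-byte Universal Label key, a BER-encoded length,
# then the value bytes. The key below is a placeholder, not a real
# registered label for ST-2094 dynamic metadata.

def ber_length(n: int) -> bytes:
    """BER length: short form under 128 bytes, long form otherwise."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def klv_pack(key: bytes, value: bytes) -> bytes:
    assert len(key) == 16, "SMPTE Universal Label keys are 16 bytes"
    return key + ber_length(len(value)) + value

PLACEHOLDER_KEY = bytes.fromhex("060e2b34" + "00" * 12)  # illustrative only
payload = b"\x01\x02\x03\x04"                            # pretend metadata blob
print(klv_pack(PLACEHOLDER_KEY, payload).hex())
```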

Borg states that this won’t be the end of the story—there will eventually be other SMPTE projects on dynamic metadata. Preliminary work is already being launched in areas he is not yet at liberty to detail beyond the fact that they will deal with “other parts of the infrastructure where you need to adopt dynamic metadata. It is more about adopting principles and details for carrying dynamic metadata in other areas besides MXF,” he adds.

All this work, he points out, leads directly and indirectly to “organizations outside of SMPTE working on adopting dynamic metadata, as well, but all that is on their schedules, not ours, so I can’t really say what exactly they are doing or when. But it is fair to say that many people are picking up the dynamic metadata standard for inclusion in other kinds of distribution systems.”

He points to various industry developments, like the Colour Remapping Information (CRI) attribute developed by Technicolor, which eventually went into the ITU’s definition of H.265 (HEVC); ETSI TS-103433, a new standard for consumer electronic devices from the European Telecommunications Standards Institute; the Ultra HD Blu-ray (4K) specification’s reliance on HDR-10’s static metadata concept; and other steps, as “examples of how dynamic metadata is being enabled in various ways.” He expects that trend to continue to speed up, with the SMPTE standard increasingly referenced by other standards organizations now that it is on the cusp of final completion.
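By way of contrast with the per-scene approach, HDR-10’s static metadata summarizes an entire program with a handful of values, such as MaxCLL and MaxFALL. The rough sketch below shows how those two figures are commonly derived (per the CTA-861.3 definitions) from linear-light pixel values; the tiny example frames are invented purely for illustration.

```python
# Rough sketch of how HDR-10-style static values are computed per the
# common MaxCLL/MaxFALL definitions: one pair of numbers summarizes the
# entire program, whereas dynamic metadata can describe each scene.

def static_light_levels(frames):
    """frames: iterable of frames; each frame is a list of (R, G, B)
    linear-light pixel values in nits. Returns (MaxCLL, MaxFALL)."""
    max_cll = 0.0    # brightest single-pixel light level in the program
    max_fall = 0.0   # highest frame-average light level in the program
    for frame in frames:
        pixel_levels = [max(r, g, b) for (r, g, b) in frame]
        max_cll = max(max_cll, max(pixel_levels))
        max_fall = max(max_fall, sum(pixel_levels) / len(pixel_levels))
    return max_cll, max_fall

# Two tiny example "frames": one dim scene, one with a bright highlight.
frames = [
    [(80, 80, 80), (120, 100, 90)],          # dim scene
    [(900, 850, 800), (2000, 1800, 1500)],   # scene with a 2,000-nit highlight
]
print(static_light_levels(frames))           # -> (2000, 1450.0)
```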

While this is all important to the technology and standards communities, Borg states, what is even more widely important is the fact that direct and indirect impacts are now being felt throughout the creative community. This, of course, is the purpose of standardizing new technological breakthroughs in the first place, he adds.
 
“We’ve seen strong interest from studios for dynamic metadata—actually from all producers of feature content, if you generalize that studios are more than just movie studios now that entities like Netflix are in the mix,” Borg relates. “Generally, content producers of feature programming are growing interested. The reason, of course, is that dynamic metadata can be used both for packaged media, such as Ultra HD content, and for streaming content, and it is also suitable for versioning. If you have a program and you are going to produce versions for several types of output devices, different HDR displays or standard dynamic range displays, having dynamic metadata available will facilitate that versioning—you won’t have to create multiple masters. And in a production facility, you can generate different distribution masters with ease since you have the metadata available.”

Less certain, he suggests, is the extent to which the broadcast industry specifically will be interested in dynamic metadata, since it can be too complicated for typical broadcast workflows. “It would require upgrading [infrastructures] in many cases,” he says. “Therefore, it is unclear how it will be adopted. It depends on the cost for their infrastructure, and how many end-user viewing devices would include support for dynamic metadata.”

Borg emphasizes that the consumer television industry remains, for the time being, locked in something of a hybrid universe, in which systems that support one kind of format or another, but not all, are rolling out first to consumers. The business logic of trying to manufacture and sell interoperable televisions to consumers therefore remains suspect, and the logic of upgrading broadcast plants likewise remains a low priority, or simply not financially feasible, for the time being. In other words, as with many other new developments, he suggests the rollout to the broadcast industry will likely happen piecemeal, rather than in a revolutionary sweep.
 
“Today, for example, we have TVs that support HDR-10 and TVs that support Dolby Vision available in the marketplace,” he says. “We can expect there will be TVs that support dynamic metadata as defined by SMPTE, and when they arrive, that will be an advantage for streaming content. That capability could be leveraged, as could the ability to show packaged media. But it is unclear right now if it can be leveraged by all broadcasters, or only some broadcasters. They will need to have broadcast plants that can distribute [SD, HD, and Ultra HD] content. So will they be able to handle dynamic metadata? In some cases, they won’t for quite some time.

“They might have a program mixer, for instance, that uses uncompressed data and no metadata is even carried through it, and so, no metadata would go out. Some will, and some won’t. I don’t know if we would want to make [conversion of broadcast plants] a requirement, but it does seem likely it would be needed [for full adoption]. For many broadcasters, adding dynamic metadata to their pipeline might seem too costly for them. On the other hand, they are in a competitive situation. If [the viewer] can get the movie on Netflix, versus the local broadcaster, or some other streaming service that can deliver an HDR movie, and the broadcaster can’t provide it with metadata, but the streaming service can, it might appear that the movie has better quality with the streaming service. Then again, I have to pay to subscribe to the streaming service. So there may always be a quality differentiation in that respect. I can’t really say how it will work out for broadcasters.”
 
Borg will be leading a SMPTE Webcast on 12 January 2017 that will discuss the SMPTE Dynamic Metadata standard. You can register for that session here.

News Briefs
Advanced Imaging in Space

A recent TV Technology interview with Rodney Grubbs, NASA Imagery Experts Program Manager, previewing an upcoming December 8 Government Video Expo panel discussion led by Grubbs, discussed NASA’s plans to utilize many advanced imaging technologies, including virtual reality tools, for upcoming unmanned Mars, Moon, and asteroid exploration missions. In the interview, Grubbs makes the point that various combinations of VR, immersive 360 image capture, high dynamic range, and ultra-high-definition image capture tools can, and will, be used by NASA to “solve technical challenges for acquiring imagery in space.” He says such tools are important for providing “a relevant archive for the future” of historical imagery from space, and for analyzing and testing rocket systems and how they perform in space. He adds that “motion imaging can provide situational awareness for ground controllers,” and can “document near misses” when various other kinds of sensors do not detect problems, such as when debris comes near a probe. This type of technology is also valuable for the long-term study of environments that NASA may eventually want to use as possible landing spots for manned exploration. Finally, Grubbs says NASA can largely rely on modified versions of commercially available tools for such applications, and in so doing, conduct valuable tests on technologies such as optical communication links and 360-degree field-of-view tracking software that will end up having valuable applications on Earth.

Therapeutic VR
A recent Washington Post article examines how medical professionals are now incorporating virtual reality technology into therapy protocols for senior citizens grappling with dementia. The article focuses on a company called One Caring Team, started by a Bay Area physician named Sonya Kim, which has developed a tool for such patients called Aloha VR, which the company describes as an “immersive relaxation program” that assisted living facilities use to help “lonely seniors reconnect to life without pharmacological side effects.” The notion is to help the patients, in a controlled way, experience happier environments and activities than their actual life experience, allowing them to essentially forget chronic pain and anxiety for a time, while caregivers monitor their physiological state and various other metrics to learn more about their ailment and how to positively influence their mental and emotional state. The article says Kim theorizes that stimulating patients’ brains with VR “allows patients’ neural pathways to be reactivated.” A recent blog on the Nvidia website discusses the Aloha VR technology and other companies that are incorporating VR technology into healthcare in other ways.

Digital Actor Live on Stage
The Royal Shakespeare Company recently created a buzz in the theatrical world by introducing a novel, cutting-edge solution to the centuries-old problem of how best to physically stage Ariel, a spirit-like character who takes on various forms during live performances of Shakespeare’s famous play, The Tempest. The theater company partnered with Intel and The Imaginarium Studios to transform Ariel into a completely digital character for an entire production—a solution The Imaginarium Studios developed using Xsens realtime motion-capture technology. A detailed article on the FXGuide website explains how this was done as the Ariel character busily morphs from a spirit to a water nymph to a harpy during the production. The basic concept involves capturing an actor’s movements backstage via motion-capture sensors inside his costume, using tracking software to process that data and drive a digital avatar projected on stage in various ways that provide the illusion that the character is actually interacting directly with live actors. The article suggests the success of the Ariel character is “a huge development” in the theatrical world because it means that the creation of digital characters in realtime on a large scale can now be accomplished seamlessly and with high quality, helping to bring playwrights’ original visions for such characters to fruition—in this case, hundreds of years after Shakespeare first staged The Tempest in 1611.