Annie Chang of Walt Disney moderated a broad panel on this hot-button topic. For many attendees, HDR and wide color gamut matter more to image quality than 4K resolution -- yet they have received much less attention in the media and from consumers. Matthew Goldman, from Ericsson, started us off by defining HDR -- first as one of the five key technologies of UltraHD. However, he said many people use HDR (or HDR+) to mean High Dynamic Range plus Wide Color Gamut (WCG) and 10-bit sampling. Whichever definition is used, he wanted to make clear that HDR is not resolution dependent, and specifically does not require 4K resolution. That's particularly important in mobile, where small screens might still be able to take advantage of HDR. While it might go without saying, Goldman also reiterated that HDR doesn't mean a brighter display -- it means more contrast. Using charts of the color gamuts for both HD (Rec. 709) and UltraHD (Rec. 2020), Goldman demonstrated that HDR and WCG are closely linked in creating a better image.
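As a rough illustration of the gamut difference Goldman charted, the CIE 1931 xy chromaticities of the Rec. 709 and Rec. 2020 primaries are published in the respective ITU-R recommendations, and the triangles they enclose can be compared directly. A back-of-the-envelope sketch (not part of the panel):

```python
# Compare the chromaticity-triangle areas of Rec. 709 and Rec. 2020
# using the standard CIE 1931 xy primaries from the ITU-R documents.

def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle in the xy plane."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Red, green, blue primaries (x, y) per ITU-R BT.709 and BT.2020.
REC_709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
REC_2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

ratio = triangle_area(*REC_2020) / triangle_area(*REC_709)
print(f"Rec. 2020 gamut is {ratio:.2f}x the area of Rec. 709 in xy")
# -> roughly 1.9x (coverage figures quoted elsewhere often use the
#    CIE 1976 u'v' diagram instead, where the gap is even larger)
```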

Michael Zink, from Warner Bros., walked us through an analysis of the movie LEGO in HDR, showing that high light levels were used in only a small number of frames, and only for certain highlights. So the HDR version of the movie wasn't significantly brighter overall than the SDR version.
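Zink's kind of analysis boils down to a simple per-frame statistic: how many frames actually contain pixels above a given luminance threshold. A minimal sketch of that measurement (the per-frame values and the 100-nit threshold here are made-up placeholders, not the LEGO numbers):

```python
# Count how many frames make real use of HDR highlights: a frame
# "uses" HDR here if its peak luminance exceeds the chosen threshold.

def hdr_highlight_fraction(frame_peaks_nits, threshold_nits=100.0):
    """Fraction of frames whose peak luminance exceeds the threshold."""
    if not frame_peaks_nits:
        return 0.0
    bright = sum(1 for peak in frame_peaks_nits if peak > threshold_nits)
    return bright / len(frame_peaks_nits)

# Hypothetical per-frame peak luminance values (nits) for a short clip.
peaks = [48, 52, 60, 55, 820, 47, 51, 49, 1200, 50]
print(f"{hdr_highlight_fraction(peaks):.0%} of frames exceed 100 nits")
# -> 20% of frames exceed 100 nits
```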

Steven Robertson, a Google software engineer, pointed out that content streamed to computers and devices has a major advantage in adapting to new technologies: as devices arrive with higher-resolution or wider-gamut displays, the content can be automatically adapted as it is delivered. This is particularly relevant since many of the latest phones have displays of very similar quality to high-end TVs. Robin Atkins, from Dolby Labs, agreed with Robertson, and said they have found that Vudu and other OTT services were able to incorporate new technologies very rapidly since they have end-to-end control over their content delivery.

Panelists agreed that another challenge is viewing conditions -- both optimizing them and, where they can be determined, using metadata to adapt the content stream to them. Atkins also walked us through how dynamic (content-dependent) metadata can be used to map the color volume (gamut and tone) of the input scene to particular displays. For example, some scenes are designed to be dark (perhaps a night-time scene) and shouldn't be mapped into a more typical rendering, while others (perhaps a discussion between characters where facial expressions are important) are designed to always be easily viewable and should be remapped. There are already over 50 titles available for streaming that use Dolby Vision's "Smart Content" dynamic metadata.
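The idea Atkins described can be sketched as per-scene metadata steering the tone-mapping step. This toy version (the metadata fields and the linear curve are illustrative inventions, not the Dolby Vision algorithm) preserves scenes flagged as intentionally dark and remaps others toward the display's capability:

```python
# Toy content-dependent tone mapping: each scene carries metadata that
# tells the display how (or whether) to remap its luminance.

def tone_map(pixel_nits, scene_max_nits, display_max_nits, preserve_intent):
    """Map a pixel's mastered luminance to what the display shows."""
    if preserve_intent:
        # Creative intent (e.g. a night-time scene): clip, don't rescale.
        return min(pixel_nits, display_max_nits)
    # Otherwise scale the scene's range into the display's range so
    # important detail (e.g. faces in a conversation) stays viewable.
    return pixel_nits * (display_max_nits / scene_max_nits)

# A dark scene keeps its levels; a bright scene is remapped to fit
# a 300-nit display.
print(tone_map(2.0, scene_max_nits=50, display_max_nits=300,
               preserve_intent=True))    # -> 2.0
print(tone_map(1000.0, scene_max_nits=1000, display_max_nits=300,
               preserve_intent=False))   # -> 300.0
```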