SMPTE Newswatch

Hot Button Discussion

Evolving into HDR Workflows
By Michael Goldman  

The broadcast industry’s ongoing push toward incorporating 4K/high-dynamic-range (HDR) imaging capabilities into workflows has led to stunning advancements, giving content creators the ability to create such imagery more efficiently and consumers the ability to view it more easily. However, suggests Mike Waidson, an application engineer for the Video Business Division of Tektronix, the United States has, for now, settled into a hybrid situation in which traditional broadcasts of HD/4K/HDR content are possible, but the challenge of how best to deliver this array of content to the home remains significant. For this reason, 4K/HDR content has so far been largely the province of streaming services.
 
“Certain countries have already started to broadcast 4K/ultra-high-definition [UHD] and [a certain amount of] HDR content,” Waidson relates. “If you look at the UK—British Telecom and Sky TV have already been broadcasting 4K/UHD and some HDR content. Closer to home, other companies, like Rogers Cable, have started to offer 4K for live sports and movies in Canada, while DirecTV this year is offering 4K and some HDR content as well. So, in certain areas, the foot is in the water in terms of [routinely broadcasting in 4K UHD/HDR]. However, there are still challenges to overcome in transmission systems and in cable infrastructure accustomed to carrying ATSC HD content, which may constrain the ability to go to 4K/UHD and HDR.
 
“Many may feel that if you have an HDTV and I show you true 4K/UHD content, or content up-converted from HD to 4K, most customers would have a hard time discerning the difference between what was true 4K/UHD and what was true HD. That’s because many of today’s televisions/displays have very good up-conversion and rate conversion, so you can get an excellent image with this kind of processing.
 
“Streaming services like Amazon and Netflix are already offering 4K/UHD content, since their workflow and delivery model allow them to take advantage of compression technologies and increased bandwidth to the home. While the amount of data for 4K/UHD may be larger, the streaming files are processed similarly, and a 4K streaming device or television can then display the 4K/UHD or HDR content.”

Therefore, since there are some constraints in bandwidth and transmission processes for 4K/UHD delivery to the home, Waidson suggests “the impact of 4K/UHD on its own may not justify the transition. The trifecta of 4K, HDR, and wide color gamut (WCG), however, provides a ‘window on the world,’ bringing more realism and vivid colors to the viewing experience. So all three of these features together have a high impact for the viewer, with images producing bright specular highlights, smooth shadow detail, and vivid colors. Broadcasters have been carrying out trials of 4K/UHD and HDR with WCG, such as at the recent Winter Olympics, and I expect that over the course of this next year, more content will become available. At this moment in time, we have two extremes for viewers: on the one hand, people streaming lower-resolution content to their phone or tablet, and on the other, larger TV sets/displays with higher resolutions. Many of these high-end displays are offering HDR and wider color gamut with increased brightness. Together, 4K/UHD, HDR, and WCG can produce stunning images that bring the content to life and make it appear as though what we have been watching in standard dynamic range (SDR) was in a cloud or fog.”
 
For live events, on the front end, Waidson emphasizes that modern digital cameras “already have really wide dynamic range.” Therefore, it can be relatively straightforward “to capture live material in a log curve for 4K/UHD directly from the camera, and then use various LUTs to convert to HDR or SDR, then down-convert to HD or whatever format is being broadcast.” At that point, he adds, content creators are making choices about what dynamic range format they prefer to use for capture.
 
“If you choose a Sony camera, then those cameras typically are using one of a series of S-Log curves—today’s high-end cameras primarily use the S-Log3 curve,” he explains. “If you are using a Grass Valley camera, then they provide you with a hybrid log gamma [HLG] curve or a PQ [Perceptual Quantizer] curve directly. Typically, these types of cameras also can provide a standard dynamic range output simultaneously. They do this in the camera control unit or other devices, giving you an ability to have multiple outputs. So that means an SDR or HDR output can come directly from the camera. Alternatively, the log curve output from the camera can be passed through a converter to produce SDR 4K. Another converter can take the same log curve from the camera and produce HDR 4K. Then, a down-converter can be used to produce an HD version of the content, in either SDR or HDR.”
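To make that conversion step concrete, here is a minimal sketch of applying a 1D conversion LUT to a log-encoded frame, the kind of operation Waidson describes converters and camera control units performing. The LUT contents below are a placeholder curve, not an actual vendor transform, and real conversions typically also involve 3D LUTs for color-gamut mapping.

```python
# A minimal sketch of the LUT conversion step: a 1D LUT maps each normalized
# log-encoded sample to an output value by linear interpolation.
import numpy as np

LUT_SIZE = 1024
lut_in = np.linspace(0.0, 1.0, LUT_SIZE)   # normalized input code values
lut_out = lut_in ** 2.2                    # placeholder curve, NOT a real log-to-SDR LUT

def apply_1d_lut(frame: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply a 1D LUT to every sample of a normalized frame via interpolation."""
    return np.interp(frame, np.linspace(0.0, 1.0, lut.size), lut)

log_frame = np.random.rand(4, 4, 3)        # tiny stand-in for a log-encoded frame
sdr_frame = apply_1d_lut(log_frame, lut_out)
```

In a production chain, the same log output would feed several such converters in parallel, one LUT toward SDR and another toward PQ or HLG, which is what lets a single camera feed serve multiple deliverables.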

Changes in workflow for the capture and processing of 4K/UHD HDR images are “somewhat simpler,” he adds, than those the industry faced when it tried 3D live events. “Those required two trucks, one to process the 3D content and the other to produce 2D content,” he says. “Today, it’s relatively easy for a 4K truck to do HDR, SDR, HD/SDR, or HD/HDR. Obviously, you need the uplinks for 4K/UHD, which require higher data rates for encoding 4K content, as well as your regular HD-SDR transmission path. While the workflows continue to be optimized, ultimately, you could switch everything to 4K as a workflow, uplink 4K to the network, and then let the network provide HD and 4K/UHD outputs simultaneously. Once you have invested in a 4K truck, you can do multiple workflows.”
 
At this point, Waidson emphasizes, the basic flavors of standardized HDR are well established within the industry. The two primary approaches are SMPTE ST 2084, built on the foundation of Dolby’s Perceptual Quantizer (PQ) curve, and ARIB STD-B67, which grew out of the Hybrid Log-Gamma (HLG) work of the BBC and NHK. Two years ago, Recommendation ITU-R BT.2100 was approved; it documents image parameters for HDR television and includes specifications for both PQ and HLG.
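For reference, both curves can be written out directly from the published specifications. The sketch below implements the PQ EOTF of ST 2084 (code value to absolute luminance) and the HLG OETF of ARIB STD-B67/BT.2100 (scene light to signal), using the constants those documents define.

```python
# The two BT.2100 transfer functions named above, per their published constants.
import math

# PQ (SMPTE ST 2084) constants
M1 = 2610 / 16384         # 0.1593...
M2 = 2523 / 4096 * 128    # 78.84375
C1 = 3424 / 4096          # 0.8359375
C2 = 2413 / 4096 * 32     # 18.8515625
C3 = 2392 / 4096 * 32     # 18.6875

def pq_eotf(code: float) -> float:
    """PQ EOTF: normalized code value (0..1) -> display luminance in cd/m^2 (peak 10,000)."""
    p = code ** (1 / M2)
    return 10000 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

# HLG (ARIB STD-B67) constants
A = 0.17883277
B = 1 - 4 * A                    # 0.28466892
C = 0.5 - A * math.log(4 * A)    # 0.55991073

def hlg_oetf(light: float) -> float:
    """HLG OETF: normalized scene light (0..1) -> signal value (0..1)."""
    return math.sqrt(3 * light) if light <= 1 / 12 else A * math.log(12 * light - B) + C

print(round(pq_eotf(0.58)))     # ~202 nits; ITU-R BT.2408 puts HDR reference white at 203 nits
print(round(hlg_oetf(1.0), 3))  # 1.0 at peak scene light
```

The practical difference shows up here: PQ codes map to absolute display luminance, while HLG stays scene-referred and relative, which is part of why, as Waidson notes later, HLG tends to appear in live workflows and PQ in mastered content.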
 
But Waidson says there have been “new developments” with various HDR approaches in terms of refining workflows for live events. “There have been a variety of tests with people evaluating HDR for live broadcast events, such as sports. Over the last six months, there have been several of these tests to prove the workflow compared to an SDR workflow. I’ve been at a couple of those tests and seen the difference between a 4K/SDR image and a 4K/HDR image. The HDR experience was much brighter and more vivid—you could see the shadows and details. Things that would have been clipped or invisible on the SDR image were now more intense. Those tests made me feel we will certainly start to see the creation and availability of more HDR events this year.”
 
Still, Waidson adds, “while the standards are coming along, they are not necessarily all homogenized into one complete suite. So there is still work to be done in developing the standards further. Another document, from the [Digital Production Partnership—DPP] in the UK, for instance, offers recommendations for indoor HDR and outdoor HDR—helping to establish some guidelines for HDR live events versus studio settings.”
 
“Therefore, I have seen Hybrid Log-Gamma frequently used for live events, with PQ in the mix for streaming services,” he adds. For UHD Blu-ray, the HDR10 format specifies PQ for authored content, while in Japan there is a recordable Blu-ray format that uses HLG.

Further, SMPTE’s new ST 2094 suite of documents delves deeper into the HDR universe by addressing dynamic metadata for color volume transformation, he adds. As defined by SMPTE, a “color volume” is the three-dimensional palette of all reproducible colors at all allowable intensities in a given display. This three-dimensional representation subsumes the notions of HDR and WCG, as it defines both the color gamut and the dynamic range of a display. The goal of the initiative was to standardize a process for adapting images to best suit a particular display’s color volume, he says.

“Obviously, PQ is based on the work that Dolby did, and that led to their process, Dolby Vision,” he explains. “This can carry additional metadata within the stream to adapt the television display, optimizing the content for the target color volume on a scene-by-scene or even frame-by-frame basis. One of the issues is that some TV sets have a 500-nit color volume, while others have a 1,000-nit color volume, others still a 1,200-nit color volume, and the values keep improving with new models. So the question becomes: when you get your HDR content, how can you best optimize it for your display device? The ST 2094 metadata can be sent along with the image data and would assist the display in processing the image to optimize it for your display’s particular color volume.”
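The shape of that problem can be illustrated with a toy example. The roll-off below is not the ST 2094 transform or Dolby Vision, just a sketch of the general idea those documents standardize: content mastered to one peak luminance is compressed to fit a less-capable display while mid-tones and shadows pass through untouched.

```python
# A toy illustration of display adaptation: NOT the ST 2094 algorithm,
# just the general shape of the problem it standardizes.

def tone_map(nits: float, mastered_peak: float, display_peak: float) -> float:
    """Pass mid-tones through; compress highlights above a knee to fit the display."""
    knee = display_peak * 0.75                # arbitrary knee chosen for this sketch
    if nits <= knee:
        return nits                           # shadows and mid-tones unchanged
    # Compress [knee, mastered_peak] into [knee, display_peak]; t = 1 hits display peak.
    t = (nits - knee) / (mastered_peak - knee)
    return knee + (display_peak - knee) * (2 * t) / (1 + t)

# A 1,000-nit highlight mastered for 4,000 nits, shown on a 500-nit panel:
print(round(tone_map(1000.0, mastered_peak=4000.0, display_peak=500.0)))  # -> 412
```

Dynamic metadata exists so the display does not have to guess these parameters: per-scene or per-frame statistics travel with the content, and the display applies its own adaptation knowing both the mastering peak and its own capability.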
 
In any case, beyond various workflow tweaks, there are practical, creative considerations that artists have to take into account when capturing and processing 4K/UHD/HDR material, including the fact that the white level changes from the traditional values to which colorists, editors, and camera operators are accustomed.
 
“That is one of the differences relative to the workflow,” he says. “Each of the curves uses the same bits, so the same digital values are used to represent the video signal. However, the bits are used more efficiently by the various log curves, from S-Log3, Log-C, C-Log, and V-Log to a PQ curve or HLG curve. What that means in the workflow is that the white level changes. When we are talking about SDR, our white level is obviously 100%, represented by a 10-bit code value of 940. Then, when we move to S-Log3, our white level moves from 100% to somewhere around 61%. This is a change as far as what a colorist or camera operator is used to gauging for the white level in a scene. Typically, the 90% white level and the 18% reflectance level become, within a scene, [the reference for] where to set the levels. And then, we want a certain amount of the image going above that white level to create specular highlights. With these curves, we give more bits to the dark grays and blacks in the lower region, which produces smoother blacks. So, while the white level sits at 61% and the gray point is somewhere in the 41% range, the span from the 18% gray point down to the black level, at a code value of 95 in S-Log3, is the range used for the standard-dynamic-range portion of the image, up to 100 nits. The levels above the white level are then used for the specular highlights, which create the overall dynamic range in the image.”
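Those numbers can be checked directly against Sony’s published S-Log3 OETF. The small worked example below reproduces the levels Waidson quotes: a code value of 95 for black, the 18% gray point near 41%, and 90% white near 61% of the narrow-range video scale on which SDR white is code 940 (100%).

```python
# Worked example of the S-Log3 levels quoted above, using Sony's published
# S-Log3 OETF (output expressed as 10-bit code values, 0..1023).
import math

def s_log3_oetf(reflectance: float) -> float:
    """Sony S-Log3 OETF: scene reflectance (0.18 = 18% gray) -> 10-bit code value."""
    if reflectance >= 0.01125:
        return 420.0 + math.log10((reflectance + 0.01) / 0.19) * 261.5
    return reflectance * (171.2102946929 - 95.0) / 0.01125 + 95.0

for label, x in [("0% black", 0.00), ("18% gray", 0.18), ("90% white", 0.90)]:
    code = s_log3_oetf(x)
    video_level = (code - 64) / 876   # narrow range: code 64 = 0%, code 940 = 100%
    print(f"{label}: code {code:.0f}, video level {video_level:.0%}")

# 0% black: code 95, video level 4%
# 18% gray: code 420, video level 41%
# 90% white: code 598, video level 61%
```

Everything above code 598, roughly the top 40% of the narrow-range scale, is reserved for the specular highlights that give the image its extended dynamic range.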

“Thus, one of the key issues facing colorists and camera operators working on high-dynamic-range imagery is becoming familiar with these different white levels and gray points,” he suggests. “Each of the curves, whether it is a camera curve or an HLG or PQ curve, has a different value where white and 18% gray would be. One of the challenges is being able to recognize the type of curve being used, since there is currently no metadata to assist in determining the format.”
 
The challenge at the moment, according to Waidson, is in developing parallel HDR and SDR processes, which can lead productions to work primarily within the constraints of an SDR-style workflow.
 
“The camera operator at live events will often still adjust the camera based on the white levels of the SDR image,” he continues. “That is because of the complexity of dealing with all the different white points of these log curves, and because, today, they are providing that content to a majority of customers who are watching SDR images. Eventually, the workflow will adapt to optimize for HDR and determine the appropriate white level. But, for right now, the workflow is still predominantly an SDR kind of workflow for live events, with a technical operator monitoring the HDR to make sure it is the best it can be. As more viewers start watching HDR content, that will change, so that adjustments happen more within the HDR domain and less in the SDR domain.
 
“So some challenges remain: getting used to the different curves from various cameras and mastering in HLG or PQ; determining how much of the content contains the specular highlights; and knowing where your white level and 18% gray point sit within the image. Still, the benefit of 4K/UHD with high dynamic range and wide color gamut is that they can bring images to life and produce astounding imagery, from details in the shadows that otherwise would have been crushed to bright specular highlights that would otherwise have been clipped, giving the overall image its dynamic range.”

 

News Briefs
SMPTE Student Award Scholarship

SMPTE recently announced that it is now accepting applications for its 2018 Student Paper Award and the Louis F. Wolf Jr. Memorial Scholarship. Both honors are designed to help students work within the motion-imaging industry and meet and collaborate with industry leaders. The Student Paper Award recognizes an outstanding paper on technical issues in the motion-picture, TV, or photography fields, submitted by a SMPTE student member; the winning paper will be published in an upcoming issue of the SMPTE Motion Imaging Journal. The Louis F. Wolf Jr. Memorial Scholarship, meanwhile, provides $5,000 toward tuition at the winning student’s university. It is open to SMPTE student members who are full-time undergraduate or graduate students enrolled at an accredited university and majoring in engineering, science, or technologies or theories involving motion imaging, sound, metadata, or workflows. Submission requirements, application forms, and more information about all of SMPTE’s student initiatives are available on the SMPTE website.

AI Fears
A recent report from a coalition of technology, academic, and public-interest organizations warns of the looming potential for harmful consequences of the artificial intelligence (AI) revolution. As explained in an article from E-Commerce Times, people and organizations intent on causing harm will be able to use AI technology to “expand the scale and efficiency of their attacks,” according to the report. Among the malicious actions AI can help facilitate, the report says, are interfering with the operation of drone systems and driverless cars, and making it easier to manipulate social media and invade privacy. The report suggests that the sophisticated algorithms that are part of the AI paradigm could allow bad actors to more accurately analyze the behavior and beliefs of consumers, or of specific people or organizations, to determine the best way to disrupt or misdirect them. The report was put together by members of several organizations, including the Electronic Frontier Foundation, the Future of Humanity Institute, the University of Oxford, the University of Cambridge, the Center for a New American Security, OpenAI, and others.

History of the Crane
A comprehensive article from the ProVideo Coalition recently detailed the fascinating history of the camera crane. The report, by Richard Wirth, traces the crane’s evolution from a cumbersome tool requiring colossal human resources to attempt new kinds of shots into the state-of-the-art specialty filmmaking tool it has become today. It reminds us that the first crane shot in cinema history is generally accepted to be a shot from D.W. Griffith’s 1916 silent epic, Intolerance, which required engineers to help the director figure out a way to move his camera to the top of a gigantic Babylonian set and then return to ground level. Essentially, the article states, the filmmakers built a “large tower” on a railroad track “spanning two railway wagons” with an elevator in it to acquire the shot that, the article suggests, drew “audible gasps” from audiences when first seen on a cinema screen. From there, the article details the entire history of cinematic cranes, right up to the modern day.