Higher Resolution, Higher Frame Rate, and Better Pixels in Context: The Visual Quality Improvement Each Can Offer, and at What Cost

The views and opinions expressed in this article are those of the author and do not necessarily reflect the position of the Society of Motion Picture and Television Engineers - SMPTE.

This paper was originally presented by Mark Schubin at the 2014 HPA Tech Retreat.

Now that high-definition television (HDTV) has penetrated a majority of homes and TV sets in the U.S., manufacturers, program distributors, and producers are planning what comes next. It's generally called ultra-high-definition television (UHDTV), but the term encompasses not only higher spatial definition or resolution, but also higher frame rates, immersive sound, and higher dynamic range and wider color gamut, the last two sometimes collectively called "better" pixels. This paper, adapted from a presentation at the 2014 Hollywood Post Alliance Tech Retreat, will outline and compare findings that bring the costs and benefits of higher resolution, higher frame rate, and better pixels into sharper focus.

Current and Evolving Formats: High-Definition (HD), 4K, and Ultra HD (UHD)
HDTV (1080i or 1080p) as we know it refers to a static spatial image resolution of 1920 pixels across by 1080 down. The 4K format discussed today shifts the descriptive number from the vertical to the horizontal dimension and measures 4096 pixels across. Though the term 4K suggests a pixel count of 4,000 or 4,096, much of the so-called 4K equipment on the market is actually 3840 by 2160, which is four times the pixel count of HD, or twice HD both vertically and horizontally. Given the potential legal issues that can arise from this misnomer, the term "UHD" is often applied to such products instead.

The UHD-1 specification embraces the “4K” resolution of 3840 by 2160. Rather than address just resolution, however, UHD also can offer higher frame rates (HFR), higher dynamic range (HDR), wider color gamut (WCG), immersive sound, and possibly even stereoscopic 3D. This paper examines the findings surrounding the use of higher resolutions such as 4K, and then considers those findings along with HFR, HDR, and WCG and the degree to which they can improve perceived video quality.

Working With Higher Resolutions
Within the motion imaging industry, 4K has been a topic of discussion since as early as 2000. In fact, its exploration goes back even further, with the benefits of higher-resolution image acquisition having been the focus of an earlier U.S. Department of Defense study that led Lockheed Martin to develop a true 4K camera — a 3-CCD camera with 12 megapixels on each CCD.1 In the media and entertainment realm, however, the Dalsa Origin, RED One, and ARRI D-21 were launched in 2007 and 2008 as some of the earliest professional 4K cameras for digital cinema. In 2011, The Girl With the Dragon Tattoo became the first large-scale theatrical presentation to be produced in 4K from beginning to end.

By definition, 20/20 vision represents a visual acuity of one arc minute. In a cinema auditorium, therefore, a viewer with 20/20 vision watching a 25-meter-wide screen can perceive 8K detail at a viewing distance of 5 meters, 4K detail at 18 meters, and 3K detail at 26 meters. In practice, however, electronic theatrical releases are simply not yet being shot, produced, or delivered at resolutions above 4K. Of the top 10 box-office hits of 2013, a significant number were either shot on film or animated. Of the live-action movies shot electronically, the top two were shot with ARRI Alexa cameras, whose usable horizontal resolution of 2880 pixels is less than 3K. While other aspects of these cameras, such as HDR capability, may have contributed to the high visual quality of these pictures, higher resolution does not appear to be demanded by viewers.
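The distance figures above follow from simple trigonometry. A minimal sketch, assuming one-arc-minute acuity and the 25-meter screen width used in the text (the function name is illustrative):

```python
import math

def resolvable_pixels(screen_width_m, distance_m, acuity_arcmin=1.0):
    """Horizontal pixel count a viewer can just resolve: the smallest
    resolvable detail at this distance subtends one arc minute, and the
    screen holds screen_width / detail_size such details."""
    detail_m = distance_m * math.tan(math.radians(acuity_arcmin / 60.0))
    return screen_width_m / detail_m

for d in (5, 18, 26):
    print(f"{d} m: ~{resolvable_pixels(25.0, d):.0f} pixels resolvable")
# 5 m: beyond 8K; 18 m: about 4K; 26 m: about 3K
```

At 5 meters the resolvable count is well past 8K, so 8K detail is comfortably perceivable there; at 18 and 26 meters the counts fall to roughly 4K and 3K.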

Source material and viewing conditions, especially screen size and the distance of the viewer from the screen, are critical in examining the perceptual impact of 4K. In its research on 4K viewing in the home, the European Broadcasting Union (EBU) examined viewing at two distances from a 56-inch screen: at 2.7 meters (approximately 9 feet), found by engineer Bernard Lechner to be the average viewing distance in his survey of some American TV viewers, and at 1.5 times the picture height, in this case almost 3.5 feet, which makes for very close viewing. The EBU found a perceptual improvement over HD for viewers watching 4K on a 56-inch screen: about a one-third grade improvement at the Lechner distance and about half a grade at the closer distance. The small, yet statistically significant, improvement gained with the move from either 720p60 or 1080i30 to 4K requires eight times the pre- or post-compression data rate, or roughly 16 times per grade of improvement.
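The "eight times" figure can be checked from raw, uncompressed sample counts. A rough sketch (blanking, chroma subsampling, and bit depth are held equal and ignored here):

```python
def samples_per_second(width, height, frame_rate, interlaced=False):
    # Raw sample rate per picture channel; interlace carries
    # half the lines in each pass, halving the rate
    rate = width * height * frame_rate
    return rate / 2 if interlaced else rate

uhd = samples_per_second(3840, 2160, 60)
print(uhd / samples_per_second(1280, 720, 60))         # vs. 720p60
print(uhd / samples_per_second(1920, 1080, 60, True))  # vs. 1080i
```

The ratio works out to 9 against 720p60 and exactly 8 against 1080i, consistent with the article's approximate factor of eight.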

The problem of dealing with high data rates (and longer processing times and greater storage requirements) is compounded by cable-length limitations. The move from SD-SDI to HD-SDI brought usable cable lengths down, and the shift toward 4K makes this a far more pressing issue, especially in multicamera environments. Consider, for example, a 4K production truck in which it is impossible to move data all the way from one end of the truck to the other without resorting to a new form of metallic cabling or optical fiber.

Despite these issues, higher resolutions of 4K and above (as their cameras permit) are delivering valuable benefits in production. In theatrical production, higher spatial resolutions provide the leeway to shoot images with a “look-around” pad and to crop the desired distribution resolution in post. This technique also allows for reframing and stabilization. In HD broadcast production, 4K is being used by major networks — particularly for sports programming — for the same reasons. In addition to providing more flexibility in creating the HD output, the higher resolution (oversampling) makes filtering easier and also allows single-sensor 3D. It also can be argued that the extra samples on the camera sensor slightly increase the sharpness of the HD picture.

Assessing the Impact of HFR
At the IBC2013 show, the EBU unveiled its viewer tests for HFR. The organization found that going from 60 frames per second (fps) to 120 fps, or from 120 fps to 240 fps (a doubling of the frame rate), can achieve a full grade of improvement. Whereas 4K resolution demands 16 times the data rate per grade of improvement, HFR requires just twice the data rate for the same gain. The EBU also found that motion judder starts to disappear at frame rates above 100 fps, which means that improvements in dynamic resolution can be gained by using a 50 percent shutter in addition to increased frame rates.

Depending on the source material (e.g., content with substantial motion), the impact of HFR can be quite noticeable. In images of a moving train, one with HFR and one without, the stationary tracks and ties are equally sharp. The train itself, however, is much sharper in the picture with a higher frame rate. It is important to note that this difference — the sharpness provided by a higher frame rate — can be seen at distances much greater than those ideal for 4K viewing.

Furthermore, increased spatial resolution seems to demand increased temporal resolution. Consider a 720p60 HDTV signal, with 1280 active pixels per line. If the exposure time is a full frame, a vertical edge can move only one pixel per frame without blurring, so it would take just over 21 seconds to traverse the screen. At 1080p60, it would take 32 seconds. At 3840-pixel, 60-fps UHD, it would take 64 seconds, but only 32 seconds at 120 fps, and less at higher frame rates.
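The traversal times above are simply active pixels divided by frame rate, since an unblurred edge can advance at most one pixel per full-frame exposure. A quick check, using the pixel counts from the text:

```python
def traverse_seconds(active_pixels, fps):
    # An edge moving one pixel per frame stays unblurred within each
    # full-frame exposure; crossing the screen takes pixels/fps seconds
    return active_pixels / fps

print(traverse_seconds(1280, 60))   # 720p60: just over 21 s
print(traverse_seconds(1920, 60))   # 1080p60: 32 s
print(traverse_seconds(3840, 60))   # 2160p60: 64 s
print(traverse_seconds(3840, 120))  # 2160p120: back to 32 s
```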

Examining the Effect of HDR
During one debate about whether there is a need to go beyond HD, it was suggested that a 4K display can offer the best possible HD pictures. An actual comparison of footage from a single ARRI Alexa camera, however, shown both on an HD HDR display and on a 4K display, revealed that the HD HDR display provided apparently superior images.

HDR can increase picture detail by reducing clipping, and it also can offer the contrast that contributes to sharpness. (Though HDR and higher resolution both can increase sharpness, higher-resolution sensors and displays sometimes offer less contrast.) A modulation-transfer function shows available contrast as resolution increases. According to some researchers, the human psychovisual sensation of sharpness is proportional to the area under the curve; others make it the square of the area. In either case, increased contrast, at the “shoulder” of the curve, contributes more to the area than increased resolution at its “toe.” See Figure 1.

Figure 1


Figure 2 shows that with true HDR, where one starts with tremendously more contrast even at very low resolution, the effect on sharpness is even more notable.

Response to less formal HDR demos and surveys suggests that the difference is tremendous and the visual result spectacular. Thus, it seems likely that, from the consumer’s perspective, HDR will prove more desirable than higher resolution. Unfortunately, there is also a potential downside.

Viewers of HDR imagery sometimes report increased perception of motion judder. Filtering of temporally sampled information reduces the visibility of the judder; increasing brightness and contrast makes it more visible. Increased frame rate, therefore, might be necessary to accompany HDR.

What is the cost of the HDR improvement in terms of data rate? Theoretically, nothing.

Figure 2


Bit depth simply determines signal-to-noise ratio, about 6 dB per bit when at least half of the least significant bit (LSB) is noise. In practice, when noise is not carefully managed, a lack of bit depth causes contouring, or "stairsteps," in the image. Contouring is halved with each additional bit and disappears when noise matches or exceeds half the LSB. Raising the bit depth from 10 bits to 12 bits therefore reduces contouring to one-quarter while increasing the data rate by only 20 percent, an increase that likely can be accommodated by existing facilities.
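The bit-depth arithmetic can be sketched as follows. The 6 dB/bit figure is the standard quantization-SNR approximation, and the data-rate comparison assumes everything other than bit depth is held constant:

```python
def quantization_snr_db(bits):
    # Roughly 6 dB of signal-to-noise ratio per bit of depth
    return 6.02 * bits

def relative_contouring(extra_bits):
    # Visible contouring halves with each additional bit
    return 0.5 ** extra_bits

print(quantization_snr_db(12) - quantization_snr_db(10))  # ~12 dB more SNR
print(relative_contouring(2))   # 0.25: one-quarter the 10-bit contouring
print(12 / 10)                  # 1.2: only a 20% data-rate increase
```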

If HDR provides an improvement of even just half a grade (and preliminary evidence suggests it is much higher), then the 20 percent data-rate increase amounts to an increase of just 40 percent, or a factor of 1.4, per grade. Demanding just 1.4 times the data rate for one grade of improvement, HDR appears to offer the best value: the most striking and sharp pictures at a data rate that the industry can manage (not counting the possible increased perceptibility of motion judder).

Increasing Color Gamut
Demonstrations by companies such as Genoa Color Technology have shown that the addition of primaries (cyan and yellow in this example) to a monitor can produce a significant difference in how certain colors appear. If the addition of subpixels is required to enhance color, however, it can come at a cost of display resolution. Other techniques can be used to increase color gamut, and they might not involve any increase in data rate.

Conclusion: 4K vs. Ultra HD?
After all the math is done, it is clear that the combination of higher resolution, HFR, and better pixels presents a significant challenge in terms of data rate. After all, 4K at 60 fps currently uses 12G-SDI. While 4K seems to offer limited benefits under home viewing conditions, HFR, HDR, and WCG (along with immersive sound) yield benefits that extend to a wider (and more typical) range of viewing distances and environments. For the same degree of improvement, 4K data rates (pre- and post-compression) are eight times those of 720p or 1080i, while the data-rate increase for HFR is a factor of two (or four if comparing with interlace), and the data-rate increase for HDR and WCG is minimal.

Viewing tests seem to indicate that 4K offers the lowest perceptual improvement compared with HFR and HDR. It also seems to demand a higher frame rate, and it definitely requires the highest pre- and post-compression data rate. At 60 fps, 4K equipment currently uses 12G-SDI connections; at 120 fps, will it require 24G-SDI? If so, will that preclude higher frame rates? Those planning for the future should consider all aspects of UHD, not just static spatial resolution.


  1. Stephen A. Stough and William A. Hill, “High-Performance Electro-optic Camera Prototype,” SMPTE J, 110:14-146, March 2001.


Copyright 2014 the Society of Motion Picture and Television Engineers, Inc. (SMPTE). All rights reserved.