November 2012


Hot Button Discussion

Higher Frame Rates for 3D  

By Michael Goldman 

 

With help from high-profile pushes in recent months by two major filmmakers--Peter Jackson and James Cameron--the subject of higher frame rate presentations for digital cinema, particularly 3D exhibition, has risen dramatically in profile. In fact, as this newsletter was being prepared, the topic was a major focus of discussion at the SMPTE 2012 Symposium, and articles covering the subject in depth have also appeared in the SMPTE Motion Imaging Journal. (Check out the Official SMPTE blog page for recent news coming out of the SMPTE 2012 Symposium on HFR developments.)

 

Jackson, of course, has committed his upcoming films--The Hobbit: An Unexpected Journey, The Hobbit: The Desolation of Smaug, and The Hobbit: There and Back Again--to being photographed at 48 frames per second (fps), and Cameron has announced plans to shoot two Avatar sequels at rates possibly as high as 60 fps. Likewise, filmmaker Douglas Trumbull--the godfather of the high frame rate movement at the feature film level--continues his lobbying efforts and has converted his original ShowScan process to the digital realm in recent years, in an effort to further the industry discussion. These developments, combined with the fact that technology advancements have made both the capture and exhibition of 3D imagery at higher frame rates more feasible and creatively compelling than ever before, make it clear that the notion of HFR exhibition on a large scale is gaining major traction.

 

"Two well-known directors alone [Jackson and Cameron] have announced five different 3D movie projects at higher frame rates to be released in coming years," emphasizes Wendy Aylsworth, senior vice president of technology at Warner Bros., SMPTE's current executive vice president, and the Society's incoming president for 2013. "They are doing it because they feel that [a higher frame rate] makes it easier for the viewer to process and enjoy 3D. The idea is that it is easier to get involved in 3D if your eyes are not straining as much--that allows the brain to process 3D with less strain. That seems to be consistent with what we are hearing from human visual system researchers right now."

The notion that stereo imagery is better processed by the human brain when the imagery is viewed at frame rates higher than 24 fps is at the heart of the work being done by Jackson, Cameron, and Trumbull. Aylsworth emphasizes that various psycho-visual studies over the years seem to support this theory. Such studies, in fact, were discussed at the SMPTE 2012 Symposium by Andrew B. Watson, Ph.D., of NASA, an expert on the topic.

 

The 21DC High Frame Rate (HFR) Study Group's recent report in the September issue of the SMPTE Motion Imaging Journal (available here from the SMPTE Digital Library) also discusses this phenomenon.

 

Thus, as Aylsworth suggests, such research and creative experimentation have convinced filmmakers like Jackson, Cameron, and others that it is easier for the brain to view and enjoy images at a higher frame rate because those images are closer to how the brain perceives the real world through the human eye. 

"Some [researchers] say that humans see images in real life much faster than 24 fps, so they are saying that the faster your brain can get the images, the faster it can process them, and the more realistic, closer to real life, 3D images can be," she adds.

 

On the other hand, for generations, filmgoers have grown used to watching 2D and 3D movies at 24 fps. What is ironic about this dichotomy, she says, is the fact that the term "higher frame rates" means higher than 24 fps, which itself, she reminds us, was an unscientific designation to begin with.

 

"They were using lots of frame rates back about 100 years ago," she says. "But when sound came along and started being put onto the film, they noticed sound artifacts at lower frame rates. While they needed to speed up the rate to get good sound, they didn't want to spend more money by having to print more feet of film. So, eventually, with the introduction of sound on film, they compromised on a standard of 24 fps that provided adequate sound and image. But now with digital [capture and distribution] starting to take over, no longer involving film processing, you don't have to worry about those costs. Filmmakers pushing high frame rates today understand this history, but argue that with those constraints now eliminated, the proper thing to do is go to the highest possible frame rate to add to the storytelling and make it more realistic."

Of course, as those filmmakers forge ahead, there remains the issue of how, when, and under what circumstances the majority of 3D-enabled cinemas can be retrofitted to exhibit movies at frame rates higher than 24 fps. Aylsworth also penned an article in the September issue of the Journal examining this topic, pointing out that only in the past year has the exhibition industry begun upgrading projection facilities worldwide to address it. She prognosticated that "a slow roll-out" would continue through the end of the year, permitting blockbuster-type 3D films like Jackson's Hobbit movies to be shown at their intended, higher frame rates in select theaters.

 

Ongoing upgrades to projection systems in those theaters are generally possible, she adds, because some projection systems with integrated media blocks from major manufacturers require only a software download to increase their frame-rate capacity. In other cases, servers are being replaced with integrated media blocks that can be plugged into existing projectors to permit HFR exhibition. In her article, Aylsworth also discusses how much performance manufacturers can reasonably expect to get out of existing projector designs via these upgrade paths. She points out that, for most modern systems from major manufacturers, an expectation of 60 fps exhibition is certainly reasonable. But, she adds in her article, "as with all engineering changes, there are other subtle impacts that must be addressed, including watermarking, flash rates, and Digital Cinema Package (DCP) transitions in a Show PlayList (SPL)."

 

Playing HFR content in such theaters in the near term will therefore be possible, but not quick, simple, uniform, or seamless. Thus, even with optimism that manufacturers can address those issues quickly enough for some cinemas to project Jackson's and Cameron's movies as intended, Aylsworth emphasizes that these upgrades are still a short-term solution. Long-term, she explains, the 21DC HFR Study Group and others need to come to some sort of consensus about what the standard compression rate and the optimal frame-rate range should be for projection systems of the future. The study group is essentially taking a two-track approach right now: it is studying what is and is not feasible for existing, modern projectors, and what the standards and capabilities should be for future systems.

"That is one of the things [the study group] is looking at--what can current equipment accomplish, and at what compression rates," she explains. "And then, longer range, what could we reasonably expect future equipment to do within a reasonable cost margin, and at what compression rate. All content gets compressed to play out in a theater, otherwise the images would be too big. So the question becomes, what level of compression looks good on the big screen?"

 

Indeed, the aforementioned Study Group report alludes to possible methods of compression. Essentially, the research suggests that the human eye is more sensitive to luminance than to chrominance, meaning the brain might, according to the report, "neglect larger changes in the chrominance without affecting our perception of the image. As a result, the maximum bandwidth required for the luminance component is typically higher than that required for chrominance components."
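
To illustrate the principle the report describes, consider how conventional video systems already exploit it through chroma subsampling. The sketch below is a hypothetical illustration, not drawn from the report itself: the 4:2:0 pattern shown is the common broadcast case, used purely to show how allocating less resolution to the chrominance planes halves the data with little visible loss.

    import numpy as np

    def subsample_420(y, cb, cr):
        """Keep the luma plane at full resolution; halve each chroma
        plane in both dimensions (the 4:2:0 pattern)."""
        return y, cb[::2, ::2], cr[::2, ::2]

    # Three full-resolution planes for a 1080-line image.
    y = np.zeros((1080, 1920))
    cb = np.zeros((1080, 1920))
    cr = np.zeros((1080, 1920))

    y2, cb2, cr2 = subsample_420(y, cb, cr)
    full = y.size + cb.size + cr.size     # all three planes at full resolution
    sub = y2.size + cb2.size + cr2.size   # luma untouched, chroma quartered
    print(sub / full)                     # 0.5 -- half the data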

 

"Right now, in the standards currently published by SMPTE, the maximum compression rate is 250 Mbits/sec," Aylsworth adds. "But, for 3D, you have to double that, and for HFR, at 48 fps, you have to double that again. Keeping the same compression rate means you have twice the artifacts and that can be compounded even more with 4k images. Therefore, to have a higher, better quality image, you are going to have to put out two to four to eight times as many images every second. Those are key components [the study group] is looking at in terms of standards for [systems of the future]."

For the foreseeable future, more theaters will be showing 2D, rather than 3D, versions, and more will remain focused on showing movies at 24 fps rather than at higher frame rates. That reality also presents a challenge to filmmakers as to how best to capture HFR material for present-day exhibition at 24 fps. In fact, capturing at 48 fps or higher can present problems in conforming material to be viewed at 24 fps in current exhibition settings. Aylsworth suggests that dropping frames can cause content creators significant problems across an entire 3D movie, where imagery from two cameras, for two eyes, is required.
 
"Unlike 4k versus 2k, where you can bring the resolution down, in this case, if you shoot at 48 fps and drop every other frame to make it 24, those 24 frames don't have the same kind of motion blur in them that you would have if you shot the material natively at 24," she explains. "The best way to explain it is that every 24th of a second, the camera shutter opens and closes just a fraction before the end of that 24th of a second, and then it takes a new picture. If you take two pictures in that same 24th of a second, you are closing the shutter twice. Each of those images therefore gives you two pictures in the same space of time, so that means there is not much motion blur in those items. If you drop every frame, you get crisper images, but because there is no motion blur in the original take of that image, it looks too clean, crisp, and rigid. That gives it a clean look, and yet, humans are accustomed to seeing video images with motion blur. If you stop a frame on a DVD while watching a movie, the car driving in the background is all blurred. That is the motion blur we are used to. Therefore, no one wants to shoot 48 fps but show it at 24. They want to shoot at 48 so they can show it at 48."

As with many other issues related to new ways of digitally capturing and exhibiting imagery, economic conditions within the industry will determine how far, how soon, and how completely the movement toward higher frame rate capture and exhibition goes. There is no question that, on the technology side, a serious push has been mounted to make HFR filmmaking feasible for 3D shows. That push, coming out of the projects mounted by Jackson, Cameron, and others, is resulting in more significant research and development for manufacturers and the 21DC HFR Study Group to consider.

Even then, industry watchers don't expect any magical, one-size-fits-all HFR standard that will fit neatly into all creative situations and aesthetic preferences. Aylsworth points out, for instance, that HFR for digital presentations will never look exactly like HFR on film. And with the filmgoing public so used to viewing 24 fps imagery anyway, there will no doubt be debates about what looks better and who prefers what.

 

Such aesthetic debates within the creative community are destined to never end, nor should they, Aylsworth suggests.

 

"There will be as many arguments about this as there were about the different looks of film, color film, black-and-white, different stocks, saturated color or muted, resolution, 2D compared to 3D, and so on," she says. "Different frame rates will have a different look and feel. A lot of people will find that wonderful, and others will insist it doesn't look right or isn't filmic enough. Once again, it is about introducing a different palette. In the end, people will stick with what is best for their particular story within the economic conditions of their time, and that's great--it would be really boring if we all agreed on these things."

"Those filmmakers are pioneering--a lot of what they are doing helps feed information to engineers about what can, and cannot, be standardized," Aylsworth suggests. "The first products to market may have aspects that are not standardized because it is too early to have standards, and those movies will involve workarounds to make it feasible for right now.  Hopefully, in the future, we will have standards for how to capture and project this material that will result in a more pleasing [3D viewing experience]. When you think about it, all technology innovation comes from someone trying something first and then, when you try to make it more ubiquitous, you create interface standards. That is usually how it works, and I'm sure that will be the case here."

News Briefs

Documenting Endeavour 

The painstakingly slow, 12-mile final journey of the Space Shuttle Endeavour through the streets of Los Angeles made big headlines in that city recently, and after the massive operation was finally complete, a stunning time-lapse video of the event created by Los Angeles Times photographer Bryan Chan wowed viewers across the Internet. Chan recently posted an interesting blog piece along with the video, which photography buffs will find fascinating, about the process of shooting the Shuttle's march from Los Angeles International Airport to the California Science Center. Before that event, however, the Shuttle had to get to Los Angeles on the back of a Boeing 747. For that effort, a volunteer cinematography team from the Society of Camera Operators (SOC) was on hand to document the ship's landing at LAX. The SOC team, in a project run by Terbine Entertainment, included legendary cinematographer Haskell Wexler as one of its operators and used a state-of-the-art tapeless workflow that relied on Ki Pro Mini recorders from AJA Video Systems mounted, via Noga arms, on three Panavision Genesis cameras. Wexler and the other operators filmed the Shuttle's flyover and landing at LAX using that technology and sent the footage to Deluxe Laboratories, which mastered the Apple ProRes 422-formatted footage free of charge and contributed it to Terbine's project to make a documentary about the Shuttle's final journey. That footage will be combined with other video footage and still photography for a documentary that will eventually play at the upcoming Endeavour exhibit at the California Science Center. Here is a report on the SOC effort.

 


The Rise of IPTV

The global economic slowdown has apparently not interfered much with the unrelenting march of IPTV. According to an interesting article in Broadcast Engineering magazine recently, major IPTV growth in China and Russia, going hand-in-hand with increased broadband penetration generally, has spurred global growth in recent months. Even where the trend has slowed, in places like France and the U.S., IPTV subscriptions are still growing, if at a smaller percentage, while cable and satellite growth is declining. Meanwhile, Russia posted a remarkable 17% IPTV subscription growth in the last quarter, China had 7% growth, and markets such as the Netherlands and Germany also increased substantially. As a result of such trends, the article suggests industry experts are now calling for additional research to confirm the contention that IP has become one of the primary ways to distribute broadcast images across the globe.

 


More Control for Light Field Cameras

Since SMPTE Newswatch first discussed advances in light field still cameras in January 2012, both the art and science of such systems have advanced. Time Magazine's "Technologizer" column reported in October that one of the leading manufacturers in that field, Lytro, has now rolled out manual settings for the software of its Lytro light-field camera, which debuted last year. The idea is to give users more control over both shooting and post-producing their images. Light field cameras capture the direction of light in a shot, allowing refocusing and other manipulations of imagery, via proprietary software after the fact, that would normally not be possible with other camera systems once the picture has been snapped. The Time article says the new manual controls on the camera itself were added to give photographers shutter speed and neutral density filter control while taking the picture, thus controlling how much light is captured. The article suggests that the primary users of such technology will be consumers and professionals who want carefully composed photos to which they can add effects. The technology's video potential, however, remains significant if manufacturers can figure out how to put additional processing power into the compact cameras.