Hot Button Discussion
Technology Trends 2012
By Michael Goldman
As always, following this year's annual Consumer Electronics Show (CES), the blogosphere was busy analyzing "top technology trends" heading into 2012—smartphones, tablets, connected TV, and the like. In the content creation industry, however, the big trends heading into 2012 are more esoteric—concepts that will impact consumers without them ever knowing, or caring, much about them.
That's because, as SMPTE President Peter Ludé suggests, all the amazing new ways for consumers to download, manipulate, and more sharply display content in the home or on handheld devices are beside the point unless content creators keep developing newer/better ways to make, format, and distribute that content for all those platforms in the first place.
"(Broadcast industry veteran, journalist, and SMPTE Fellow) Mark Schubin is fond of saying this is about year 27 of 'the year of HD,'" Ludé chuckles. "That is how it goes in our industry. Trends for us are not so much revolutionary as evolutionary. What will be 'hot' for our industry in 2012 are more things we are already involved with—just moving them further along, making them more practical and useful."
With that in mind, SMPTE Newswatch recently chatted with Ludé, who also serves as Sr. VP of Engineering at Sony Electronics, and colleague Wendy Aylsworth, SMPTE Executive VP and Senior VP of Technology at Warner Bros. Technical Operations, about the nature and direction of some of the key trends they see rising to the fore in 2012 that will have a significant impact on content creators.
4k Workflows
Both Ludé and Aylsworth see the next logical step for 4k as the creation of newer/better/easier/more affordable and accessible workflows for capturing 4k imagery and moving it along the production chain.
As Ludé points out, "4k is now in over ten thousand movie theaters—a standard for six years now. What's different is that now, on the capture side, we have 4k cameras. In just the past few months, Sony's new F65 (CineAlta 4k Camera system) gave us an 8k 20-megapixel CMOS sensor that can create a true 4k image with wider color space and greater dynamic range, and output the image for a 4k workflow (see the April 2011 issue of SMPTE Newswatch). On the consumer side, we have 4k televisions from several manufacturers, there is talk about Blu-ray discs going 4k, and we have home theater projectors in 4k now. At CES this year, even 4k consumer camcorders were being introduced. So that means there is potential for viewing (true) 4k in a home environment. On the other hand, there still isn't much content out there. That could start to change during 2012 as broadcasters begin experimenting with 4k production, including at the 2012 London Olympics. I believe that over the next three to five years, you will start to see much more 4k content."
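To put the 4k data volumes in perspective, a quick pixel-count comparison makes the jump from HD concrete. The frame sizes below are the standard DCI 4k and 1080p containers; the comparison is simple arithmetic, not a claim about any particular camera:

```python
# Rough pixel-count comparison: DCI 4k container vs. 1080p HD.
def megapixels(width, height):
    return width * height / 1e6

mp_4k = megapixels(4096, 2160)   # DCI 4k container
mp_hd = megapixels(1920, 1080)   # 1080p HD frame

print(f"4k frame: {mp_4k:.2f} MP")   # about 8.85 MP
print(f"HD frame: {mp_hd:.2f} MP")   # about 2.07 MP
print(f"Ratio:    {mp_4k / mp_hd:.1f}x")
```

Every stage of a 4k workflow—capture, transfer, storage, rendering—carries roughly four times the pixel data of an equivalent HD pipeline, which is why workflow practicality, not image quality, is the open question.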
High Frame Rates
As discussed in the September 2011 issue of SMPTE Newswatch, the ability to capture imagery at 60 fps and other frame rates with digital cameras has been around for a long time. But with the involvement of major filmmakers like James Cameron and Peter Jackson, among others, high-end feature films are now being developed at newer/higher frame rates, even if the issue of how exactly to widely display them at the frame rates the filmmakers intended has not yet been fully hashed out. Aylsworth calls the movies being made by these filmmakers at higher frame rates "key experiments to deliver high frame rates theatrically. If they are successful, we will probably see increased interest in other channels, as well. That is a challenge for the theatrical market, but fortunately, digital technologies are opening the way for that greatly right now."
Those "other channels" include broadcast, obviously, and broadcasters are already starting to capture entire sporting events at 60 fps. Ludé and Aylsworth, who are both deeply involved in SMPTE's work on higher frame rates for 3D cinema specifically, say there is also talk beginning about how best to get 1080p/60 imagery into consumer homes. There is a long way to go before that starts happening, but both are optimistic that strides will be made in the near future.
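The scale of that delivery challenge shows up in a back-of-the-envelope bit-rate calculation. The 10-bit 4:2:2 sampling assumed here is illustrative (a common studio production format), not a statement of what consumer delivery would actually use:

```python
# Back-of-the-envelope uncompressed bit rate for 1080p at 60 fps,
# assuming 10-bit 4:2:2 sampling (an illustrative studio format).
width, height, fps, bit_depth = 1920, 1080, 60, 10

luma_samples = width * height               # full-resolution Y
chroma_samples = 2 * (width // 2) * height  # Cb + Cr at half horizontal res

bits_per_frame = (luma_samples + chroma_samples) * bit_depth
bit_rate_gbps = bits_per_frame * fps / 1e9

print(f"~{bit_rate_gbps:.2f} Gbit/s uncompressed")  # roughly 2.5 Gbit/s
```

Roughly 2.5 Gbit/s of uncompressed video is double the payload of 1080i or 1080p/30, so every link in the consumer chain—compression, broadcast emission, home interfaces—has to absorb that increase before 1080p/60 reaches living rooms.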
"It will emerge; it's a big, exciting opportunity, thanks to prominent filmmakers committing to do 3D cinema releases at frame rates higher than 24," Ludé says. "The interesting thing is how digital technology has enabled this. I heard from (visual effects pioneer/filmmaker) Douglas Trumbull that he was shooting at 120 fps (using a new version of his famed Showscan system called Showscan Digital) with a 360-degree shutter, which was impossible in the days of film acquisition. In the 3D world, this raises an interesting question about the effect of higher frame rates on temporal fidelity, flicker visibility, and perceived motion, and how that benefits stereoscopic 3D. Right now, we still don't have all those answers."
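The shutter arithmetic explains why Trumbull's setup was impossible on film: with a 360-degree shutter, the exposure lasts the entire frame interval, leaving no dark period for the mechanical pulldown that advances the film. A small sketch of the standard exposure-time formula (the example values are illustrative):

```python
# Exposure time as a function of frame rate and shutter angle:
# t = (angle / 360) / fps.  A 360-degree shutter exposes for the
# entire frame interval -- impossible with a mechanical film shutter,
# which must stay closed while the film is pulled down to the next frame.
def exposure_ms(fps, shutter_angle_deg):
    return (shutter_angle_deg / 360.0) / fps * 1000.0

print(exposure_ms(24, 180))   # classic film look: ~20.8 ms per frame
print(exposure_ms(120, 360))  # Trumbull's 120 fps example: ~8.3 ms per frame
```

An electronic sensor has no pulldown, so it can read out continuously and expose for the full frame interval at any rate.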
For those attending NAB this coming April, SMPTE will be spearheading discussions on the growing importance of the frame rate paradigm shift at the show's Technology Summit on Cinema (formerly DCS) event.
3D Production
Nor is the stereoscopic phenomenon about to go away, Ludé and Aylsworth agree. The big movement, they suggest, could be into what Ludé calls "efficient or low-cost 3D production"—widening the stereo net, in other words.
"I think 2012 will be the year that producers really learn how to make high-quality 3D at a budget that is at, or near, the same as regular HD production," Ludé says. "Existing cameras, workflows, and processing tools will all improve. This will require new production approaches and efficient technology, but as we've seen recently, the cameras, rigs, and workflows are all becoming optimized for this kind of work. What they need now is better quality control standards. Quality assurance for 3D is tricky—how do you quantify edge violations and vertical disparity to determine what is 'good enough'? As we develop more efficient workflows for 3D TV production, we will get a lot smarter about how to assess quality, based on some (new) metrology."
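As a rough illustration of the kind of metrology Ludé describes, a QA pass might flag vertical disparity between matched stereo features, since vertical misalignment between eyes causes viewer discomfort. This sketch is hypothetical—the point pairs and the two-pixel tolerance are invented for illustration, not an industry metric:

```python
# Minimal sketch of a 3D QA check: given matched feature points from
# the left and right eyes, flag pairs whose vertical disparity (a stereo
# alignment error) exceeds a tolerance.  Horizontal disparity is normal
# in stereo imagery; vertical disparity is a rig/alignment defect.
def vertical_disparity_errors(matches, max_pixels=2.0):
    """matches: list of ((xL, yL), (xR, yR)) point pairs."""
    return [
        (left, right)
        for left, right in matches
        if abs(left[1] - right[1]) > max_pixels
    ]

pairs = [((100, 200), (90, 200.5)),   # fine: horizontal offset only
         ((400, 310), (388, 316))]    # bad: 6-pixel vertical misalignment
print(vertical_disparity_errors(pairs))
```

A production tool would derive the tolerance from viewing conditions and screen size; the open question Ludé raises is precisely where such thresholds for "good enough" should sit.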
Ludé suggests that, eventually, 3D camera rigs for budget-conscious shooting will become simpler. Some manufacturers, such as Meduza Systems, are already placing two imaging sensors into a single camera body, although achieving the optimal interaxial distance between the lenses remains a challenge. Ludé also points to research on time-of-flight systems for stereoscopic capture, which measure the time it takes infrared light pulses to travel from the camera to the subject and back, so that independent images can be depth-mapped and assembled later in post as 3D imagery.
"These are approaches to capturing depth by using a single camera for visible-spectrum image composition, color, and texture, and other sensors to calculate and measure depth," he says. "Time of flight might be that other sensor. There are already fascinating light field still cameras (such as Lytro and Raytrix), and they build still images you can re-focus in post. Five years ago, it would have been crazy to suggest such a thing. The big question is whether they have practical applications for movies."
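The time-of-flight principle behind such depth sensors reduces to simple arithmetic: distance is half the round-trip travel time of a light pulse multiplied by the speed of light. A minimal sketch:

```python
# Time-of-flight ranging: distance is recovered from the round-trip
# time of a light pulse, d = c * t / 2 (halved because the pulse
# travels out to the subject and back).
C = 299_792_458.0  # speed of light in a vacuum, m/s

def distance_m(round_trip_seconds):
    return C * round_trip_seconds / 2.0

# A subject 3 m from the camera returns the pulse in about 20 ns:
t = 2 * 3.0 / C
print(f"{distance_m(t):.2f} m")
```

The engineering difficulty is not the formula but the timing: resolving depth to centimeters requires measuring round trips to fractions of a nanosecond, per pixel, at video frame rates.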
The advances in 4k, frame rates, and 3D, along with the maturation of DSLR technology, all take advantage of single-sensor approaches. This trend toward general use of single-sensor cameras will likely continue but, as Ludé points out, it depends on the continued march of microprocessing power. The latest iteration of interest to the content creation community is the Canon Cinema EOS system, introduced late last year; it is, essentially, a video camera with a single, powerful 35mm sensor inside.
"The new generation of 35mm image sensors in cameras like the Canon EOS and Sony F3 enable creative imaging and the use of high-quality cinema lenses," according to Ludé. "But it also creates new requirements for handling workflow. In the past, a signal came out of a single coaxial cable, and you ran it to a recording device. Now, it is not an image—that signal may be extracted as a raw file, without gamma or color correction, which adds substantial flexibility but requires new steps in image processing during post. It's not necessarily hard to do each individual step, but there are so many variables and different approaches that it's important to establish flexible production workflow tools. The difficulty is that you need to process it before you have any viewable picture, whereas before you only had to plug in a single cable for that purpose. So processing has to keep improving. No one would ever have made a CMOS sensor with as many pixels as we have today were it not for improvements in microprocessors."
Storage and Archiving
Ludé and Aylsworth are also excited about the rapid rate of improvement in storage options for media professionals. But Ludé points out that while pure IT-related developments have made RAID storage capacity grow exponentially in recent years, "it has always been a particular challenge to handle the unique requirements of motion images in IT-based commodity storage devices. But we have made progress in that with new file systems and new architectures that optimize disk arrays to handle higher bit rates. We now have memory/RAM on solid-state drives, and companies like Fusion-io are creating systems that allow huge amounts of storage to connect directly to the motherboard or bus of a PC, eliminating some of the choke points that have historically made media storage difficult."
Ludé expects better network interfaces, and the integration of such systems into typical workflows, to improve in coming months, as options increase rapidly for those creating and processing digital content. Long-term storage for archival purposes, however, has proven trickier: virtually the entire industry still insists on backing up content onto some kind of physical media, such as data tape, film negatives, or, in many cases, both. The reason is simple: no one knows exactly how long data can survive and remain recoverable with existing technologies.
Aylsworth notes that stable digital storage media already exist, with more on the way, suggesting that the day of long-term archiving without physical assets may not be as far away as once believed. Tests a few years ago, according to Aylsworth, indicated that the holographic optical discs and cubes developed by companies like InPhase Technologies, for instance, were quite stable in terms of data retention.
"But as they continued to work on it, they could never get the read/write rates to where the media was that efficient," she explains. "It was a great idea, and I laud those companies that looked into it. But they could never get the speed of creating those discs and reading them back to be efficient enough. People are now looking into other methods, but nothing is proven yet."
She adds that while the IT-based cloud storage paradigm shift that is impacting business and commerce generally has numerous media applications (as covered in the December 2011 issue of SMPTE Newswatch), content creators and owners need more security, efficiency, and creative tools, which remote servers alone cannot provide. And since the entire paradigm requires electricity, she expects that physical media will remain relevant in the archival/asset protection world for quite some time to come.
"Some people espouse the notion that storage is strategic—just keep it all online," she says. "But electricity is not limitless, and you can have outages. And besides, for most (studios) these assets are their corporate jewels—they still want some sort of physical representation of them. Even all-digital movies—most studios still put the final version onto color film and three-strip for archiving. And dailies, they are usually saving to LTO (data) tape. But, even then, you need a scheme for how often to refresh those, bring them online, and write them back out again."
All that said, nearly all of these trends share a common theme: data and data workflows. Further, it's clear that, as Ludé puts it, "people will continue to shoot less on film generally in 2012, their workflows will be more file-based, and there will be more ways to watch content. Therefore, source material will need to be processed into many more versions."
That means the work the industry is doing to standardize file formats in various categories—particularly the Interoperable Master Format (IMF) process—is of crucial importance. As discussed in the November 2011 issue of SMPTE Newswatch, standardization methodologies have improved in recent months and will likely continue to improve, so that as new technological developments change the landscape, the industry will have a better chance to keep pace by producing standardized file formats in an efficient manner.
Aylsworth calls this the "non-sexy back office stuff, but it's really important that content owners and developers have a master file format. It will make a lot of these technologies more (feasible) in coming months and years."
In fact, she and Ludé fully expect, as Ludé puts it, that "2012 will become the year that people embrace IMF, and that will help lead us to better archive workflows to preserve pictures and metadata, and so, eventually, we will have data in a format that possibly will be able to last 100 years."
And More ...
There are dozens of other trends that will impact the industry in 2012 and beyond, of course, but two, in particular, are noteworthy, according to both Ludé and Aylsworth—green technologies for production and post, and improved accessibility technologies for the sight- and hearing-impaired.
For accessibility, SMPTE has already created an improved closed-captioning standard to bridge access to captioning of video material from the broadcast realm into broadband video across the Web.
"One of my 2012 predictions is that this will be the year that engineers accelerate work to improve the world of captioning and descriptive text for the vision- and hearing-impaired community," Ludé suggests. "It's become vital to assure accessibility across all the platforms where consumers enjoy content, not just broadcast television."
As it relates to saving power, they point to a couple of interesting developments. First, Ludé emphasizes that cinemas are exploring more efficient solid-state laser projection technologies (still several years away), and that productions are switching to more efficient lighting systems on set, including LED lighting. Second, he says the IT industry is working toward new energy distribution standards for server rooms and data centers by switching from 110-volt AC outlets to 380-volt DC outlets (see the EMerge Alliance Web site for more). "Early investigations show that you can save up to 20% power this way, which, if you are running about 20,000 CPUs in a rendering farm, can be huge," Ludé explains.
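The arithmetic behind that estimate is straightforward. The per-CPU wattage below is an assumption for illustration only; the 20% savings figure and the 20,000-CPU farm come from Ludé's example:

```python
# Rough scale of the savings Ludé describes: 20% of the power drawn
# by a 20,000-CPU render farm.  The per-CPU draw is an assumed figure
# for illustration, not a measured value.
cpus = 20_000
watts_per_cpu = 200          # assumed average draw per CPU
savings_fraction = 0.20      # savings cited from early DC-power studies

farm_kw = cpus * watts_per_cpu / 1000
saved_kw = farm_kw * savings_fraction

print(f"Farm draw: {farm_kw:.0f} kW, potential saving: {saved_kw:.0f} kW")
```

Under these assumptions the farm draws about 4 MW, so a 20% reduction frees roughly 800 kW—enough to make the facility-power case on its own, before counting the matching cut in cooling load.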
Ironically, as Aylsworth points out, more energy-efficient lighting has the potential to be picked up differently by digital imaging sensors. And so, she emphasizes, "every change somewhere has some ramification somewhere else. As companies come up with more efficient ways of doing production, we will have to see if that will have any impact on the standards side."
Industry blogger Matt Jeppsen recently posted an interesting column at the Pro Video Coalition site examining the Lytro light field still camera system and the wider arena of light field technology, pondering its potential applications for media professionals. He points out that Lytro officials say their technology does have potential video applications and that, right now, the only real limitation is the engineering challenge of packing massive processing power into small camera bodies. Jeppsen discusses other challenges, but argues the technology has the potential to capture light and stereoscopic information for manipulation in post by media professionals who are already manipulating raw image data these days anyway. Adding focus and 3D image data manipulation to that mix, he speculates, might have interesting potential. He also offers links and video for more information on the topic.
A new survey by global management consulting and research company Accenture suggests that the era of traditional broadcast and cable television as the primary technology for entertaining and informing consumers in their homes has come and gone. The survey, dubbed the "Global Consumer Electronics Products and Services Usage Report," found that the percentage of consumers who watch television in a typical week has fallen dramatically in just two years, to less than 50% of the population, and, similarly, that the percentage of consumers expecting to buy TVs in the next year has also declined from previous surveys. Instead, as one might expect, consumers are turning as never before to mobile devices—smartphones and tablets, in particular—along with online and downloaded material instead of TV programming. Experts in the survey suggest that consumers still want and need "screens" but are differentiating less and less between television and computers/tablets/phones, with all forms of mobile devices skyrocketing in popularity.
The TechNewsWorld website recently ran a fascinating article about new research into something called "blind quantum computing," which scientists think could eventually provide the foundation for ultra-secure layers protecting data housed in cloud-based systems. The article discusses work done at the University of Vienna and analyzed in the journal Science. The research, according to the article, rests on the idea that data can remain secure and highly encrypted if the computer processing it has no information about the input, the method of performing computations on that input, or the eventual output. It will take a while, though: just demonstrating the concept required researchers to use lasers, optical fibers, lenses, crystals, mirrors, and other gear to create "blind" photons and entangle them with others to encrypt data, and then highly sophisticated algorithms to untangle the whole thing. The article suggests that the methodology, once perfected—which researchers expect could take 10 to 20 years to develop in a practical way—could someday lead to ultra-secure clouds, and could also help institutions that store gigantic amounts of data protect that data more cost-effectively.
Opinions expressed in SMPTE Newswatch do not necessarily reflect those of SMPTE. References to specific products do not represent an endorsement, recommendation, or promotion.