Hot Button Discussion
AI Pushes VR Forward
By Michael Goldman
Kevin Cornish, founder of the Moth + Flame VR content creation studio, is considered a leading director of cinematic virtual reality (VR) entertainment content, yet it was only three years ago that he had his very first experience creating VR product.
“I shot an experimental 360 piece for Taylor Swift,” he recalls. “That was my first experience with VR in 360 degrees. At that point, a 360-degree camera involved [cobbling together] six GoPros. But that was just beta testing—it never went live. The reason was that, at the time, camera quality was not good enough to produce real A-list material.
“Now, however, I feel comfortable that [the technology] is ready. That is a pretty remarkable jump in terms of [image quality capabilities in realtime such as] color quality, the ability to capture shadows, dynamic range, and so on.”
Since that time, Cornish has produced several VR projects, including two that were presented along with 30 other projects from various filmmakers as part of the Tribeca Film Festival’s Immersive Virtual Arcade earlier this year. His projects included Remember Remember, in which users experience an alien invasion, and Fall in Love, a new project in which users converse with non-playable characters to illustrate how one can fall in love with a complete stranger. This project was shortlisted for an Innovation Award at Cannes.
Cornish is hardly alone across the landscape. Tribeca and other festivals featured VR exhibitions and panels during the past year. NAB 2017 offered the Virtual and Augmented Reality Pavilion to attendees, and co-sponsored a Virtual Conference in partnership with the USC Entertainment Technology Center (ETC). And recently, the new Technicolor Experience Center (TEC) debuted as a creative laboratory in which filmmakers and artists can experiment with VR-style filmmaking and produce new, immersive content. For example, the TEC and ETC both helped filmmakers Christine Berg and Simon Shterenberg to produce Wonder Buffalo in the past year. That experience was a hit at Sundance and SXSW—essentially a companion “extra” piece of VR content for audiences who had already watched a live-action short film of the same name about a teenager fantasizing about being a superhero. Another Sundance and Tribeca VR entry was called Tree, from filmmakers Milica Zec and Winslow Porter, which allows users to experience life from the point of view of a tree in the rainforest.
These are just a few examples of the sudden creative explosion in the VR realm, where content creators are striving to create high-end, cinematic VR content experiences far beyond original videogame applications by increasingly employing sophisticated artificial intelligence (AI) algorithms to permit users and digital constructs to engage in highly believable interactions.
For the filmmakers, these initiatives are all about “making the user a first person—placing them in the center of the story,” according to Cornish. “We can make [such presentations] more interactive, but what do we do with that interactivity to make [the experience] more immersive and emotional for the user?”
That question is at the heart of these early efforts, but the new medium is still in the embryonic stages—“pioneering time,” in the words of Nick Mitchell, vice president of immersive technology at TEC. TEC, based in Culver City, California, is a newly built Technicolor facility that is still expanding. Its purpose is to bring Technicolor’s various production and post-production units into contact with artists and partners from the worlds of gaming, animation, computer science, and academia—enabling experimentation with virtual, augmented, and mixed reality, the development of new tools and methods, and the education of the industry on the new medium’s potential for entertaining consumers.
“We are constantly supporting the exploration of new, immersive technologies, and working on artificial intelligence concepts and deep learning concepts specific to [VR],” Mitchell explains. “At this point, we have about 300 researchers working on everything from scriptwriting concepts to production design to virtual production.”
Like Cornish, Mitchell is a newcomer to VR; his involvement began less than a year ago, when he moved over to the facility from Technicolor’s digital cinema joint venture with Deluxe.
“About 11 months ago, I put on an [Oculus Rift] VR headset and had that ‘aha’ moment, which is what got me to take this job,” he explains. “In our industry, when you find someone who has significant experience working in VR and AR, in many cases, they have been doing it for maybe two years. At this point, that makes them experts!”
In other words, traditional industry creative groups, artists, and engineers have the skill sets and expertise to rapidly advance the medium, but they first have to come “to understand the impact that [VR, AR, and AI technology] may have without actually being able to tell them what the specific impact is going to be,” he adds.
And so, Mitchell, along with his colleagues, is challenged to educate content creation industry artists and engineers on why the new medium is worth their time and expertise. After all, “VR has existed on a different plane of operation than many of the broadcast and cinema technology people,” Mitchell emphasizes. “It’s a matter of engaging people who have solved many of these problems [for other forms of cinematic and broadcast content], getting them into a VR headset, and helping them have their own ‘aha’ moment.”
The hope is that such expertise will eventually assist in resolving a series of challenges in producing and distributing VR-related content. These challenges include making sometimes bulky head and body equipment—the user interface and various controls—less invasive, and helping users adjust to initial “warm-up periods” and what Mitchell calls “the giggle factor,” in which some users become overwhelmed before they start responding to cues embedded in content. There is also a learning curve associated with how best to handle resolution, compression, latency, optical flaws, and many other challenges.
At the heart of solving these challenges is the ongoing development of AI tools that can make VR systems more intuitive. But just understanding conceptually how AI technology can enhance and expand a VR experience is a challenge, Mitchell adds.
“An AI system can do everything from driving a non-playable character—a character you might encounter [in a VR production]—to giving that character a personality and specific actions and reactions in its interactions with you,” Mitchell says. “This is all driven by an artificial intelligence algorithm. Some people call this ‘machine learning’ or ‘deep learning,’ and these terms get thrown around a lot. Deep learning is the evolution of artificial intelligence, where we look at huge masses of data to infer how a behavior might occur.”
He adds that some of the concepts and technical possibilities are reaching VR developers from applications now rolling out for consumer products. These include deep learning algorithms and concepts driven by companies like Google, Intel, IBM, and Facebook, among others.
Mitchell says that software companies have made large investments in building this infrastructure. IBM’s AI ambitions include the Watson cognitive computing initiative, and other companies have released intuitive ‘assistant’ technology for consumer electronics applications, such as Amazon’s Alexa, Google Assistant, and [Apple’s] Siri, among others.
Regarding headsets, Mitchell and Cornish both predict the industry will eventually evolve to “a pair of glasses that are not too dorky looking,” as Mitchell states. “You could wear them as glasses but, with the flip of a switch, go to full immersion.”
Cornish agrees, suggesting that “all the cords and sensors in an [HTC Vive] or [Oculus Rift] headset now might be rendered unnecessary because your phone would track, based on the camera in the phone.”
Cornish points out that certain technologies used to create content in this space are now readily available to creatives. Stickier challenges remain in terms of the display technology. He states this is especially true regarding “intuitive input devices,” meaning a better user interface.
“Interacting with buttons is not very intuitive,” Cornish explains. “It’s rather abstract what the ‘A’ button or the ‘B’ or the ‘X’ button does. On the other hand, if you interact based on your head movement, that’s more akin to the way human beings were designed to interact with their world. So we are working on improving the concept of ‘gaze activation’ with these systems—the scene changes for the user depending on where they look. [We] are also exploring the use of voice as input—building stories in scenes based on what you say to change the scene. [Many of us see] the back-and-forth of conversation [as] the Holy Grail of a VR experience, driven entirely by a human connection.”
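The “gaze activation” idea Cornish describes is commonly implemented as a dwell-based trigger: an event fires when the user’s view direction stays within a small cone around a target for long enough to signal intent. The sketch below is a minimal illustration of that pattern, not any studio’s actual implementation; the class name, cone angle, and dwell time are invented for this example.

```python
import math

def angle_between(view, target):
    """Angle in degrees between two 3-D direction vectors."""
    dot = sum(v * t for v, t in zip(view, target))
    mag = math.sqrt(sum(v * v for v in view)) * math.sqrt(sum(t * t for t in target))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

class GazeTrigger:
    """Fires once the gaze has stayed on target for `dwell` seconds."""

    def __init__(self, target, cone_deg=5.0, dwell=1.5):
        self.target = target        # direction from the user to the hotspot
        self.cone_deg = cone_deg    # how tightly the gaze must align
        self.dwell = dwell          # seconds of sustained gaze required
        self.held = 0.0

    def update(self, view_dir, dt):
        """Call once per frame with the headset's forward vector."""
        if angle_between(view_dir, self.target) <= self.cone_deg:
            self.held += dt         # gaze is on target: accumulate dwell time
        else:
            self.held = 0.0         # gaze broke away: reset the timer
        return self.held >= self.dwell
```

The dwell timer is what separates deliberate “activation” from the user merely glancing past a hotspot—the same reason commercial gaze interfaces typically show a shrinking reticle while the timer runs.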
Thus, for purposes of entertainment, combining technological development with creativity is extremely important. Mitchell anticipates that eventually, “algorithm and code writers will be needed on creative teams” for this kind of work.
Moreover, Mitchell points out that “improved artistry” will be the focal point of any believable high-end VR experience. In this sense, it is not just how convincingly the non-playable characters behave in terms of personality and decision-making, but also the authenticity of the most detailed aspects of their appearance and physical responses to the interaction.
“We have people studying behavioral patterns to try and get things right, like getting micro expressions correct in facial movement,” Mitchell explains. “[Artists] have to understand how important it is for blood to flow to the surface of the skin in a certain way when the face moves, for example, as opposed to the [less detailed] way things have been done in the past.”
He emphasizes that this stage of experimentation in VR is more important, at the moment, than the industry’s concerns with achieving specific standards on designing and presenting VR content. The main topics in the standards area, he says, are issues on compression and delivery, as the size and depth of the content keeps evolving.
“[Compression codec] H.264 is popular right now among the VR platforms for 360 video delivery,” Mitchell says. “But it isn’t great. Coming from the cinema world, dealing with JPEG 2000 files at 250 megabits per sec for 24 fps at 2K [2048x1080] and 4K [4096x2160], I’ve had to deal with a big quality hit moving to VR. Here, we are working with or moving toward a world where people want anywhere from 30 fps to 120 fps playback at UHD or greater resolutions, often in the 1:1 aspect ratio [3840x3840], with many common tools imposing harsh limits on the potential bit rate of an H.264 file to conform with the broadcast specifications they were originally designed to support [as low as 50 Mbits/sec]. In terms of quality, we are compressing a lot more pixels at a much lower bit rate, which is not ideal. When we deliver to these platforms at those rates, between 50 and 150 megabits per sec, they then get transcoded. So you start with a compressed file, and then it often gets compressed again before it goes out as an H.264 file, matching the requirements of the device. That’s why the imagery doesn’t always look great in some cases. But we have identified this. Many of the platforms are starting to accept ProRes files, and we are actively working on initiatives to improve MPEG.
“On the packaging and delivery side, SMPTE has solved a lot of problems for the industry with IMF [the Interoperable Master Format], which is a standardized generic broadcast media container and package framework supporting extensible metadata sets. And I think 360 video seems like another opportunity for us to leverage all the good work done there.”
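Mitchell’s point about “compressing a lot more pixels at a much lower bit rate” is easy to quantify with the figures he quotes. The back-of-the-envelope comparison below computes average coded bits per pixel; the 60 fps value is an assumption chosen as a representative point within the 30–120 fps range he mentions.

```python
def bits_per_pixel(bitrate_bps, width, height, fps):
    """Average coded bits available per pixel per frame."""
    return bitrate_bps / (width * height * fps)

# Cinema reference cited above: JPEG 2000 at 250 Mbits/sec, 2K at 24 fps.
cinema = bits_per_pixel(250e6, 2048, 1080, 24)

# VR delivery case: 3840x3840 (1:1 aspect) with H.264 capped near 50 Mbits/sec.
# 60 fps is an assumed midpoint of the 30-120 fps range mentioned in the text.
vr = bits_per_pixel(50e6, 3840, 3840, 60)

print(f"cinema: {cinema:.2f} bits/pixel")  # ~4.71
print(f"vr:     {vr:.3f} bits/pixel")      # ~0.057
print(f"ratio:  {cinema / vr:.0f}x fewer bits per pixel for VR")
```

By this rough measure, the VR delivery case has on the order of 80 times fewer coded bits available per pixel than the cinema reference—before any second-pass transcoding on the platform side—which goes a long way toward explaining the visible quality hit.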
Still, as Mitchell points out, although “it’s very early” for the virtual reality industry, the train has left the station and it isn’t coming back. As a result, the two men say the medium is already infiltrating the larger culture as a regular entertainment option more profoundly than we may realize.
“One of the most exciting things I believe will be happening within the next six months or so is location-based VR [arcades] opening up,” Cornish says. “I think these places will be somewhat like a ride at Disneyland, but instead of happening at Disneyland, you will go to a local movie theater for the VR experience.”
Such developments raise the question: if this burgeoning industry is seeking to be more “cinematic,” and is luring people and concepts from the cinematic entertainment universe into its orbit to create content and solve various technical and creative challenges, is it, in fact, a form of “cinema” at the end of the day?
Cornish and Mitchell believe it is, in the sense that the whole point behind the experience is to arouse the viewer’s/user’s emotions in particular ways.
“This deeper level of immersion is about stepping away from the world outside and losing yourself in a more cinematic world,” Cornish says. “There is a range of [VR experiences] that are more utility-based, but we are talking about experiences that are emotion-based. And stirring emotion is, of course, the most intentionally cinematic aspect of VR.”
The latest developments in the VR world will be examined during a day of programming at the upcoming HPA Tech Retreat UK. A special HPA webcast, with Mitchell participating, previewed that event on June 20.
Could VR Screens Replace TVs?
As explained in a recent TV Technology article, a survey called “Merged Reality” from technology company Ericsson suggests that virtual reality technology may have the potential to change how video and broadcast content is consumed by the public. The survey indicates that a significant number of early adopters of VR products are already shifting their viewing habits from traditional broadcast viewing onto VR screens. It suggests that watching video content may eventually surpass gaming as the primary application for VR headsets, and that many people believe they will eventually watch movies and television routinely while wearing a VR headset. If that plays out, the article indicates, a key consequence could be the impact on television sales, as some households may opt to forego a physical screen in their homes, preferring to “wear” their screen instead.
AR on the iPhone
The term “augmented reality” is not as well known as “virtual reality.” However, according to a recent article in the Washington Post, AR has become a big focus of several technology companies, including Apple. The article states that Apple unveiled a new AR initiative at its recent Worldwide Developers Conference. An analyst quoted in the piece says the company is attempting to illustrate at least 10 potential AR applications using iPhones in the future. They range from gaming to retail, job training, facial recognition, medical diagnoses, and emergency response, among other facets. AR, of course, primarily involves using a device with a screen to project representations of digital objects into the real world. The article states that Apple has already demonstrated how to use an iPhone to overlay an interactive game board onto an actual table.
HPA to Honor Larry Chernoff
The Hollywood Professional Association (HPA) recently announced the organization will give its most prestigious award to a well-known, longtime industry veteran—Larry Chernoff—at the upcoming HPA Awards event in November. The HPA Lifetime Achievement Award, which is not presented every year, is given to people “recognized for service and commitment to the professional media content industry,” according to the HPA announcement. Chernoff is a familiar name to those who work in the post-production industry. He was the founder of the Filmcore, Encore, and R!ot post-production facilities, and served as an executive at Ascent Media for years before joining the board of directors at MTI, becoming that company’s CEO in 2005. The HPA Awards event also includes a special award, presented to an industry professional in the category of Engineering Excellence. Honors will also be presented in 12 creative categories, ranging from editing to visual effects, sound, and color grading.