Newswatch e-newsletter

Current Issue - October 2016


Hot Button Discussion

Using the Cloud for Production
By Michael Goldman  

It’s an exciting time for media professionals using Cloud-based computing services for content production and post-production work, because Cloud tools and techniques for particular services have reached the point where they are practical and affordable options for content creators. At the same time, the technology appears to have great potential for opening up new applications to media professionals as well. At least that’s the view of Richard Welsh, a longtime SMPTE standards contributor and CEO of the UK’s Sundog Media Toolkit, a company that offers a variety of Cloud-based post-production software services for the feature film and broadcast industries.

Once Amazon Web Services launched in 2006 as what he calls “the first big commercial Cloud,” Welsh says, the media and entertainment industry immediately became interested in figuring out how to use the Cloud as a commercially viable tool for providing certain types of media services. However, he adds, it wasn’t until recent years that the industry found its first niche on the content side by using the Cloud to provide transcoding services, an offering that he says has “really gone from zero to 100 mph” over the last five years. Other applications soon followed, and more are being vigorously pursued daily, he adds.
 
“I would say [transcoding services] were the first wide commercial use by the media industry of the Cloud, partly because content was less sensitive for that kind of work in many cases than, say, a big Hollywood movie back then, and partly because it certainly offered economy of scale, which the entire business depends on,” Welsh explains. “That kind of work did not involve the same concerns over bandwidth and security as a feature film had, because you were not dealing with such longform and large content, nor were you dealing necessarily with high value pre-release content. So it made a lot of sense at the time to move to a commodity infrastructure model, and an operational expenditure model with those types of services.

“Then, the next really obvious application, the first one that started pushing into the production and post-production area, was visual effects rendering.”
 
Welsh explains that the Cloud rapidly evolved into a logical solution for rendering work on certain projects involving facilities of modest size and scale because it simply didn’t make economic sense for such facilities to have their own massive render farms on site.
 
“Typically, there are a handful of very large VFX companies in the world that have very large render farms,” he says. “They can handle very high-end post-production. But that is not to say that anyone else in post-production does not want to access that same quality. And so, a very early stage application for the Cloud in production was visual effects rendering. One early company that offered this service was called Zync, and they offered normal render engines you might see in any physical facility, allowing you to do the same kind of work in the Cloud. They [started with] Amazon, were subsequently bought by Google a couple of years ago, and now offer that service in the Google Cloud. Now it is possible to do that kind of work safely and practically in the Cloud.”

The words “safely and practically” are key, Welsh adds, because those two concerns have become far less vexing since the early days of the media industry’s use of Cloud services for creative work. In terms of security, even now in the era of piracy and data hacks, he suggests “people’s understanding of how to use security in the Cloud has improved” because content creators now have “a framework to assess the levels of security their service providers are giving them.” That’s due in large part, he says, to the Motion Picture Association of America’s (MPAA) 2015 update to its Content Security Program, which includes a specific set of Application and Cloud/Distributed Environment Guidelines that apply to companies offering Cloud-hosting products and services.
 
In terms of practicality, meaning the availability of stable broadband connectivity strong enough to support the constant back-and-forth streaming of high-end data, he adds that “the ability to directly connect to the Cloud via various infrastructure, from general Internet connectivity to Telecom providers, is becoming more and more commoditized” these days.

“In terms of prices for industrial-scale bandwidth or connectivity, there has been a race to the bottom [in recent years],” he explains. “Telecom providers have lately recognized that they now need to provide flexibility in such services—there has been a real shift in their approach. There are totally media-centric providers like Sohonet and euNetworks, and others that are engaged in a flexible connectivity model.
 
“So once we achieved the comfort level with security, and the availability and flexibility of bandwidth, we knocked down the big hurdles to using the Cloud [for media content applications] because, essentially, all the rest of it is about computers,” he says.
 
And that is where managed service providers come in. Some of the aforementioned applications, like transcoding and visual effects rendering, along with, more recently, mastering and production dailies services, are now readily offered across the industry by providers who “can provision various pieces of software that you need, or set out and try to create ecosystems for clients,” he says.

Such companies, like his own, offer third-party software “within our system, which you can then glue together within the user interface, putting together tools from different manufacturers,” he says. “So our take is to bring to the table a whole load of different services through the Cloud. But, by no means does anyone offer everything end to end. There are still gaps, such as realtime applications, which are still very difficult to provision from a Cloud environment. You can’t really have a full-blown, 4K, realtime color grading suite with uncompressed picture data, where all that hardware is in the Cloud right now, while seeing the picture locally in realtime. But I can’t say that will be true forever. The more things go up there, the more general glue that exists between the services, then I think it will be incumbent on manufacturers to provide virtual versions of their current hardware offerings.”  
 
For the time being, Welsh suggests, the Cloud will continue to fit into modern workflows in one of three ways. For some, it will continue to function as “a generic Cloud—a computing resource with some storage, memory, and processing power available to you, but you will still run your software locally, such as with editing suites and things like that, being able to remote into the machine itself with point-to-point between that machine and wherever you are, with processing living in the Cloud.”

“Then, there is a hybrid of that model where you physically put hardware that you may previously have had in your facility into a data center, so that you can then have Cloud storage, with virtual software services, and some of your own hardware for stuff that doesn’t exist yet as software right now. Some people are putting such hardware units into data centers in order to allow themselves to integrate into a Cloud workflow.”
 
“The third model is a pure software-as-a-service (SaaS) model, where the computing resource and the software itself are all truly Cloud distributed, so that they are not locked to any single machine or any single seat license—a pay-as-you-go model that scales up and down, with your access to it typically being through a Web interface rather than through a virtual machine remote type of interface. It looks like the SaaS model is becoming more dominant, except for particular areas, like editing, which typically needs an output machine model—a generic computer resource in the Cloud. But the SaaS model is more highly virtualized, and allows you to spread work across essentially an infinite number of machines.”

For most users, he suggests, the Cloud will for the foreseeable future function as an increasingly important “piece” of a larger post-production infrastructure.
 
“Typically, many will find themselves bouncing up and down between the facility and the Cloud,” he says. “Some parts of the workflow will be in the Cloud, and other parts on the ground. As more of the work moves into the Cloud, and your data spends more time living there, I feel that will be the point where all the other services will have to move into the Cloud, because it will become onerous to come back to the ground to do that work.”   
 
Further, Welsh says, new ways to fit the Cloud into the creative process are emerging across the industry daily. Indeed, on the heels of this year’s IBC trade show, he believes more applications for Cloud-based technologies are now possible than anyone could have imagined even a couple of years ago.
 
One area where Welsh says he saw tremendous growth this year was storage and content management.

“It’s not necessarily as simple as being able to throw all your media into a Cloud, because there are a lot of pitfalls,” Welsh says. “For instance, if you have uncompressed data, and you need to move that very quickly to a single machine, you can’t just use any generic Cloud storage—you need specialized storage. We are now seeing classic high-performance storage vendors starting to effectively offer Cloud models where they provision storage in the Cloud that mimics their high-performance hardware storage offering.
 
“I’m also seeing a big explosion in asset management. This may take the form of a kind of orchestration of media across multiple services and platforms. It might be as simple as asset management and search capabilities, but it might also extend into really big trend areas like big data. This involves things like ‘audience analysis’ or ‘contextual analysis’ of your content, and then, applying that data back. There are companies like Zadara Storage and others who are now offering storage as a Cloud service, and other Cloud-based managed service providers like SDVI and Base Media, who are gluing together applications and infrastructure and exploring these areas.”
 
In particular, Welsh suggests the area of contextual analysis of media is “one of the most exciting parts of the Cloud right now in terms of overarching media management.”

By contextual analysis, he means that information like “names of actors, details of a scene, locations, settings, time of day, what brands are in a scene” can be tracked and then assessed against audience reaction, “and then used for all sorts of purposes, like marketing, creating new versions of content, alternate endings, and so on.”
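
To make that idea concrete, a per-scene record produced by such contextual analysis might look something like the Python sketch below; the field names are purely illustrative and do not reflect any particular vendor’s schema.

    # Hypothetical shape of one scene's contextual-analysis record; the field
    # names are illustrative only, not any vendor's actual schema.
    scene_record = {
        "title": "Example Feature",
        "scene": 42,
        "timecode_in": "01:02:03:00",
        "timecode_out": "01:02:41:12",
        "actors": ["Lead Actor", "Supporting Actor"],
        "location": "diner interior",
        "time_of_day": "night",
        "brands_on_screen": ["Example Cola"],
        "audience_engagement": 0.78,  # e.g., derived from viewing analytics
    }

    # Once such records exist, they can be queried for the purposes Welsh
    # describes, such as finding every scene that features a given brand.
    def scenes_with_brand(records, brand):
        return [r["scene"] for r in records if brand in r["brands_on_screen"]]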

Another cutting-edge application involves accessing artificial intelligence tools through the Cloud, according to Welsh—what he calls “the use of machine learning and artificial intelligence for media applications.”
 
In this category, he says companies like Video Gorillas are working to make AI tools accessible via the Cloud to film studios, broadcasters, and other media companies to aid creative agendas. This is an example of “the Cloud helping to make certain computerized processes possible [for media companies] that would not be possible for them in any other environment.”
 
“Artificial intelligence could help in terms of cost, accuracy, and quality,” he says. “If you could teach an AI engine to accurately recognize brands present in scenes, then it will eventually learn to see them quicker and better than a person. Brands are typically based on recognizable logos and shapes, which a typical viewer may miss, so machines may be better suited to that kind of thing. Eventually, that can become a commoditized service to help studios and broadcasters with marketing opportunities and to give audiences extra data about what is going on. It could help with everything from Blu-ray menus to online delivery to appraisal of content to search engine optimization, and so on. Right now, such things are laborious and expensive. But the computerization of them with an artificial intelligence approach could become very big eventually, and the Cloud can help facilitate that, since it would require an intensive computing environment that is able to scale.”
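
As a rough sketch of how such a pipeline might be wired up, the Python below samples frames from a clip with OpenCV and hands each one to a brand detector. The detector itself is only a placeholder, since a real system would call a trained logo-recognition model, most likely hosted on scalable Cloud infrastructure.

    # Minimal sketch: scan a clip for brand/logo appearances by sampling frames.
    # Assumes OpenCV (cv2) is installed; detect_brands() is a placeholder for a
    # real logo-recognition model or Cloud vision service.
    import cv2

    def detect_brands(frame):
        # Placeholder: a production system would run a trained model here and
        # return a list of brand names detected in the frame.
        return []

    def scan_clip(path, sample_every=24):
        cap = cv2.VideoCapture(path)
        hits = []  # (frame_index, brand) pairs
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % sample_every == 0:
                for brand in detect_brands(frame):
                    hits.append((index, brand))
            index += 1
        cap.release()
        return hits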

A less clear future for how the Cloud might aid the creative agenda involves the front end of production—shooting imagery and pushing that data directly from the camera head to the Cloud. The industry is currently pursuing this more for ENG-style news work than for filmmaking, of course. But for narrative production, Welsh says, “we are currently seeing the first signs of cameras with direct wireless connectivity, which can effectively be used to uplink to the Cloud” in various ways. Still, for now, such efforts mostly revolve around uploading proxy images via WiFi connectivity, not using the Cloud as the primary capture medium.

“For now, to be sure, everybody is still recording locally and having that data handled by a digital imaging technician, as usual, even if they are uploading proxies simultaneously,” he says. “That traditional workflow remains. They are able to deliver proxies straight from the back of the camera over WiFi and up into the Cloud, but you would have to be pretty adventurous to try and send raw data that way at this stage. I would imagine that in about two years, we might see, as a common accessory from all the main camera manufacturers, either third-party or their own wireless connections, with something like a bonded cellular approach [as some ENG cameras utilize these days for newsgathering purposes], to allow them to upload effectively immediately to the Cloud. But I don’t think we will see them doing that without procuring a local, physical backup for quite some time.”
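
The “proxies up, originals stay local” pattern Welsh describes is straightforward to sketch. The snippet below copies the camera original to a local backup and pushes only the lightweight proxy to Cloud object storage, using Amazon S3 via boto3 purely as an example; the bucket and key names are hypothetical.

    # Sketch of a near-set ingest step: the camera original stays on local
    # storage (handled by the DIT as usual), while the proxy is uploaded to
    # Cloud object storage for remote viewing. S3/boto3 and the bucket name
    # are illustrative assumptions, not a specific vendor recommendation.
    import os
    import shutil
    import boto3

    def ingest_take(original_path, proxy_path, local_backup_dir,
                    bucket="example-dailies-bucket"):
        # 1. A local, physical backup of the camera original remains primary.
        shutil.copy2(original_path, local_backup_dir)

        # 2. Only the small proxy goes straight up to the Cloud.
        s3 = boto3.client("s3")
        key = "proxies/" + os.path.basename(proxy_path)
        s3.upload_file(proxy_path, bucket, key)
        return key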

Meanwhile, as these developments continue, Welsh points out that SMPTE and others are examining where and how new standards work will need to be done on Cloud-related technical issues. In fact, he reports that a new SMPTE drafting group on the Cinema Content Creation Cloud (C4) Identification (ID) System, under Technology Committee 30MR, was launched shortly before press time, with a mission to draft a standard for a unique identifier specifically targeting Cloud applications for media production. The process is just under way, building on work done at the USC Entertainment Technology Center to create the C4 universal identification system for media production data. The drafting group’s work currently has no official timeline, but Welsh hopes a standard will be published sometime before early next year.
 
“The idea is to develop a standardized way to identify content that travels through the Cloud, to verify that you are dealing with the same piece of content all the way through, that it is unique,” he says. “Current systems have problems with uniqueness, or lack thereof, causing clashes and security holes in their algorithms, so SMPTE is working on that problem right now.”
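
The underlying idea is content addressing: derive the identifier from the bytes of the asset itself, so identical content always produces the same ID and different content, for practical purposes, never collides. The Python sketch below illustrates that principle with a plain SHA-512 digest; it is not the C4 ID encoding the drafting group is standardizing, only an illustration of the approach.

    # Illustration of content-addressed identification: the ID is a function of
    # the file's bytes, so it can be recomputed anywhere in a Cloud pipeline to
    # confirm you are still holding the same content. This is NOT the C4 ID
    # format itself, just the general hashing idea behind such identifiers.
    import hashlib

    def content_id(path, chunk_size=8 * 1024 * 1024):
        digest = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()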

News Briefs
Examining Preservation Issues

In advance of SMPTE’s 2016 Annual Technical Conference & Exhibition this month, the StudioDaily site recently published an interview with Andrea Kalas, president of the Association of Moving Image Archivists (AMIA), to preview the Conference’s daylong symposium on restoration and preservation issues. The symposium, dubbed “The Future of Storytelling and How to Save It,” was slated to bring together a range of industry experts to discuss the latest industry trends and concerns as they relate to preserving and restoring both analog and digital motion picture assets, and also how the art of restoration is fundamentally changing in the digital era. In the interview, Kalas pointed out that among the many new challenges facing archivists are the plethora of new formats now available to content creators and the question of how to sort and archive massive volumes of assets from video game projects. She said the symposium and future events would also focus on the challenges of applying high dynamic range and other expansions of color and image fidelity to older content during the restoration process.

Inside Ang Lee's IBC Screening
The filmmaking industry buzzed this month over the release of Ang Lee’s new movie, Billy Lynn’s Long Halftime Walk, which was shot, and is being released, in stereo 3D at 120 frames/sec and 4K resolution. Some reviewers, such as Variety’s Owen Gleiberman, are calling the project “a bold technological leap” and “a revolutionary film.” Central to what Lee attempted, presenting audiences with immersive imagery that Gleiberman calls “a much higher and more granular level of visual data packed into each and every frame,” was making sure that not only the production method, which involved groundbreaking use of Sony CineAlta F65 camera systems under cinematographer John Toll’s watchful eye, but also the screening methodology was exacting. In advance of his upcoming feature article in the December issue of American Cinematographer magazine, writer Benjamin B recently posted on his blog an inside look at the IBC screening of the movie that he attended just as it was hitting the film festival circuit. Benjamin delineates details of the Dolby 6P 3D dual-laser projection system used at the screening, as well as some of the other technical innovations and the impact he believes they had on the viewing experience. SMPTE provided a first look at the film, in its native format, at the 2016 NAB Show's Future of Cinema Conference back in April, as reported by The Hollywood Reporter.

Spectrum Auction Finishes Round
A recent article in TV Technology reports that the ongoing FCC spectrum auction is now moving to a third stage, after the second stage of the process opened and closed on October 19 in just a few hours and a single round of bidding, raising far less than originally anticipated. The article explains that the first stage of the auction began in late March and lasted until August 30, with 27 rounds of bidding, but yielded only $22.5 billion, far less than broadcasters had hoped to garner for the initial clearing target of 126 MHz of spectrum. The second stage targeted 114 MHz and raised $21.5 billion, less than half of what was anticipated. That means still another stage, at a lower clearing target of either 108 MHz or 84 MHz, will now have to be held, according to the FCC’s original plan. The article contains some industry analysis as to why the auction has not, thus far, met original targets, and what analysts think might happen next.