
Progress With Camera and Lens Metadata

December 14, 2022

The maturation of on-set virtual production techniques has offered content creators a host of new options, but it has also reinforced the need for the industry to find better solutions to a long-standing challenge that has plagued the visual effects world: how best to capture, move, track, and utilize camera and lens metadata as it travels downstream into the post-production process. Jim Helman, CTO at MovieLabs, a member-at-large of the SMPTE Board of Governors, and chairman of the camera and lens metadata sub-group within SMPTE’s Rapid Industry Solutions (RIS) On-Set Virtual Production Initiative, emphasizes that virtual production has put a new spotlight on this issue, but that the general challenge of tracking metadata efficiently from its source downstream through the production and post-production processes has long been a vexing one.

“A lot of metadata is created on set these days, but whether it is camera logs and notes, or script supervisor notes, or detailed camera and lens information—all sorts of things that are very useful downstream don’t get there cleanly,” Helman explains. “When metadata is not readily available where and when it is needed downstream, it is often time-consuming to try to find it, to go back to the last department or individual that might have had that information or might know where it is. This has always been especially salient in visual effects. Often, metadata doesn’t make it downstream for VFX use at all, or it makes it down in a form which cannot be trusted, which is not reliable. A lot of times, therefore, such metadata ends up having to be back-engineered as best it can be from other source material.

“On-set virtual production basically has the same problem in this regard, but in this case, the process needs to happen in real time. Therefore, you don’t have the option of going back and trying to reverse engineer data from certain images that can indicate what the lens characteristics and settings were: where the focus distance was set, what the depth of field was, what the lens distortion and shading characteristics were, and so on. You need this information for 3D rendering and compositing in order to match the appearance and bokeh of what the camera captured. Therefore, in recent years, camera and lens data has been identified as one of the most important areas in which to bring together different efforts and experts from around the industry to develop a critical mass to address this issue. [To do that, they need to define] the necessary camera and lens data and do it in a way that addresses the interoperability problem—that is, how to translate data from different camera and lens systems so that it makes it downstream in a usable form.”
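
Even a simple calculation shows why trustworthy lens values matter downstream: given the focal length, aperture, and focus distance Helman mentions, a tool can compute depth of field directly rather than estimating it from the images. The Python sketch below uses the standard thin-lens approximations; the function names and the 0.025 mm circle of confusion are illustrative choices for this article, not values from any SMPTE or RIS document.

```python
# Illustrative only: standard thin-lens depth-of-field math, showing what
# downstream tools can derive once focal length, aperture, and focus
# distance arrive as trustworthy metadata rather than guesses.

def hyperfocal_distance(focal_mm: float, f_number: float, coc_mm: float) -> float:
    """Hyperfocal distance in mm for a given circle of confusion."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def depth_of_field(focal_mm: float, f_number: float,
                   focus_mm: float, coc_mm: float = 0.025) -> tuple:
    """Near and far limits of acceptable focus, in mm."""
    h = hyperfocal_distance(focal_mm, f_number, coc_mm)
    near = focus_mm * (h - focal_mm) / (h + focus_mm - 2 * focal_mm)
    far = (focus_mm * (h - focal_mm) / (h - focus_mm)
           if focus_mm < h else float("inf"))
    return near, far

# Example: a 50 mm lens at T2.8 focused at 3 m (3000 mm).
near, far = depth_of_field(50.0, 2.8, 3000.0)
print(f"in focus from {near:.0f} mm to {far:.0f} mm")
```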

Lots of work has been done in recent years on this issue, but as Helman notes, bringing it all together into “a critical mass” has been the big challenge.

MovieLabs, for instance, as part of its larger 2030 Vision Initiative for building and securing content creation workflows, published an ontology for the exchange of data in media pipelines, which includes a specification on camera and lens parameters as a tool for the industry. Likewise, the Visual Effects Society (VES), as part of its VES Camera Reports program to help the industry figure out ways to more accurately build digital replicas of real-world environments captured by cameras, has introduced what it calls a Camera Report Interchange Format Specification. That specification was designed to be an open-source system for capturing and moving visual effects camera-related information from capture downstream through the post process. The VES and others also recently published a Virtual Production Glossary. Additionally, various members of the American Society of Cinematographers (ASC) and others have also been producing research on this topic for years.

Helman also emphasizes that a great deal of camera/lens data is available to the industry via the various manufacturers who have published SMPTE Registered Disclosure Documents (RDDs) that detail the technical characteristics of their tools and how they encode files. Among the documents in SMPTE’s RDD index are ones from major manufacturers like Sony and ARRI that explain their methods for acquiring, encoding, and mapping camera metadata.

“These RDDs are quite valuable, but they aren’t standards,” he states. “Standards try to find the best solution to a technical problem and provide a way of doing that for the whole industry. The RDDs are basically vendors documenting and disclosing how their systems work, and what the outputs of those systems mean. However, many of them are quite different because manufacturers take different approaches to things.”

In recent years, SMPTE has also maintained a Metadata Registry that helps users identify a piece of metadata and then encode it for carriage. The Registry, which has been updated over the years, “covers a huge swath of metadata,” Helman relates. “It includes everything from codecs and color spaces to metadata for very specific uses, like digital cinema. For example, [the file format] MXF uses Key-Length-Value (KLV) encoding. A lot of SMPTE specifications that need metadata will therefore register in the SMPTE Registry an identifier for that metadata value and how it is encoded. So, the SMPTE Registry world is big, and covers some camera and lens metadata, including some defined in RDDs.”
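
To make the KLV idea concrete, the sketch below decodes a single Key-Length-Value triplet the way MXF lays one out: a 16-byte SMPTE Universal Label key, a BER-encoded length, then the value bytes. It is a minimal illustration only; real MXF parsing adds partitions, local sets, and registered keys that this example does not touch, and the sample packet uses a made-up key.

```python
# A minimal sketch of decoding one KLV (Key-Length-Value) triplet as used
# in MXF: 16-byte Universal Label, BER length, then the value bytes.

def read_klv(buf: bytes, offset: int = 0):
    key = buf[offset:offset + 16]          # 16-byte SMPTE Universal Label
    pos = offset + 16
    first = buf[pos]
    pos += 1
    if first < 0x80:                       # short-form BER length
        length = first
    else:                                  # long form: low 7 bits = count of length bytes
        n = first & 0x7F
        length = int.from_bytes(buf[pos:pos + n], "big")
        pos += n
    value = buf[pos:pos + length]
    return key, value, pos + length        # next read offset

# Example with a dummy all-zero key and a 3-byte value:
packet = bytes(16) + b"\x03" + b"abc"
key, value, nxt = read_klv(packet)
assert value == b"abc"
```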

Coalescing these efforts is thus an important enterprise, and one now well underway thanks to SMPTE’s aforementioned RIS for On-Set Virtual Production.

“The SMPTE OSVP group has created a common forum where experts from across the industry have come together to define the essential camera and lens metadata parameters for both OSVP and traditional VFX, and to define mappings from what the camera and lens makers currently provide in their metadata outputs into a common, usable, machine-readable format that is aligned with a common set of definitions,” Helman says. “The RIS Initiative is designed to get everyone on the same page on what parameters matter most, and then to get into deeper technical detail and specificity on those parameters, to make sure that they are mathematically clearly defined. One important thing about this initiative is that we have all the major camera and lens makers in the room, participating—Sony, Canon, ARRI, RED, Blackmagic, Zeiss, Cooke, Fujinon, and others. We also have experts from both the ASC and VES and people who are doing on-set virtual production every day.

“What we are doing currently is to document this core set of parameters and to provide sample source code that can ingest metadata files from the major camera makers, perform any translations, and output the parameters in a common JSON format. So, the output of the group will be documents and code published on a Git repository. The initial release will cover the essential camera and lens metadata for traditional VFX. Subsequent releases will cover OSVP and camera tracking metadata.”
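
The group’s actual parameter definitions and sample code will live in its published repository; the sketch below is only a hypothetical illustration of the kind of translation Helman describes. The vendor field names are invented stand-ins for real camera-maker metadata, and the common parameter names are invented stand-ins for the group’s actual definitions.

```python
import json

# Hypothetical sketch of a vendor-to-common metadata translation. Every
# field name here is invented for illustration; the real names, units, and
# JSON schema are defined by the RIS group's documents, not this example.

VENDOR_FIELD_MAPS = {
    "vendor_a": {"FocalLen": "focal_length_mm",
                 "FocusDist": "focus_distance_mm",
                 "Iris": "t_stop"},
    "vendor_b": {"lens.fl": "focal_length_mm",
                 "lens.focus": "focus_distance_mm",
                 "lens.tstop": "t_stop"},
}

def to_common(vendor: str, record: dict) -> dict:
    """Map one vendor metadata record onto the common parameter names."""
    mapping = VENDOR_FIELD_MAPS[vendor]
    return {common: record[raw] for raw, common in mapping.items() if raw in record}

sample = {"FocalLen": 50.0, "FocusDist": 3000.0, "Iris": 2.8}
print(json.dumps(to_common("vendor_a", sample), indent=2))
```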

Making progress regarding the aforementioned “interoperability problem” is the core goal of this ongoing effort, Helman emphasizes. 

“From an interoperability perspective, the Nirvana or best world would be a place where everyone decides to use a common set of parameters, at least in those areas where people agree on what’s needed,” Helman says. “We have initially been focusing on what we are calling the essential parameters, where we can get that agreement. But that said, even where everyone agrees on a need, there can be multiple technologies or workflows with different characteristics that address the same problem and that do not fit a single data model exactly. For example, the methods used by camera and lens makers and how users map those into workflows for VFX and OSVP can and do vary. And, of course, everything is evolving.

“So, I don’t think we will ever get to a place where everyone is using a common set of parameters that addresses all their needs. But even if we don’t get to complete commonality, at least we can get to a ‘base set’ that can make many things better for people downstream. Then, if you want to build on top of that by having additional or replacement parameters that you are able to capture and use, that’s great.”

Helman hastens to add that this initiative is not, per se, an effort at standardization for the handling of camera and lens metadata, but rather, it is an effort to make things easier for the industry right now whilst laying a foundation for any areas where standardization might be appropriate in the future.

“The OSVP RIS initiative is distinct from the standards development process within SMPTE,” he explains. “But we are gathering and laying the groundwork to enable others to build on what we are doing. For example, a camera manufacturer might provide an option in their software tools that would let users export data in this format, or a virtual production rendering engine like Unity or Unreal might map some of these parameters into their virtual camera models. And where standards are needed, the RIS work can serve as an incubator or onramp to the standards process.”

Meanwhile, Helman expects that as the industry strives for interoperability in this area in the coming years, software will play a major role.

“We could get a SMPTE standard that documents this metadata and how it should be encoded,” he says. “But I also think the Academy Software Foundation and the work being done on the VES Reference Platform could lead to implementations in open software that become an important part of the common infrastructure that supports our industry. Software is often the driving factor in these cases, and so, some aspects of what that software does may eventually need to be incorporated into standards. That has happened in other areas in the past, such as with OpenEXR [an open specification, now a project of the Academy Software Foundation, originally created as part of a suite of VFX software tools at ILM], parts of which were documented in a SMPTE standard for ACES. For interoperability reasons, having some [aspects of certain open-source projects] codified into SMPTE standards can be useful. I think that type of model is something we are moving toward more and more.”

In fact, he adds, that’s one reason the outputs of the OSVP RIS project for camera and lens metadata include software for interpreting different camera makers’ formats and translating them into common formats.

“Maybe that will become an open-source project that has a life of its own, or maybe it will just be a tool for helping us make sure that we are documenting things on a solid foundation,” Helman says. “But I will say that for interoperability, some kind of standardization will probably be helpful within the area of camera tracking metadata, especially when it comes to on-set virtual production. It’s kind of a Wild West right now with no standards for how information gets transmitted from the devices capturing it to the rendering engines. That is one area where there is a real opportunity for standards—on the protocol side to get this tracking metadata transmitted to the rendering engines in real time using standard protocols. At the wire level, standards can do a lot to improve interoperability, but it all starts with first agreeing on the metadata, which is what we’re focusing on for now.”
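Because no such protocol standard exists yet, any concrete example is necessarily speculative. The sketch below imagines a per-frame tracking payload sent over UDP to a render engine; every field name is invented for illustration, and a real standard would need to address timing (genlocked timecode rather than wall-clock time), transport reliability, and much more.

```python
import json
import socket
import time

# Hypothetical only: there is no standard tracking protocol today (the
# "Wild West" Helman describes), so every field name below is invented.
# The sketch just illustrates the kind of per-frame payload a standard
# would need to define for real-time delivery to a rendering engine.

def send_tracking_frame(sock: socket.socket, addr, frame: dict) -> None:
    packet = {
        "timestamp": time.time(),        # a real standard would use genlocked timecode
        "camera_id": "cam_a",
        "translation_m": frame["pos"],   # x, y, z in meters
        "rotation_deg": frame["rot"],    # pan, tilt, roll
        "focal_length_mm": frame["fl"],
        "focus_distance_mm": frame["focus"],
    }
    sock.sendto(json.dumps(packet).encode("utf-8"), addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_tracking_frame(sock, ("127.0.0.1", 9000),
                    {"pos": [0.0, 1.6, -3.0], "rot": [12.0, -4.0, 0.0],
                     "fl": 50.0, "focus": 3000.0})
```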


Michael Goldman
