
Capturing volumetric video

September 21, 2020

Volumetric video is a process that captures images of objects, including people, in three-dimensional space so that they can later be viewed from any angle and at any point in time. Creating highly realistic, dynamic 3D models of people is useful in augmented reality and virtual reality applications. However, high-quality volumetric assets are also increasingly used in 2D movie production, where several views of an actor need to be rendered in a scene.

 

An example of volumetric video technology is the 3D Human Body Reconstruction (3DHBR) system developed by Volucap, which captures moving 3D images of people without the need for 3D sensors. The system creates naturally moving, dynamic 3D models that can then be observed from any viewpoint in a virtual or augmented reality scene.

How the images are captured

The 3DHBR system consists of an integrated multi-camera and lighting system that captures an actor's performance over a full 360°. The images are captured in a cylindrical studio equipped with 32 cameras – each with a 20-megapixel sensor – arranged in 16 stereo pairs.
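To make the rig geometry concrete, here is a small sketch of how 16 stereo pairs could be spaced evenly around a cylindrical studio. The radius and stereo baseline values are illustrative assumptions, not Volucap's actual rig dimensions.

```python
import math

def stereo_pair_positions(n_pairs=16, radius=1.5, baseline=0.12):
    """Place n_pairs of stereo cameras evenly around a cylinder.

    radius and baseline (distance between the two cameras of a pair,
    in meters) are illustrative values, not the real rig's dimensions.
    """
    pairs = []
    for i in range(n_pairs):
        angle = 2 * math.pi * i / n_pairs            # pair's azimuth
        cx, cy = radius * math.cos(angle), radius * math.sin(angle)
        tx, ty = -math.sin(angle), math.cos(angle)   # tangent direction
        half = baseline / 2
        # offset the two cameras of a pair along the tangent
        left = (cx - half * tx, cy - half * ty)
        right = (cx + half * tx, cy + half * ty)
        pairs.append((left, right))
    return pairs

rig = stereo_pair_positions()
print(len(rig), "stereo pairs,", 2 * len(rig), "cameras")  # 16 stereo pairs, 32 cameras
```

Each pair shares a viewing direction toward the studio's axis, which is what makes per-pair stereo matching possible in the processing stage described below.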

 

Unlike other systems that require separate 3D sensors (you may have seen photos of actors with dots placed over their faces and bodies), 3DHBR relies entirely on the images from the 16 stereo pairs of cameras.

 

Lighting is provided by 220 ARRI SkyPanels (compact, ultra-bright, high-quality LED soft lights) mounted behind diffusing fabric, allowing different lighting scenarios that help the captured images appear more realistic.

How the process works

Before use, each camera is tested and calibrated for elements such as color correction to ensure all the captured images allow for seamless integration. After that, difference keying is performed to isolate the foreground object and minimize further processing.
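The idea behind difference keying can be sketched in a few lines: pixels that differ sufficiently from a pre-recorded empty-studio "clean plate" are kept as foreground. The grayscale representation and the threshold value here are illustrative assumptions, not the production pipeline's actual parameters.

```python
def difference_key(frame, clean_plate, threshold=30):
    """Foreground mask via difference keying: keep pixels that differ
    from the empty-studio 'clean plate' by more than `threshold`.
    Grayscale images as 2D lists; the threshold is illustrative."""
    return [
        [abs(p - q) > threshold for p, q in zip(frow, crow)]
        for frow, crow in zip(frame, clean_plate)
    ]

clean = [[200, 200], [200, 200]]   # bright white studio background
frame = [[200, 60], [55, 200]]     # actor occupies two pixels
mask = difference_key(frame, clean)
print(mask)  # [[False, True], [True, False]]
```

Only the pixels marked True need to enter the expensive stereo-processing stage, which is what "minimize further processing" refers to.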

 

The stereo-processing approach consists of an iterative algorithmic structure that compares projections of 3D patches from left to right using point transfer via homography mapping. The resulting depth information from each stereo pair of cameras is fused into a common 3D point cloud per frame. This is why actors don't need to cover their bodies with tracking dots: the 3D geometry is calculated from the information collected by the multiple sets of cameras.
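The last step of that pipeline, turning per-pair depth into one point cloud, can be illustrated with the standard pinhole-stereo relation Z = f·B/d. This is a simplified sketch under assumed intrinsics: a real system works in each pair's calibrated coordinate frame and transforms the points into a shared world frame before fusing.

```python
def disparity_to_points(disparity, f, baseline, cx, cy):
    """Back-project a disparity map from one stereo pair into 3D points
    via Z = f * baseline / d (pinhole model, rectified pair).
    Simplified: real systems also transform into a world frame."""
    points = []
    for v, row in enumerate(disparity):
        for u, d in enumerate(row):
            if d <= 0:           # no stereo match for this pixel
                continue
            z = f * baseline / d
            points.append(((u - cx) * z / f, (v - cy) * z / f, z))
    return points

def fuse_pairs(disparity_maps, f=1000.0, baseline=0.12, cx=0.0, cy=0.0):
    """Fuse the depth from several stereo pairs into one cloud per frame."""
    cloud = []
    for d in disparity_maps:
        cloud.extend(disparity_to_points(d, f, baseline, cx, cy))
    return cloud
```

With 16 pairs contributing points from all sides of the cylinder, the fused cloud covers the actor from every angle in each frame.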

 

Because the resulting mesh per frame is still too complex, a mesh reduction is applied to parts of the image that require less detail, leaving more detail in the more critical parts, such as the face. Once this processing is complete, a sequence of meshes and related texture files is available for further processing.
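Detail-dependent reduction can be sketched with adaptive vertex clustering: vertices snap to grid cells whose size varies by region (fine near the face, coarse elsewhere), and each cluster collapses to its centroid. This is a deliberately simple stand-in for the production decimator, which the article does not specify; real pipelines often use techniques such as quadric edge collapse.

```python
def cluster_vertices(vertices, cell_size_fn):
    """Adaptive vertex clustering: snap each (x, y, z) vertex to a grid
    cell whose size depends on the region, then merge cells into their
    centroids. Smaller cells preserve more detail."""
    merged = {}
    for x, y, z in vertices:
        s = cell_size_fn((x, y, z))
        key = (round(x / s), round(y / s), round(z / s))
        merged.setdefault(key, []).append((x, y, z))
    # one representative vertex per cluster: the centroid
    return [
        tuple(sum(c) / len(pts) for c in zip(*pts))
        for pts in merged.values()
    ]

# Hypothetical region rule: vertices above z = 1.5 m count as "face"
# and get 2 mm cells; everything else gets coarse 10 cm cells.
fine_near_face = lambda v: 0.002 if v[2] > 1.5 else 0.1
```

A full decimator would also rebuild the triangle list against the merged vertices; the clustering step above is where the detail budget is spent.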

Production challenges

There are, of course, some challenges to using this technique to capture volumetric video.

 

Fast Movements

Recording swift movements, even of individual objects, requires quick adjustments to the set's lighting conditions. Typically, this would only be possible with time-consuming reconfiguration during production. 3DHBR, however, uses a fully programmable lighting system that allows different lighting templates to be loaded at the touch of a button.

 

Relighting of Actors

For convincing integration of an actor into a virtual scene, you need the freedom to adjust the lighting after the performance has been captured: for example, when the final illumination of the 3D scene was not known at the time of shooting, or when different light settings are necessary for particular clothing or movements. Several convenient methods have been developed to transfer the lighting of the 3D environment onto the actor, and these integrate seamlessly with current post-production tools such as Autodesk Maya, Nuke, or Houdini.

 

Treadmill

The cylindrical studio used by 3DHBR has an average diameter of 3 meters. This is fine for stationary scenes, but to record an actor walking, a treadmill is included in the studio. This allows the walking movement to be recorded in a single take, or a short walking cycle to be recorded and looped, a technique often used in traditional animation.
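Looping a short captured cycle is conceptually just wrapping the playback time modulo the cycle length. A minimal sketch, assuming a fixed frame rate and a cycle stored as a list of per-frame meshes:

```python
def looped_frame(t, fps, cycle_len):
    """Index into a short captured walk cycle so it repeats endlessly:
    playback time t (seconds) wraps modulo the cycle length (frames)."""
    return int(t * fps) % cycle_len

# A 2-second walk cycle at 30 fps (60 frames) repeats forever.
print(looped_frame(0.0, 30, 60), looped_frame(2.0, 30, 60),
      looped_frame(3.5, 30, 60))  # 0 0 45
```

For a seamless loop, the capture itself must start and end on matching poses; the indexing above only handles the repetition.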

Grading

The diffuse lighting system used by 3DHBR allows a subject to be lit from any direction, but the texture of the objects being captured can appear flat, without internal shadows. Different color gradings are applied to provide the best input data for the following algorithm modules:

  • Keying
  • Depth processing
  • Creative grading for texturing

For keying, a high-saturation grading is applied to distinguish the foreground object as clearly as possible from the lit white background. For depth processing, the goal is the best possible representation of structures: particularly dark parts of the actor, for example, are graded brighter.

A final grading is then used for back projection to recreate a natural skin tone for the actor and to add texture to the final 3D model.
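Two of these per-module gradings can be sketched as simple per-pixel operations. The formulas and parameter values here are illustrative assumptions, not the pipeline's actual grading curves: saturation is boosted by scaling each channel's distance from the pixel's luma, and shadows are lifted by a flat offset.

```python
def grade_for_keying(rgb, saturation=1.8):
    """High-saturation grading for keying: scale each channel's
    distance from the pixel's Rec. 709 luma (illustrative sketch)."""
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return tuple(
        min(255, max(0, luma + saturation * (c - luma))) for c in rgb
    )

def grade_for_depth(gray, lift=40):
    """Lift shadows so dark parts of the actor keep visible structure
    for the stereo matcher (lift value is illustrative)."""
    return min(255, gray + lift)
```

Each module therefore sees the version of the footage best suited to its job, while the final texturing grade restores natural colors.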

End-to-end encoding

Volucap also developed several additional tools and processing modules for use with 3DHBR. The 3D video processing workflow creates independent meshes per frame, each with its own topology and texture atlas. The registered mesh sequence is then compressed and multiplexed into an MP4 file, which can be loaded by dedicated plugins for Unity or Unreal applications.

 

On the receiver side, plugins for Unity and Unreal allow for easy integration of volumetric video assets into the target augmented reality or virtual reality application. These plugins include a demultiplexer as well as corresponding decoders that perform real-time decoding of the mesh sequence. 
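The mux/demux relationship can be illustrated with a toy length-prefixed byte stream. This is only an illustrative stand-in: the actual workflow uses a standard MP4 container, whose box structure is far richer than this.

```python
import struct

def mux_frames(frames):
    """Pack per-frame mesh payloads into one byte stream with a simple
    4-byte big-endian length prefix per frame. Illustrative stand-in
    for the real MP4 multiplexing step."""
    out = bytearray()
    for payload in frames:
        out += struct.pack(">I", len(payload)) + payload
    return bytes(out)

def demux_frames(stream):
    """Inverse of mux_frames: split frames back out, as a receiver-side
    plugin's demultiplexer would, before handing them to the decoder."""
    frames, offset = [], 0
    while offset < len(stream):
        (n,) = struct.unpack_from(">I", stream, offset)
        frames.append(stream[offset + 4 : offset + 4 + n])
        offset += 4 + n
    return frames

packed = mux_frames([b"mesh0", b"mesh01"])
print(demux_frames(packed))  # [b'mesh0', b'mesh01']
```

In the real pipeline the demultiplexed payloads would then go to mesh and texture decoders running in real time inside the engine plugin.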

 

Volumetric video capturing is still a work in progress, but the technology is advancing quickly. How directors will use it is also evolving, but expect to see some interesting volumetric special effects in upcoming movies and TV shows over the next few years.

 

Tag(s): 3D, Immersive Media
