<img height="1" width="1" src="https://www.facebook.com/tr?id=414634002484912&amp;ev=PageView &amp;noscript=1">
Donate
Mark Your Calendar! April 26th booth selection meeting for MTS2024
Donate

ATC21 Day 1

SMPTE 2021 ATC Schedule


10:00 AM - 11:10 AM

Welcome and Introduction

What Does Virtual Production Mean?

Take a live tour of a virtual production workflow with leaders of the SMPTE RIS On-Set Virtual Production group. Using the Movielabs visual language toolset for the first time, the group will walk through the workflow as represented in the recently published wallchart (October 2021 SMPTE Motion Imaging Journal). You will learn the lingo, gain a high-level understanding of the workflow, and see where productions naturally diverge from it. Join experts Chris Vienneau (Movielabs), Erik Weaver (ETC), Katie Cole (Perforce), Chaitanya Chinchlikar (Whistling Woods International), Greg Ciaccio (Independent), and David Long (RIT) as they join Kari Grubin for this informational session, followed by a live Q&A period.

What’s SMPTE Thinking about for Video Production Help?

In this live session, Standards Vice President Bruce Devlin will discuss the various issues around video production and delivery to the web in support of virtual production. Bruce will also share his thoughts on SMPTE's role as a registration authority, today as well as into the future.

11:15 AM - 12:15 PM

Technical Session - 1

Proxy Workflows for a Secure Remote Production Future

  • Using lower-resolution proxy files for video production has been an option for a number of years; however, advancements in technology, remote-working demands, and a drive toward remote production and the use of cloud technologies have opened up a range of new possibilities. Modern proxy workflows depend on a set of underlying technologies to provide a seamless, secure, and productive user experience. The paper will identify enablers for remote production and provide concrete examples to illustrate how media professionals:
    • Overcome the challenges of collaborating with high-resolution footage
    • Control access to valuable original footage
    • Create unique renditions to simplify workflows and increase traceability
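As a rough illustration of the proxy concept described above (not the authors' workflow), the sketch below uses ffmpeg from Python to transcode a camera original into a low-resolution H.264 proxy that can be shared for remote review while the full-resolution original stays under access control. The resolution, codec settings, and file names are illustrative assumptions.

```python
import subprocess
from pathlib import Path

def make_proxy(source: Path, proxy_dir: Path) -> Path:
    """Create a small H.264 proxy of a camera original for remote review.

    Hypothetical settings: 540-line picture, CRF 23, AAC audio. Only the
    proxy leaves controlled storage; the original is never shared.
    """
    proxy_dir.mkdir(parents=True, exist_ok=True)
    proxy = proxy_dir / f"{source.stem}_proxy.mp4"
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", str(source),
            "-vf", "scale=-2:540",                    # downscale, keep aspect ratio
            "-c:v", "libx264", "-preset", "fast", "-crf", "23",
            "-c:a", "aac", "-b:a", "128k",
            str(proxy),
        ],
        check=True,
    )
    return proxy

if __name__ == "__main__":
    # Hypothetical clip name; requires ffmpeg on the PATH.
    make_proxy(Path("A001_C002_0101.mov"), Path("proxies"))
```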

Multicam Live Production in a Virtual Environment

  • Hardly any movie is made without the use of visual effects (VFX). The power of today's graphics processors allows many of the effects to be rendered in real time, which opens up the possibility of recording in-camera VFX. This technique has been used in the making of several recent movies; the TV show "The Mandalorian", for example, uses a large active LED wall to display its 3D scenery. This way, the actors, director, and camera operators see the sets in real time instead of a green screen. Bringing this innovation to the television studio presents several challenges, such as using a multi-camera setup with synchronous switching of what is displayed on the LED wall, or maintaining a consistent depth of field, to name just two. We have overcome these obstacles with the Ketnet live show "Gouden K's". Beyond big productions, we believe this technique might be even more beneficial for small productions with a limited technical crew. With a second project, VRT wanted to explore the possibilities and limitations of virtual studio production using game-engine technology. The software-based solution we've deployed (running on common PC hardware) allows for a great deal of flexibility, creativity, and high-quality content in real time. Using Unreal Engine and PTZ cameras, we've built a complete interactive 4-input virtual production switcher engine running on one main workstation. Both projects will be described, including lessons learned from an operational point of view.

12:30 PM - 1:30 PM

Technical Session - 2

Toward Generalized Psychovisual Preprocessing for Video Encoding

  • Deep perceptual preprocessing has recently emerged as a new way to enable further bitrate savings across several generations of video encoders without breaking standards or requiring any changes in client devices. In this paper, we lay the foundations toward a generalized psychovisual preprocessing framework for video encoding and describe one of its promising instantiations that is practically deployable for video-on-demand, live, gaming, and user-generated content. Results using state-of-the-art AVC, HEVC, and VVC encoders show that average bitrate (BD-rate) gains of 11% to 17% are obtained over three state-of-the-art reference-based quality metrics (Netflix VMAF, SSIM, and Apple AVQT), as well as the recently proposed no-reference ITU-T P.1204 metric. The runtime complexity of the proposed framework on CPU is shown to be equivalent to a single x264 medium-preset encoding. On GPU hardware, our approach achieves 260 fps for 1080p video (below 4 ms/frame), thereby enabling its use in very low-latency live video or game streaming applications.
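To put the quoted GPU throughput in context, the short sketch below uses only the 260 fps figure from the abstract to compute the implied per-frame preprocessing time and compare it with common live frame budgets; the target frame rates chosen for comparison are illustrative assumptions, not figures from the paper.

```python
# Per-frame cost implied by the reported throughput (260 fps at 1080p, from the abstract).
REPORTED_FPS = 260
per_frame_ms = 1000 / REPORTED_FPS  # ~3.85 ms of preprocessing per frame

# Compare against the total frame budget at typical live frame rates (assumed targets).
for target_fps in (30, 50, 60):
    budget_ms = 1000 / target_fps
    share = per_frame_ms / budget_ms
    print(f"{target_fps} fps: budget {budget_ms:.1f} ms/frame, preprocessing uses {share:.0%}")
```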

ATSC 3.0 as a Use Case for Public Safety Communications

  • Fire and EMS services across the United States still rely on paging technology to communicate emergency incident information. The infrastructure for these paging systems is typically owned, operated, and maintained by the local government or agency to ensure coverage of as close to 100% of the jurisdiction as possible. This paper proposes the use of ATSC 3.0 datacasting technology to serve the paging needs of public safety and uses North Carolina as a test case. This concept could lead to cost sharing, greater collaboration across jurisdictions, and reduced response times for mutual aid requests. The public deserves the best possible response from the public safety sector, and public safety therefore deserves the best technology available to achieve its mission.

1:35 PM - 2:10 PM

Social Session - 1
 
 

2:15 PM - 3:15 PM

Technical Session - 3

Calibrating LED Fixtures and Video Walls to the Camera's Chroma Signal

  • I will cover the shortfalls of the current lighting measurements used by end users and manufacturers to evaluate and calibrate LED lights and video walls to match on camera. I will show how an imbalance can occur in the cameras when the focus of calibration is on color-meter readings instead of the actual response of the camera's image sensor, and I will present the workflow adopted by many virtual studios that use video walls and LED lighting to achieve a harmonious balance between all of these elements based on the response of each camera's image sensor. I will cover the HS Scope and how it guides the end user to a calibrated setting based on the chroma responses of the various cameras, and I will show the need to calibrate for individual cameras and the shortfalls of using global lighting measurements instead of camera-specific measurements. Video walls are being used for VR sets with LED lighting, yet there remains a wide diversity of output quality along with a variety of responses from camera image sensors, and the range of mismatching that can occur is plain to see. I have worked with LEDs and video walls for decades, and using my methodology I have been able to achieve harmony between these elements regardless of the camera, because I use the chroma signal as my guideline. My methodology has now been adopted by some top VR studios globally, and many gaffers have described it as the missing link they have been looking for.

360 8K Viewport-Independent VR

  • 360 VR has been deployed in the past few years using different techniques. Viewport-independent technology is used on 4K content for delivery to 4K-capable head-mounted displays (HMDs) and smartphone devices, resulting in a disappointing experience. The alternative is using viewport-dependent technology with 8K content on 4K-capable HMDs and smartphone devices, which enables a good experience but brings complexities and limitations in terms of integration into existing OTT workflows. The 8K viewport-independent technology presented uses off-the-shelf encoding techniques to compress 8K 360 VR content as a single file and to distribute it in CMAF low-latency DASH mode to 8K-capable devices such as the Oculus Quest 2 or Galaxy S20. This paper will present an end-to-end 8K VR workflow that is entirely cloud-based and capable of delivering high-quality, DRM-protected 8K VR content, compared with 4K content on different devices.
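As a rough sketch of the packaging step only, the example below uses ffmpeg's DASH muxer from Python to publish an already-encoded stream in CMAF low-latency DASH mode. The single rendition, segment and chunk durations, and file names are assumptions, and the presenters' actual cloud workflow and DRM protection are not shown.

```python
import subprocess

def package_low_latency_dash(source: str, out_mpd: str) -> None:
    """Package an already-encoded rendition as CMAF low-latency DASH using ffmpeg."""
    subprocess.run(
        [
            "ffmpeg", "-re",                   # read at native rate to mimic a live feed
            "-i", source,
            "-c", "copy",                      # rendition is encoded upstream; only repackage
            "-f", "dash",
            "-seg_duration", "2",              # 2 s media segments
            "-frag_type", "duration",
            "-frag_duration", "0.5",           # 500 ms CMAF chunks for low latency
            "-streaming", "1",                 # write chunks as they are produced
            "-ldash", "1",                     # low-latency DASH signalling
            "-use_template", "1", "-use_timeline", "0",
            out_mpd,
        ],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical file names; requires a recent ffmpeg on the PATH.
    package_low_latency_dash("vr360_8k.mp4", "vr360_8k.mpd")
```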

3:20 PM - 4:15 PM

Social Session - 2
 

100 Years of Standards Celebration