NAB Show 2023

ATC21 Day 3

6:00 PM - 7:00 PM
Technical Session - 4


Infrastructure as Code at CBC/Radio-Canada's Media-over-IP Data Center

Speakers: Sunday Nyamweno - Carl Buchmann - Patric Morin

Network infrastructure provisioning is a critical subset of the automated deployment of CBC/Radio-Canada's Media-over-IP Data Center. As the on-air date approached, it became clear that traditional network provisioning methods were inadequate for responding to fast-changing production needs. As we explored automation solutions, we realized that a new NetDevOps culture needed to be cultivated within our organization. This paper presents the architecture and implementation of CBC's automated network deployment workflow, developed in collaboration with Arista. We believe these tools and methods offer a way forward for media-over-IP projects at all scales.
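The core idea of such a workflow can be sketched as rendering device configuration from a structured "source of truth" rather than typing it by hand. The device names, interfaces, and VLANs below are hypothetical, and the snippet is an illustration of the pattern, not CBC's actual implementation:

```python
# Illustrative infrastructure-as-code sketch: render a CLI-style switch
# configuration from structured intent data. All names and values are
# hypothetical examples, not the actual CBC/Arista deployment.

INTENT = {
    "leaf1": {
        "Ethernet1": {"description": "camera-feed-01", "vlan": 110},
        "Ethernet2": {"description": "audio-console", "vlan": 120},
    },
}

def render_config(device: str) -> str:
    """Render an interface configuration snippet for one device."""
    lines = []
    for intf, attrs in sorted(INTENT[device].items()):
        lines += [
            f"interface {intf}",
            f"   description {attrs['description']}",
            f"   switchport access vlan {attrs['vlan']}",
        ]
    return "\n".join(lines)

print(render_config("leaf1"))
```

Because the intent data is plain structured text, it can be version-controlled and reviewed like any other code, which is what makes the NetDevOps workflow repeatable.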


5G Media Streaming and 5G Broadcast for Delivery of DASH/HLS Services

Speaker: Christophe Burdinat

  • Beyond the spectral-efficiency and throughput improvements of Enhanced Mobile Broadband (eMBB), 5G brings many new features targeting vertical applications. To leverage these features and capabilities for media distribution, 5G offers the 5G Media Streaming Architecture (5GMSA). It supports a full set of collaboration scenarios between third-party content providers and mobile network operators, with varying degrees of integration adapted to the OTT ecosystem. While the first version of 5GMSA focuses on media delivery over unicast, multicast/broadcast is one of the key new 5G features currently being specified by 3GPP and expected in Release 17. Integrating 5G multicast/broadcast capabilities into 5GMSA is essential to scale up network capacity for linear content. With the publication of DVB-I in 2020, DVB enables the delivery of linear television services to internet-connected devices over broadband and broadcast networks. As an access-independent service layer, DVB-I is a strong candidate for providing a converging service layer for 5G. This paper will explore the growing capabilities offered by the 5G Media Streaming Architecture, how multicast/broadcast capabilities will be integrated into it, and how it will enable the delivery of DVB-I services.
7:15 PM - 8:15 PM
Technical Session - 5

8K Camera System with Multi-plane Phase-detection Autofocus

Speaker: Kodai Kikuchi

  • We propose a phase-detection autofocus (PDAF) method for three-chip 8K UHDTV2 240-fps cameras that detects the in-focus position of a lens using disparity information between pixels with different apertures across multiple sensors. This multi-plane PDAF method executes pupil division at the same incident position while using simple metal-shielded pixel technology, which achieves accurate AF and enables low-cost implementation. Implemented in an 8K three-chip camera that incorporates 8K sensors with half-shielded phase-detection pixels in the blue and red channels, the proposed method demonstrated more robust disparity calculation than the conventional single-chip metal-shielded PDAF method.
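The disparity calculation at the heart of PDAF can be sketched in one dimension: signals from pixels with complementary apertures shift relative to each other when the image is out of focus, and the shift that best aligns them indicates the defocus. This is a minimal, synthetic illustration of that matching step, not the paper's multi-plane implementation:

```python
# Minimal sketch of PDAF-style disparity estimation: find the integer
# shift that best aligns two 1-D aperture signals (minimum mean absolute
# difference over the overlapping region). Signals are synthetic.

def estimate_disparity(left, right, max_shift=4):
    """Return the shift s minimizing the mismatch between left[i]
    and right[i + s]."""
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(i, i + s) for i in range(len(left)) if 0 <= i + s < len(right)]
        cost = sum(abs(left[i] - right[j]) for i, j in pairs) / len(pairs)
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

# A ramp-edge signal and a copy shifted toward lower indices by 2 pixels:
left = [0, 0, 1, 4, 9, 9, 9, 4, 1, 0, 0, 0]
right = left[2:] + [0, 0]
print(estimate_disparity(left, right))  # → -2
```

In a real camera the disparity sign and magnitude would then drive the lens toward the in-focus position.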

The Problem with Timecode

Speaker: Robert Loughlin and Derek Schweickart

  • Why do we have timecode at all? We take it for granted, but what's the big deal with timecode anyway? Before timecode, film editors used Edgecode (Kodak, 1919) to identify individual pictures on a film reel. With the increased use of videotape in the 1950s, editing became more of a challenge: there was no way to visually see which frame you were on when trying to make an edit, so tools were developed to assist with this.
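What timecode provides, in essence, is a unique human-readable address for every frame. A minimal sketch of that mapping, assuming a simple non-drop-frame 30 fps count (real SMPTE timecode adds drop-frame compensation for 29.97 fps):

```python
# Sketch: convert an absolute frame count to a non-drop-frame timecode
# address HH:MM:SS:FF. Drop-frame handling for 29.97 fps is omitted.

def frames_to_timecode(frame: int, fps: int = 30) -> str:
    ff = frame % fps
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(frames_to_timecode(107892))  # → 00:59:56:12
```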


8:20 PM - 8:40 PM
Social Session - 4

Casual Networking

Join your colleagues during the APAC Session for some light conversation and discuss what you've learned at the event so far.


8:45 PM - 10:15 PM
Technical Session - 6

Live Production System to Handle Video Signals with Various Aspect Ratios

Speaker: Yoshitaka Ikeda

  • With the aim of providing a service that lets viewers watch more attractive TV programs using aspect ratios freely selected according to the creator's intent, we studied the system requirements and specific transmission methods for live production in broadcast stations. We propose using the 16:9 active video area of conventional video signals such as HD/4K as a container, with identifier ancillary data determining the region the user-specified aspect-ratio video occupies within the container. The advantage of this method is its high compatibility with existing systems and conventional workflows. We also considered scalability using multiple containers.
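The geometry behind the container approach can be sketched as computing where a creator-chosen aspect ratio sits inside a fixed 16:9 raster. The structure and field names below are hypothetical illustrations of the idea, not the actual ancillary-data format from the paper:

```python
# Illustrative sketch: compute the centred active-video region that a
# user-specified aspect ratio occupies inside a 16:9 HD container,
# letterboxing or pillarboxing as needed. Field names are hypothetical.

from dataclasses import dataclass

@dataclass
class ActiveRegion:
    aspect_w: int  # e.g. (1, 1) for square video
    aspect_h: int

def region_in_container(region: ActiveRegion, cw: int = 1920, ch: int = 1080):
    """Return (x, y, width, height) of the active area in a cw x ch raster."""
    target = region.aspect_w / region.aspect_h
    if target >= cw / ch:                  # wider than container: letterbox
        w, h = cw, round(cw / target)
    else:                                  # narrower: pillarbox
        w, h = round(ch * target), ch
    return ((cw - w) // 2, (ch - h) // 2, w, h)

print(region_in_container(ActiveRegion(1, 1)))  # → (420, 0, 1080, 1080)
```

Carrying only this small descriptor alongside an unmodified 16:9 signal is what keeps the method compatible with existing infrastructure.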

Automatic essence timing alignment in IP production

Speakers: Geoff Bowen and Andy Rayner

  • This session will examine the timing mechanisms baked into IP media connectivity standards and specifications, how they are typically used in system deployments, and how they compare with the ideal timing requirements of a full system. As more of the production chain becomes virtualized, these timing requirements must be realizable in both broadcast appliance infrastructure and virtual server infrastructure, whether local or cloud-based. We will explore some of the technical work still outstanding to realize end-to-end time-aware systems that can deliver the true potential of a fully 'time-savvy' solution architecture. Fundamental to this work is the concept of timing planes in a production workflow and how these domains are managed in an end-to-end system.
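One basic alignment step in such systems can be sketched simply: given each essence's measured arrival offset relative to a common clock (e.g. PTP-derived), delay every stream to match the latest one so video, audio, and ancillary data leave the system co-timed. The essence names and offsets below are illustrative only:

```python
# Simplified sketch of essence timing alignment: compute the per-essence
# delay needed to co-time all streams to the latest arrival. Offsets are
# in microseconds from a common (e.g. PTP-derived) epoch; values are
# illustrative, not from any real deployment.

def alignment_delays(arrival_offsets_us: dict) -> dict:
    """Per-essence delay (µs) to align every essence to the latest one."""
    latest = max(arrival_offsets_us.values())
    return {name: latest - t for name, t in arrival_offsets_us.items()}

offsets = {"video": 1200, "audio": 250, "anc": 400}
print(alignment_delays(offsets))  # → {'video': 0, 'audio': 950, 'anc': 800}
```

Real systems must of course bound these delays against latency budgets, which is part of the timing-plane management the session discusses.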

Super Resolution Application - Content Modernization

Speaker: Adam Mitchell

  • At Warner Bros., we have decades of digital-native content in old standards: lower resolution, higher framerate, interlacing, and so on. Bringing that content up to modern expectations is currently a manual, labor-intensive process. We are using machine learning techniques as an alternative, which will increase the breadth of content we can bring to streaming audiences by decreasing cost and turnaround time. Our in-development process executes multiple algorithms for each step (upres, frame interpolation, de-interlacing) and measures the quality of each against a ground truth via objective algorithms.
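The evaluation loop described here (run several candidate algorithms, score each against a ground-truth reference with an objective metric, keep the best) can be sketched as follows. PSNR is used as a stand-in metric and the "algorithm outputs" are trivial synthetic data, for illustration only:

```python
# Sketch of objective quality-based algorithm selection: score each
# candidate output against a ground-truth reference using PSNR and
# return the best. Data and candidate names are synthetic examples.

import math

def psnr(reference, candidate, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel lists."""
    mse = sum((r - c) ** 2 for r, c in zip(reference, candidate)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def pick_best(reference, outputs: dict):
    """Return (name, score) of the candidate closest to the reference."""
    scores = {name: psnr(reference, out) for name, out in outputs.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

ground_truth = [10, 50, 90, 130, 170, 210]
outputs = {
    "algo_a": [12, 48, 91, 128, 172, 208],  # small errors
    "algo_b": [30, 70, 70, 150, 150, 230],  # larger errors
}
print(pick_best(ground_truth, outputs))
```

In practice perceptual metrics (e.g. VMAF-style scores) are often preferred over plain PSNR, but the selection loop is the same shape.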