Automating QC for Language and Captions
The views and opinions expressed in this article are those of the author and do not necessarily reflect the position of the Society of Motion Picture and Television Engineers (SMPTE).
This paper was originally presented by Drew Lanham and Ken Brady at the 2013 HPA Tech Retreat, February 18–22.
For IP broadcasting, advances are coming in QC methodologies for captions, descriptions, languages, and more.
The digital broadcast landscape is crowded with channels, platforms, devices, formats, delivery options, and, above all, raw data. In this environment, the question of how to most efficiently apply closed captions, subtitles, alternate languages, and other metadata to different versions of content has grown dramatically more complex in recent years. Millions of files, possibly billions, tied directly to online broadcasting travel through servers and into consumer homes each day, and a single asset may have dozens of mezzanine files associated with it for different versions and languages. Adding to the pressure, the FCC has applied stricter rules regarding the captioning of Internet video, directly impacting the requirements for distributing such video over IP networks.