
Workflow

December 2, 2021

I cannot open my email without being bombarded with adverts and marketing for workflow tools. Nowadays, it seems that every product is growing a workflow tool. It used to be that the asset management system was also the workflow control system in a media facility, but I feel that the trend for massive centralization has reversed and we are now in a much more federated workflow ecosystem. In my opinion, this brings advantages in security, reliability, and data integrity.

Automating a workflow relies on metadata, that is, data about data. A good friend of mine—a workflow specialist—observed that your metadata sits on a ladder of importance. All the metadata you just collected for the current flow has precisely the right importance for the job you are about to do on this step of the ladder. All the metadata on the step above is too abstract and general to be useful. All the data on the step below is too low-level and detailed to be useful. The metadata that you have just acquired is perfect.

A new day arrives, you find yourself on a different step of the ladder, about to perform a workflow very similar to yesterday's, but the metadata you gathered yesterday is now completely wrong for the job. As workflows get more sophisticated and more automated, this problem will accelerate. At one end of the spectrum, we could suck all the metadata from every process into a data lake in the cloud and try to make sense of it in the future. At the other end of the spectrum, we could categorize, organize, and filter the data into optimized, discrete data-warehouse databases, knowing exactly what is in each database on the day it is stored.

The reality is that practical solutions fall somewhere in the middle of these two extremes. Highly repeatable workflows that do not change often are predictable, and the data warehouse approach is very effective, very efficient, and widely used. This could be as simple as a workflow to create the main streaming bundle for a prime-time program.

For highly unpredictable workflows, the data lake approach is effective and efficient, although much of the stored data will never be used. For example, you might urgently need a Korean movie with Swedish subtitles for an airline customer by this afternoon. Nobody knew that a specific query would ever be issued, but you had all the metadata just in case.
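The value of the data-lake approach here is that the query was never anticipated, yet the metadata to answer it was retained anyway. A minimal sketch of that kind of ad-hoc scan over loosely structured asset records (the catalog fields, asset titles, and function names below are invented for illustration, not from any real system):

```python
# Hypothetical asset catalog: every record and field name here is
# illustrative, not from any real media asset management schema.
assets = [
    {"title": "Seoul Story", "origin_language": "ko",
     "subtitle_languages": ["en", "sv"]},
    {"title": "Harbour Lights", "origin_language": "ko",
     "subtitle_languages": ["en", "fr"]},
    {"title": "Midnight Ferry", "origin_language": "sv",
     "subtitle_languages": ["en"]},
]

def find_assets(catalog, origin_language, subtitle_language):
    """Scan loosely structured records for a query nobody planned for."""
    return [
        a["title"]
        for a in catalog
        if a.get("origin_language") == origin_language
        and subtitle_language in a.get("subtitle_languages", [])
    ]

# The urgent airline request: a Korean movie with Swedish subtitles.
print(find_assets(assets, "ko", "sv"))  # → ['Seoul Story']
```

Most of the stored fields are never touched by any given query, which is exactly the data-lake trade-off the paragraph describes: storage is cheap insurance against questions you cannot predict.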

A key element that makes all these systems work is the ability to label the metadata to say what it is and what it is describing. In many cases, creating an ontology (i.e., a vocabulary for a specific segment of the industry) can help interchange these metadata values between systems so that search and workflow results become more predictable. Some artificial intelligence/machine learning (ML) enthusiasts have told me that an ontology is unnecessary because ML is good enough to infer the metadata without explicit instruction. In many cases, I agree, but if there is a way to make the systems more reliable, reduce errors, and increase the interoperability of data, then everybody wins, because ML can then be reserved for more sophisticated correlation problems.
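The mechanics of such a shared vocabulary can be sketched very simply: each system maps its own free-form labels onto the agreed terms before exchanging data. The mapping table and term names below are invented for illustration and do not come from any published ontology:

```python
# Hypothetical controlled vocabulary: local labels -> agreed ontology terms.
# The term names are illustrative, not from any real industry ontology.
VOCABULARY = {
    "lang":       "audioLanguage",
    "audio_lang": "audioLanguage",
    "subs":       "subtitleLanguage",
    "sub_lang":   "subtitleLanguage",
}

def normalize(record):
    """Relabel one system's free-form metadata keys onto the shared
    vocabulary, so searches against the agreed terms behave predictably."""
    return {VOCABULARY.get(key, key): value for key, value in record.items()}

# Two systems describe the same asset with different local labels...
system_a = normalize({"lang": "ko", "subs": "sv"})
system_b = normalize({"audio_lang": "ko", "sub_lang": "sv"})

# ...and after normalization they agree, so a single query serves both.
assert system_a == system_b == {"audioLanguage": "ko",
                                "subtitleLanguage": "sv"}
```

The point is not the trivial dictionary lookup but the agreement behind it: once the vocabulary is written down and shared, two systems that have never seen each other's labels can still exchange metadata predictably.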

The bottom line for SMPTE is that metadata will become a driver for much of the industry in the coming years, and that careful choice of what metadata needs to be written down and made available to the industry in an agreed form will help determine SMPTE’s role in the automated, software-defined future that is already around us.



Bruce Devlin

Bruce Devlin has been working in the media industry for 30 years and is the chief media scientist at Dalet Digital Media Systems as well as the founder of Mr MXF Ltd. and co-founder of the Media Bay LLC. He is well known in the industry for his technology presentations, especially his educational YouTube series, Bruce's Shorts. Devlin has designed everything from ASICs to algorithms. He tweets as @MrMXF, has chaired SMPTE working groups, and literally wrote the book on the MXF format. Devlin is an alumnus of Queens' College, Cambridge, England. He is a member of the International Association of Broadcast Manufacturers (IABM) and the Digital Production Partnership, a fellow and U.K. Governor of SMPTE, a recipient of SMPTE's David Sarnoff Medal, a recipient of the BKSTS Achievement Award, keen to educate the world about media, and a rider of bicycles (occasionally quickly). Devlin is also a recipient of the SMPTE Excellence in Standards Award.
