Machine Learning Delivers a Breakthrough in Color Correction

April 12, 2021

In the April issue of the SMPTE Motion Imaging Journal, we explore using machine learning to color-correct specific brand colors pixel by pixel, on the fly. In our brand-centric world, this technology has enormous potential in a wide range of applications, especially the profitable arenas of live sports and advertising. Whether it’s the red in a Coca-Cola ad or the exact shade of a Cincinnati Reds uniform, the precise color is a highly valued brand asset.

Displaying brand colors consistently and correctly is a complex matter. It is affected by input from perhaps dozens of cameras with different lenses and angles; by the weather, time of day, and mix of natural and artificial lighting at the event; by other physical variables such as sweat on jerseys, light, and shadows; and by different screen displays, reference monitors, and even the perception of the color-correction specialists themselves. In addition, current color-correction tools, which rely on manual adjustment of brightness and of the red, green, and blue levels in the image, affect the entire frame, shifting not just the desired color but also the surrounding colors of skin tones, the opponent’s uniforms, the sky, and the grass.

To address the challenge of localized color correction, the Clemson University Research Foundation assembled a diverse team of technologists, ranging from undergraduates to post-doctoral researchers, with backgrounds in computer engineering, industrial engineering, bioengineering, and physics, as well as feature film production and marketing. Their combined creative and technical expertise allowed them to address both the art and the science of the project. The result is prototype software dubbed ColorNet.

ColorNet is based on machine-learning algorithms that recognize patterns in an initial training data set, analyze them, and develop a model that captures trends in the data. In this case, the sample set included thousands of frames of both color-corrected and raw footage featuring football uniforms in the iconic Clemson orange. The neural network was trained to target this specific brand color only and to adjust it automatically, pixel by pixel, without shifting surrounding colors. In alpha tests, the prototype system has performed with impressively high accuracy and computational efficiency.
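To make the idea concrete, here is a minimal sketch of how a small fully convolutional network could learn per-pixel corrections from pairs of raw and hand-corrected frames. This is an illustrative assumption only; the actual ColorNet architecture, layer counts, loss function, and training procedure are described in the paper and may differ.

```python
# Illustrative sketch (not the published ColorNet design): a tiny fully
# convolutional network that predicts a per-pixel RGB offset, trained on
# matched pairs of raw and colorist-corrected frames.
import torch
import torch.nn as nn

class PixelColorCorrector(nn.Module):
    """Predicts a per-pixel RGB residual so only learned target colors shift."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),  # per-pixel RGB offset
        )

    def forward(self, raw):
        # Output = input + learned offset. For pixels that are not the target
        # brand color, the network should learn an offset near zero, leaving
        # skin tones, grass, and sky untouched.
        return torch.clamp(raw + self.net(raw), 0.0, 1.0)

def train_step(model, optimizer, raw_frames, corrected_frames):
    """One optimization step on a batch of (raw, corrected) frame pairs."""
    optimizer.zero_grad()
    predicted = model(raw_frames)
    loss = nn.functional.mse_loss(predicted, corrected_frames)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = PixelColorCorrector()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Stand-in random tensors; in practice these would be thousands of matched
    # raw and color-corrected frames, as in the Clemson training set.
    raw = torch.rand(4, 3, 256, 256)
    corrected = torch.rand(4, 3, 256, 256)
    print("loss:", train_step(model, optimizer, raw, corrected))
```

Because the network predicts a residual rather than a whole new frame, corrections stay local to the colors the model has learned to recognize, which is the key difference from a global brightness or RGB adjustment that shifts every pixel in the frame.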

Learn more about how the Clemson team took advantage of the latest advances in deep learning to create ColorNet, and how they’re optimizing the model architecture’s layers and parameters to make real-world implementation possible. Read “Automated Brand-Color Accuracy for Real-Time Video” in the April SMPTE Motion Imaging Journal: https://ieeexplore.ieee.org/document/9395672

Tag(s): AI , Featured , News , color science
