
How AI is Changing TV Production for the Better

May 18, 2020


Advances in artificial intelligence (AI) and machine learning are giving broadcasters new ways to cover live events. AI systems are learning to create sequences of shots that appear natural to the viewer in live recordings. The same technology is proving useful for scouring large amounts of data for news stories, and for enhancing the experience of visually and hearing impaired viewers.

Single-camera outside broadcast

Some may worry that AI systems will replace human crews. In fact, the technology is currently being employed to cover events that would otherwise not be broadcast at all. The British Broadcasting Corporation (BBC) recently carried out research on the subject (AI in Production), using static ultra-high-definition (UHD) cameras to simulate a two- or three-camera setup.

The 4K resolution recorded by these cameras is much higher than is needed for broadcast. This means that different compositions can be created in production by simply cropping into the footage.
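The idea can be sketched in a few lines: a 4K UHD frame holds enough pixels that a full 1080p broadcast image can be cut from any region of it. The resolutions are real broadcast standards, but the `crop_shot` function and its coordinates are illustrative, not the BBC's implementation.

```python
import numpy as np

UHD_W, UHD_H = 3840, 2160      # source camera resolution (4K UHD)
OUT_W, OUT_H = 1920, 1080      # broadcast resolution (1080p)

def crop_shot(frame: np.ndarray, center_x: int, center_y: int) -> np.ndarray:
    """Cut a 1080p 'virtual camera' view out of a static UHD frame."""
    # Clamp the crop so it stays fully inside the source frame.
    x = min(max(center_x - OUT_W // 2, 0), UHD_W - OUT_W)
    y = min(max(center_y - OUT_H // 2, 0), UHD_H - OUT_H)
    return frame[y:y + OUT_H, x:x + OUT_W]

frame = np.zeros((UHD_H, UHD_W, 3), dtype=np.uint8)  # stand-in for a camera feed
shot = crop_shot(frame, center_x=2800, center_y=600)
print(shot.shape)  # (1080, 1920, 3)
```

Because every "camera angle" is just a different crop of the same full-resolution frame, the crop center can be moved between frames to simulate a pan or a cut with no physical camera movement.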

Using these low-cost static cameras allows smaller live events to be covered by a one-person crew. It also allows an outside broadcast team to cover a greater number of locations, including those that couldn't accommodate a larger setup. Different compositions and crops can be made in real time, so footage can even be broadcast live.

Auto framing and sequencing

AI is being used in conjunction with these UHD static cameras to automate both the required crops and the sequencing of shots from the feeds.

The system frames shots using basic composition rules that are second nature to camera operators, and it employs face detection technology to identify the subjects. A selection of wide, mid and close-up compositions can then be learned to suit the subjects and their surroundings.
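Rule-based framing of this kind can be sketched as pure geometry: given a detected face bounding box, derive close-up, mid and wide crop rectangles around it. The scale factors and headroom ratio below are invented for illustration and are not the BBC's actual composition rules.

```python
# Illustrative shot-framing rules keyed to a detected face box.
SHOT_SCALES = {"close": 2.5, "mid": 5.0, "wide": 10.0}  # crop height / face height
HEADROOM = 0.15  # leave ~15% of the crop height above the face

def frame_shot(face, shot="mid", aspect=16 / 9):
    """Return an (x, y, w, h) crop rectangle for the requested shot type."""
    fx, fy, fw, fh = face               # face bounding box (x, y, w, h)
    crop_h = fh * SHOT_SCALES[shot]     # crop size scales with apparent face size
    crop_w = crop_h * aspect
    cx = fx + fw / 2                    # center the crop horizontally on the face
    top = fy - HEADROOM * crop_h        # headroom rule: face sits near the top
    return (cx - crop_w / 2, top, crop_w, crop_h)

x, y, w, h = frame_shot((1700, 400, 120, 150), shot="close")
```

In a real system, the rectangle returned here would then be clamped to the UHD frame and used as the crop for that virtual camera.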

The AI system can be taught to put these shots into a sequence that feels natural to the viewer. This requires the system to consider the length of each shot and when the subject is speaking. Any reactions from the audience or other subjects are all considered.

In tests run in parallel with a human operator, the system lacked the finer judgment of an experienced camera operator. The results are still impressive, however, and the AI is still being refined. A camera operator knows to avoid distracting elements breaking the edge of the frame or uncomfortably close crops; without specific rules in place, the AI doesn't know to watch out for them.

Where AI live production works

At this time, AI technology is not refined enough for high-production-value output, but it does offer potential for budget broadcasting: a virtual multi-camera setup could replace a cheap one-camera broadcast.

This would benefit panel discussions, vloggers and other live broadcasts. With a little refinement, this technology can be used for live performances, such as comedy, music and even sports coverage.

Finding news with AI

The Japan Broadcasting Corporation (NHK) has developed a smart production system designed to comb social media and environmental monitoring systems and report on newsworthy situations.

The system is taught to look for certain words and phrases divided into a series of news categories. Matches are then grouped so the production team can view high-frequency reports. The system can also monitor water levels from government systems to warn of heavy rain and flooding.
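The keyword-and-category approach can be sketched simply: match posts against per-category keyword lists, then tally matches so producers can surface high-frequency topics. The categories and keywords below are invented examples, not NHK's actual vocabulary.

```python
from collections import Counter

# Hypothetical category -> keyword lists for news monitoring.
CATEGORIES = {
    "weather": ["flood", "heavy rain", "storm"],
    "traffic": ["accident", "road closed", "derailment"],
}

def categorize(posts):
    """Count how many posts match each news category."""
    counts = Counter()
    for post in posts:
        text = post.lower()
        for category, keywords in CATEGORIES.items():
            if any(k in text for k in keywords):
                counts[category] += 1
    return counts

posts = [
    "Heavy rain again, the river is rising fast",
    "Road closed near the bridge after an accident",
    "Flood warning issued for the east side",
]
print(categorize(posts))  # Counter({'weather': 2, 'traffic': 1})
```

A spike in one category's count is the signal a production team would investigate as a potential story.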

Advanced facial and speech recognition

Face detection systems are being used to help with the cataloging of actors in various TV shows. Advances in AI allow for better recognition, even in situations where lower lighting or obscure angles would otherwise make it difficult.

Speech recognition is also useful for cataloging news articles. The software can already auto-transcribe regular broadcast speech, but it still struggles with some interviews where the subject talks faster or less clearly.

AI for hearing and visually impaired viewers

AI can be used to create automated audio descriptions of live programs, such as sports broadcasts, for visually impaired viewers. This generates a script of the game's progress to accompany the audio broadcast.

The technology can also be used to create audio commentary for sports for radio or television using a speech synthesizer. This has uses for both sports and news reports where facts and statistics can easily be generated. Speech synthesis technology is rapidly improving, and a natural voice tone is now possible.
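Generating commentary from facts and statistics is essentially a templating problem: structured game events are turned into sentences that a speech synthesizer can read out. The event format and phrasing templates below are invented for this example.

```python
# Hypothetical templates mapping event types to spoken commentary lines.
TEMPLATES = {
    "goal": "{player} scores for {team}! It's now {score}.",
    "substitution": "{team} brings on {player}.",
}

def describe(event):
    """Turn one structured game event into a line of commentary."""
    return TEMPLATES[event["type"]].format(**event)

event = {"type": "goal", "player": "Sato", "team": "Tokyo", "score": "2-1"}
print(describe(event))  # Sato scores for Tokyo! It's now 2-1.
```

The resulting script would then be handed to a speech synthesizer; the naturalness of the output depends on the synthesis voice rather than on this text-generation step.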

Computer-generated sign language is also possible as a way of displaying broadcast information. Rather than a live signer, a CG animated character can now relay the content into sign language.

Other uses of AI in broadcast

The uses for AI and machine learning in broadcast are constantly expanding to enhance the viewer experience. AI is even being used to speed up the recoloring of black-and-white footage, drawing on archive material and color information from similar frames. This reduces the time needed to colorize five seconds of film from 30 minutes to 30 seconds.

Discover more about the companies using AI in video production in the March 2020 edition of the SMPTE Motion Imaging Journal.


Anthony Gentile
