
Artificial Intelligence Creates Smart New Opportunities in Live Sports

April 19, 2021

At the NAB 2020 Broadcast Engineering and Technical Conference, Adrish Bera, senior vice president of artificial intelligence and machine learning at Prime Focus Technologies in Burbank, CA, presented a white paper on “Artificial Intelligence: Transforming the Live Sports Landscape.” In it, Bera outlines a compelling view of how dramatically artificial intelligence (AI) and machine learning (ML) promise to revolutionize sports with immersive experiences that bring fans closer to the real-time action. Broadcasters and fans can look forward to:

  • Content discovery on a previously untapped scale as the producer’s active/live events and entire sports archive become searchable
  • Massive efficiencies in the creation of game highlight packages which can be used within the broadcast, or published quickly to social media
  • Easy retrieval of deep content for new, monetizable projects like career retrospectives
  • User-specific interactive experiences with metadata overlays, video-clips added to scorecards, and near-instantaneous highlight playback

Machine learning depends on a large volume of training data to build a classification algorithm whose predictions improve with each added set of test data. To deliver the most benefit from AI/ML, a sport must be tagged exhaustively and with extremely high accuracy: painstakingly sifting through hundreds of hours of game footage, segment by segment, annotating frames, objects, actions and more to generate metadata for machine learning. Bera notes that, “The more details we tag in a game, the richer these downstream use cases.”
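As a rough illustration of the kind of frame-level metadata such tagging might produce, consider a minimal annotation record flattened into a labeled training example. The field names and event labels here are hypothetical, invented for illustration, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class FrameAnnotation:
    """One tagged segment of game footage; all field names are illustrative."""
    start_frame: int
    end_frame: int
    objects: list[str] = field(default_factory=list)   # e.g. ["ball", "goal_post"]
    players: list[str] = field(default_factory=list)   # players identified in the shot
    action: str = ""                                   # e.g. "free_kick", "goal"

def to_training_record(a: FrameAnnotation) -> dict:
    """Flatten an annotation into a labeled record for model training."""
    return {
        "span": (a.start_frame, a.end_frame),
        "labels": a.objects + a.players + ([a.action] if a.action else []),
    }

ann = FrameAnnotation(1200, 1350, objects=["ball"], players=["player_10"], action="goal")
record = to_training_record(ann)
```

Tagging at this granularity, across hundreds of hours of footage, is what makes the downstream search and highlight use cases possible.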

Also required for machine learning are basic logic and rules -- with specific models built for specific sports and their production methods -- for the machine to mimic human cognition. A wide variety of classification and cognition engines are used in tandem to “discern” the game from different perspectives, with their results then “stitched together” using game logic and understanding of sports production to catalogue the game.

These cognition engines might perform detection of simple or complex game objects (like the ball, elements of the field, and player formations), facial recognition of the players, optical character recognition to read the score, tracking of object movements such as the ball's trajectory, audio analysis of elements like the contact of a bat with the ball or crowd noise, and natural language processing to transcribe the game commentary.
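One way to picture the “stitching together” of results from these separate engines is to merge their time-stamped detections into a single chronological game timeline. This is only a sketch under assumed data shapes; the engine names and event strings are invented:

```python
def stitch_timeline(engine_outputs: dict[str, list[tuple[float, str]]]) -> list[tuple[float, str, str]]:
    """Merge (timestamp, event) lists from several cognition engines
    into one chronologically ordered timeline tagged by source engine."""
    timeline = []
    for engine, events in engine_outputs.items():
        for ts, event in events:
            timeline.append((ts, engine, event))
    return sorted(timeline)  # tuples sort by timestamp first

outputs = {
    "object_detection": [(12.4, "ball_in_net")],
    "ocr_scoreboard":   [(13.0, "score 1-0")],
    "audio":            [(12.5, "crowd_cheer")],
}
timeline = stitch_timeline(outputs)
```

Game logic would then run over this merged timeline, e.g. recognizing that a ball-in-net detection followed by a crowd cheer and a scoreboard change likely marks a goal.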

The accuracy of the algorithms is good, and getting better with improved resolutions and capture of video at higher frame rates. Meanwhile, more nuanced applications, like AI-generated highlight packages, use machine-learnable rules built from past game highlights to tell a compelling story much as a human editor might. These too will improve over time as the engine learns what works and what doesn’t, along with unique situational variations.
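Such learned rules might take the simple form of weights over detected event types, with a segment included in the highlight package when its score crosses a threshold. The events, weights, and threshold below are purely illustrative assumptions, not the paper's method:

```python
# Hypothetical learned weights per event type (in practice these would be
# fitted from past human-edited highlight reels).
EVENT_WEIGHTS = {"goal": 5.0, "crowd_cheer": 1.5, "near_miss": 2.5, "throw_in": 0.2}

def score_segment(events: list[str]) -> float:
    """Sum the learned weights of the events detected in a segment."""
    return sum(EVENT_WEIGHTS.get(e, 0.0) for e in events)

def pick_highlights(segments: dict[str, list[str]], threshold: float = 3.0) -> list[str]:
    """Keep segments whose score meets the threshold, highest-scoring first."""
    scored = [(score_segment(ev), seg) for seg, ev in segments.items()]
    return [seg for s, seg in sorted(scored, reverse=True) if s >= threshold]

segments = {
    "seg_a": ["throw_in"],
    "seg_b": ["near_miss", "crowd_cheer"],
    "seg_c": ["goal", "crowd_cheer"],
}
highlights = pick_highlights(segments)  # ["seg_c", "seg_b"]
```

A real system would learn far richer situational rules, but the idea of ranking candidate segments against patterns extracted from past highlights is the same.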

The potential for using AI and ML in the live sports arena is enormous: reducing the cost of operations by automating routine tasks, increasing monetization from ads and archival content, and improving viewer engagement through enhanced storytelling. For a deeper dive into all the possibilities, read the complete white paper in April’s SMPTE Motion Imaging Journal: https://ieeexplore.ieee.org/document/9395670
