As you’ve probably noticed, the conversation around immersive audio in M&E has moved past "if" and well into "how fast." The transition to baseline capability did not result from a single technical breakthrough. It emerged from the convergence of several factors: standardized delivery frameworks, IP-based distribution, broader device support, and maturing production pipelines. As a result, immersive audio is now integrated into the primary specifications used to deliver content across M&E.
Broadcast Standards & Next-Gen Audio
ATSC 3.0 and DVB standards incorporate Next Generation Audio (NGA) frameworks that support object-based, channel-based, and scene-based audio. These frameworks are implemented through codecs such as Dolby AC-4 and MPEG-H Audio, both of which enable metadata-driven rendering and listener personalization. ATSC 3.0 deployments in the United States, South Korea, Brazil (as DTV+/TV 3.0), and the Caribbean include NGA as part of the system architecture. Similarly, DVB specifications include MPEG-H Audio as a supported format within European broadcast environments.
These systems enable features such as dialog level adjustment, alternate language selection, and device-adaptive rendering. In live production contexts, particularly sports, immersive audio is increasingly included alongside UHD video workflows. NBC’s 4K HDR coverage of the 2026 Milano Cortina Winter Olympics and Super Bowl LX included Dolby Atmos delivery on Peacock. Brazil’s DTV+ demonstrations at CES 2026 showcased MPEG-H immersive audio as a core platform feature. The FIFA World Cup 2026, slated for delivery across both broadcast and OTT pathways, represents the next major deployment milestone.
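To make the personalization model concrete, here is a simplified sketch of how metadata-driven rendering works conceptually. This is not a real MPEG-H or AC-4 API; the object fields, function, and renderer behavior are hypothetical illustrations of the idea that the decoder receives separate audio objects plus metadata, and applies listener preferences (dialog boost, language choice) at playback time rather than baking them into the mix.

```python
from dataclasses import dataclass, replace
from typing import Optional

# Hypothetical object-based audio metadata, loosely modeled on the
# personalization features NGA codecs expose (dialog gain, language
# selection). Field names are illustrative, not from any real spec.

@dataclass(frozen=True)
class AudioObject:
    label: str                # e.g. "dialog", "crowd", "music"
    language: Optional[str]   # language code for dialog objects, else None
    gain_db: float            # authored gain; listener offsets apply on top

def personalize(objects, preferred_language="en", dialog_boost_db=6.0):
    """Keep only the dialog object matching the listener's language,
    apply a listener-controlled dialog gain offset, and pass every
    non-dialog object through unchanged."""
    out = []
    for obj in objects:
        if obj.label == "dialog":
            if obj.language != preferred_language:
                continue  # drop alternate-language dialog objects
            obj = replace(obj, gain_db=obj.gain_db + dialog_boost_db)
        out.append(obj)
    return out

# A Spanish-speaking listener with a +4 dB dialog preference:
mix = [
    AudioObject("dialog", "en", 0.0),
    AudioObject("dialog", "es", 0.0),
    AudioObject("crowd", None, -3.0),
]
rendered = personalize(mix, preferred_language="es", dialog_boost_db=4.0)
```

The key point the sketch captures is that the broadcaster ships one stream containing all objects, and each receiver renders a different mix from the same metadata, which is exactly what makes these features impossible in a purely channel-based pipeline.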
Open and Proprietary Format Development
The development that deserves the most attention right now is not the continued expansion of established ecosystems like Dolby Atmos — it’s the rapid proliferation of new format frameworks, both open and proprietary, that are collectively eliminating the last structural barriers to universal spatial audio delivery.
The Alliance for Open Media’s Immersive Audio Model and Formats (IAMF) specification, released in late 2023, laid the groundwork. Its commercial realization, Eclipsa Audio, developed by Google and Samsung with Arm optimization, moved from CES announcement to shipping product in under twelve months. Samsung’s entire 2025 TV lineup supports it natively. LG followed with IAMF support across its 2026 range and firmware updates for select 2025 models. Google has committed to OS-level Eclipsa playback in Android 16, Chrome, and Google TV. YouTube already accepts Eclipsa uploads, and a free Pro Tools plugin means creators can author spatial mixes without licensing overhead. The trajectory here mirrors AV1 for video: royalty-free specification, broad OEM pre-installation, platform-level distribution. The economics are no longer theoretical. They are deployed.
Apple’s response confirms the thesis from the opposite direction. ASAF (Apple Spatial Audio Format) and its companion codec APAC, announced at WWDC 2025, take a deliberately proprietary approach. The format is confined to Apple’s ecosystem and optimized for position-aware rendering on Vision Pro. It does not participate in AOM’s open licensing model. But the strategic signal is identical: Apple concluded that a first-party immersive audio format was a platform-level necessity. When both the open-standards coalition and the world’s most vertically integrated device company arrive at the same conclusion independently, the market has spoken.
The Road Ahead
What makes this moment consequential for M&E professionals is not any single format’s technical merits — it is the fact that format availability is no longer the bottleneck. ATSC 3.0 and DVB carry NGA natively. Eclipsa removes the licensing barrier for independent creators. ASAF ensures the Apple ecosystem is covered. Dolby Atmos remains entrenched across OTT and cinema. The constraint has shifted decisively from distribution and playback capability to production workflow readiness, particularly in live environments where many facilities still operate channel-based infrastructures that predate the object-based paradigm.
For organizations evaluating their audio strategy, the implication is direct: Immersive is no longer a differentiator to invest in selectively. It is infrastructure to maintain. The format landscape will continue to fragment and consolidate in cycles, but the underlying expectation — that spatial audio is a standard component of content delivery — is settled.
Now comes the operational question: Can production pipelines keep pace with what the distribution chain already supports?
Learn more at Immersv.net!