Reflections from the 2026 HPA Tech Retreat

Written by Andy Beach | Feb 26, 2026 1:19:21 PM

When AI Becomes an Operating Model

This year's HPA Tech Retreat did not revolve around whether AI works. That question is effectively settled. The demonstrations are compelling, the case studies are real, and the tools are already embedded across parts of the production lifecycle. What shifted this year was the level of seriousness. The conversation has moved from capability to integration, from experimentation to operating model.

I was in the room for Tuesday’s “WTF Is Going On in the Media Industry?” Supersession, and the tone was notably optimistic. There was real enthusiasm around what generative tools are unlocking, whether it’s smaller teams producing higher-fidelity work, independent creators accessing capabilities that once required studio-scale budgets, or faster iteration cycles across concept, previs, editorial, and VFX. The idea of AI as an amplifier of imagination came up repeatedly, and not in a defensive way. It felt less like a provocation and more like an accepted premise. But the optimism was anchored by discipline.

Ed Ulbrich, Head of Media and Entertainment at Moonvalley, put it plainly during his opening keynote: “There is no AI currently that understands what is good. That’s still on us.” He followed that with a sharper distinction: “AI is probabilistic. Great storytelling is not.” That framing clarified the boundary. AI systems predict and generate based on patterns. Storytelling often depends on decisions that are not statistically safe, on creative risk, on judgment calls that require taste and accountability. The room did not push back on that idea. If anything, it leaned into it. There was a clear understanding that generative capability does not equal authorship.

Another theme that surfaced, sometimes explicitly and sometimes indirectly, was that media production is beginning to behave more like software development. Cycles are compressing. Iteration is continuous. Teams are smaller. Concepts move to execution faster. That shift changes how leverage works inside a production pipeline. When content begins to “ship, learn, and iterate,” the infrastructure supporting it must evolve accordingly. Software ecosystems require version control, documentation, governance, and interoperability. As AI embeds across pre-production, editorial, VFX, and finishing, media workflows inherit similar complexity. The tools may be creative, but the system dynamics are increasingly technical.

The anxiety in the room was not about whether AI can generate convincing images. We have already crossed that threshold. Several demonstrations and discussions, including the conversations around high-fidelity generative video systems like Seedance, made it clear that synthetic media is no longer easily distinguishable from live action in many contexts. The technical capability is advancing quickly, and few in the room were arguing otherwise.

The more candid discussion centered on agency. As many noted, what unsettles people is not the technology itself but the potential loss of relevance and control. Once realism is achievable, the questions shift upstream. Who decides what is good? Who owns the iteration cycle? Who benefits economically? How is authorship documented? How is consent tracked? Those questions are not theoretical. They surfaced in compositing sessions, in discussions about lineage and logging, and in repeated references to consent, compensation, credit, and commercial safety. Ethics were not discussed as branding language. They were described as operational requirements.

Cost compression was another recurring thread. Several examples highlighted dramatic reductions in time and team size. Projects that would have struggled to secure financing under traditional cost structures are becoming viable through AI-assisted workflows. That is meaningful. But compression alone does not determine outcomes. Faster cycles can translate into creative freedom, or they can translate into smaller crews doing more work under tighter budgets. AI shifts leverage; it does not decide where that leverage ultimately settles. The economic model is still in motion.

One of the quieter but more consequential themes of the week was interoperability. It was clear from both formal sessions and informal conversations that no single vendor will provide an end-to-end AI production stack. Studios are assembling ecosystems: generative engines, asset management systems, editorial tools, VFX pipelines, sound models, cloud infrastructure, and governance layers. In that environment, metadata becomes structural. If lineage does not travel with assets, authorship becomes opaque. If consent and credit are not traceable, trust degrades. If systems do not interoperate, workflows fragment. Human-centered design, as referenced throughout the Retreat, is not simply about protecting artists from automation. It is about preserving agency and accountability inside increasingly automated systems.
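To make the "metadata becomes structural" point concrete, here is a minimal sketch of what lineage metadata traveling with an asset could look like. This is purely illustrative: every class and field name below is a hypothetical assumption, not a reference to any existing standard or vendor schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch: lineage, consent, and credit data carried with an
# asset as it moves between tools. All names here are illustrative.

@dataclass
class LineageEvent:
    tool: str       # which system touched the asset (generative engine, editorial, etc.)
    operation: str  # what was done
    operator: str   # the accountable human or system identity

@dataclass
class AssetRecord:
    asset_id: str
    consent_refs: List[str] = field(default_factory=list)  # links to consent documentation
    credits: List[str] = field(default_factory=list)       # attribution entries
    lineage: List[LineageEvent] = field(default_factory=list)

    def record(self, tool: str, operation: str, operator: str) -> None:
        """Append a lineage event so authorship stays traceable across tools."""
        self.lineage.append(LineageEvent(tool, operation, operator))

# Example: an asset passes through a generative step, then an editorial step.
asset = AssetRecord(asset_id="shot-042", consent_refs=["consent/talent-007"])
asset.record("gen-engine", "background-extension", "artist:jlee")
asset.record("editorial", "conform", "editor:mchan")
print(len(asset.lineage))  # 2
```

The design point is the one the sessions kept returning to: if every tool in the ecosystem appends to (rather than discards) a record like this, authorship and consent remain inspectable even when no single vendor owns the pipeline.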

The shift from “Can AI generate this?” to “How do we govern, document, and integrate this?” signals that we have entered a new phase. This is no longer an experimental edge case. When AI is embedded across multiple lifecycle stages and across multiple vendors, coordination becomes essential. That is where standards bodies have a meaningful role to play. SMPTE does not need to define creativity. It does need to support the frameworks that make creative work portable, interoperable, and durable in an AI-enabled environment. Metadata continuity, authorship tracking, and system interoperability are no longer background technical details. They are prerequisites for trust at scale.

The overall mood at the Retreat was neither apocalyptic nor dismissive. AI is here, it is improving rapidly, and it is being integrated into real production environments. At the same time, the most important responsibility remains human. AI does not understand what is good. It does not assume accountability for risk. It does not defend a creative decision. That remains the domain of professionals.

The week did not end with a conclusion about where the industry will land. It did make one thing clear: we are no longer debating whether AI belongs in the pipeline. We are determining how to structure the pipeline around it without losing authorship, coherence, and trust. That is an operational challenge, not a philosophical one. And it is now underway.