<img height="1" width="1" src="https://www.facebook.com/tr?id=414634002484912&amp;ev=PageView%20&amp;noscript=1">
Donate
Mark Your Calendar! April 26th booth selection meeting for MTS2024
Donate

Frame-rate manipulation for the file-age

October 23, 2014

Bruce Devlin, Chief Scientist at Dalet, and Simon Adler, Market Development for Dalet and AmberFin solutions, addressed the topic of frame-rate manipulation in the file-based environment. Devlin told the tale of being accused by a professor of "reinventing the wheel" when he came up with a solution that had already been invented. Devlin pointed out that we need lots of different kinds of wheels to solve many different kinds of transportation problems. "There's a relationship between the tools and the application where that problem lives," he noted.

Tools change and so do requirements, Adler and Devlin noted. "We have all these shooting formats now," said Adler. "Even if our transmission chain is interlaced, it ends up progressive, and we need to think about that." Devlin pointed out problems in communication. "When somebody says they're shooting at 30 fps, what do they mean?" he asked. "It probably means they have a 1080-line sensor with a lens delivery system, a time base of 30/1.001 interlaced frames per second..." The field has accumulated many different formats, which gives us today's common TV interchange formats. "Frame rate conversion is simply converting files and streams with a mix of these source formats into one output...without the viewer knowing," said Adler.
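A quick, hedged illustration of the time-base point: "30/1.001" is the NTSC-style rational rate, exactly 30000/1001 pictures per second, usually written 29.97. The numbers and rates below are only there to make the mismatch between common interchange rates concrete; nothing here is from the speakers' slides.

```python
# A minimal sketch of the time-base arithmetic behind "30/1.001".
from fractions import Fraction

rate_ntsc = Fraction(30000, 1001)   # "29.97i" interlaced picture rate
rate_pal  = Fraction(25, 1)         # "25i" interlaced picture rate

print(float(rate_ntsc))             # 29.97002997... pictures per second
print(float(rate_ntsc) * 2)         # 59.94 fields per second

# Converting one hour of 25 fps material to 29.97 fps means synthesizing
# roughly this many pictures that never existed in the source:
extra = (rate_ntsc - rate_pal) * 3600
print(float(extra))                 # ~17892 new pictures per hour
```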

An oversimplified problem statement: "The source of our content is a lens delivery system/sensor and a digitization system with a timing system in the middle. We have to make up pictures that never existed in a way that viewers don't know they're made up," said Devlin. Bottom line: the "really over-simplified problem statement" is in fact extraordinarily complex. With an HD display and perfect freeze-frame capability held within a few inches of their faces, noted Devlin, viewers can easily see artifacts such as poor de-interlacing, de-interlace failure, field blend, edge failure and detail failure. "We need a toolbox with many reinvented wheels so I can use the right tool at the right time," concluded Adler.
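For readers unfamiliar with the "field blend" artifact mentioned above, here is a minimal sketch (not any vendor's algorithm, and not what the speakers presented) of the naive blend de-interlace that produces it: the two fields of an interlaced frame were captured a field period apart, so averaging them smears anything that moved in between.

```python
# A minimal, illustrative field-blend de-interlace; names are assumptions.
import numpy as np

def deinterlace_field_blend(frame: np.ndarray) -> np.ndarray:
    """frame: (height, width) luma array with interleaved top/bottom fields."""
    top = frame[0::2, :].astype(np.float32)      # field 1 (even lines)
    bottom = frame[1::2, :].astype(np.float32)   # field 2 (odd lines)
    blended = (top + bottom) / 2.0               # temporal blend -> ghosting on motion
    # Repeat each blended line to restore full vertical resolution.
    return np.repeat(blended, 2, axis=0).astype(frame.dtype)

# On a static scene the result looks fine; on motion, the ghosted edges are
# exactly what a freeze-framed HD display held close to the face reveals.
```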

Devlin referred to a paper he wrote on frame rate conversion tools. "What's changing today is we're moving into different software solutions that can be used on the GPU, CPU and cloud," he said. "And that changes the economics. Everyone's margins are being squeezed and we have to take that into consideration." He showed a chart of user modes, and then took a deeper dive into a handful of use cases. "Possibly one of the most common use cases is content shot for TV at 1080 25i or 1080 29.97i," he said. "There are a lot of quality metrics about what's good or not. The algorithms are very complicated. But if you're a sports producer you just want it to work." They also looked at issues that arise with movies edited for TV. Things have also gotten more complicated with OTT distribution, said Devlin. "Choose your artifacts, because there will always be artifacts," said Adler.
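To make the 25i-to-29.97i case concrete, here is a hedged sketch of the crudest tool in the toolbox: repeating or dropping the nearest source frame for each output frame. It is fast and cheap but judders on motion, which is why motion-compensated converters (and their quality metrics and cost) exist at all. The function and rates are illustrative, not taken from Devlin's paper.

```python
# A minimal nearest-frame rate-conversion sketch; assumes simple repeat/drop.
from fractions import Fraction

def nearest_frame_map(n_out: int, src_rate: Fraction, dst_rate: Fraction) -> list[int]:
    """For each output frame index, pick the source frame nearest in time."""
    return [round(i * src_rate / dst_rate) for i in range(n_out)]

# One second of 25 fps source rendered at 29.97 (30000/1001) fps:
mapping = nearest_frame_map(30, Fraction(25), Fraction(30000, 1001))
print(mapping)  # some source frames appear twice -> visible judder on pans
```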

Devlin showed a table outlining "quality vs cost" including CAPEX and OPEX costs. "Not all tools can be put in the cloud," he said. "Some tools may not give great picture quality but they're very fast. If it's breaking news and you just need it converted to your frame rate, speed may be more important for the first conversion. Trying to decide whether quality or cost is the best model isn't the right question. You need to be able to choose the right tool for the right time to meet your business needs."
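Purely to illustrate the "right tool for the right time" point, the toy ranking below picks a converter by whatever the job values most. The tool names, scores and costs are invented for this sketch; they are not products, benchmarks or figures from the table Devlin showed.

```python
# Hypothetical converters ranked by what the business needs right now.
tools = [
    {"name": "fast_blend",   "quality": 2, "speed": 9, "opex_per_hour": 1.0},
    {"name": "motion_comp",  "quality": 8, "speed": 3, "opex_per_hour": 6.0},
    {"name": "hw_appliance", "quality": 9, "speed": 7, "opex_per_hour": 0.5},  # but high CAPEX
]

def pick_tool(priority: str) -> dict:
    """priority: 'speed' for breaking news, 'quality' for an archive master."""
    return max(tools, key=lambda t: t[priority])

print(pick_tool("speed")["name"])    # breaking news: fast_blend
print(pick_tool("quality")["name"])  # archive master: hw_appliance
```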

The future? There's no sign it's going to get any easier, Adler and Devlin said. "By HPA 2015, we might have some more thoughts," Adler concluded. Stay tuned!

Tag(s): Lens, OTT, OPEX, CAPEX, HPA

Debra Kaufman
