<img height="1" width="1" src="https://www.facebook.com/tr?id=414634002484912&amp;ev=PageView%20&amp;noscript=1">
Donate
Check out SMPTE at NABSHOW24
Donate

Delivering the future

May 15, 2012

It is clear that the growing demand for content and platforms means that we need to make the best use of the available bandwidth if every member of an increasingly diverse audience is going to be satisfied. The fourth session of the SMPTE/EBU forum on emerging technologies looked at delivering the future.

Appropriately, the first speaker was Leonardo Chiariglione, widely regarded as the father of MPEG, and still a leading force in developments in delivery compression. He started his look forward by looking back, to see what the expectations were then and how they have come to pass.

Twenty years ago the thinking was that many technologies would be tried and the fittest would survive. In fact the Darwinian process did not take place: MPEG established a technology and everyone followed. Chiariglione suggested that the lesson we should take from that is that, in lean times, it is better to make inexpensive bets.

He went on to recall some less than successful developments of the era. The interactive CD seemed like a good idea, but failed utterly, as did the digital compact cassette. He argued that DAB was also not a success: the FM radio system was probably good enough, and fighting against it was a mistake.

On the other hand, MPEG-2 for digital television was a great example of a good technology put to good use: efficient and effective, even if there was room for improvement.

And although there were strong arguments against even including Audio Layer III in the MPEG-1 standard, because high-compression audio was seen at the time as too complicated to implement, today we cannot imagine a world without MP3 players. The business that took the MP3 and ran with it is now the biggest company in the world.

MPEG-4 succeeded for fixed and mobile internet; MPEG-7 failed as a metadata standard. The lessons learned may give us a guide to the future, Dr Chiariglione suggested.

Next was Albert Heuberger of the Fraunhofer Institute. He looked at today's complex environment, with a wide range of production formats and resolutions feeding an increasingly diverse set of delivery platforms.

The first step is to move to a content-agnostic production environment, with camera and sensor format metadata carried along to inform downstream processing. Early on, this is likely to mean simple, light-touch compression so that content can be widely exchanged, along with object-based audio. Previews will be in the cloud. Enhancements like colour decision lists will travel in the metadata. Virtual processing workspaces will allow multiple users to collaborate around a common mastering format, from which the deliverables will be created, adaptable to any consumer device.
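
As a concrete illustration of colour decisions travelling as metadata, the short Python sketch below applies an ASC CDL (slope, offset, power and saturation) to a pixel downstream of acquisition. The grade values and the sample pixel are invented for illustration; they stand in for whatever a real pipeline would carry alongside the clip.

```python
# Minimal sketch: applying an ASC Colour Decision List (CDL) carried as metadata.
# The slope/offset/power/saturation values below are invented for illustration;
# in a real pipeline they would travel with the clip and be applied downstream.

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """Apply ASC CDL: out = clamp(in * slope + offset) ** power, then saturation."""
    graded = []
    for value, s, o, p in zip(rgb, slope, offset, power):
        v = max(0.0, value * s + o)   # slope, offset, clamp at zero
        graded.append(v ** p)         # power
    # Rec. 709 luma weights for the saturation step
    luma = 0.2126 * graded[0] + 0.7152 * graded[1] + 0.0722 * graded[2]
    return [max(0.0, luma + saturation * (c - luma)) for c in graded]

# Hypothetical colour decision attached to a clip's metadata
cdl = {"slope": (1.1, 1.0, 0.95), "offset": (0.01, 0.0, -0.02),
       "power": (0.95, 1.0, 1.05), "saturation": 1.2}

pixel = [0.40, 0.35, 0.30]  # linear RGB sample
print(apply_cdl(pixel, cdl["slope"], cdl["offset"], cdl["power"], cdl["saturation"]))
```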

This is where the next generation of compression will start, with high-efficiency codecs using the latest gains in processor power to reduce bandwidth requirements while boosting quality. High-efficiency coding is as appropriate to audio as it is to video, not least because it means content can be delivered even over crowded mobile networks.

Dr Heuberger also talked about MPEG-DASH, the new standard for Dynamic Adaptive Streaming over HTTP. The goal of MPEG-DASH is to provide a common streaming solution that always delivers the best possible quality whatever the capacity of the bearer, filling the gap created by competing proprietary formats.
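
A minimal sketch of the client-side idea behind this kind of adaptive streaming: the same content is offered in several representations, and the player picks, segment by segment, the highest bitrate the measured throughput can sustain. The representation list, bitrates and safety margin below are hypothetical, not taken from the MPEG-DASH specification.

```python
# Illustrative sketch of DASH-style adaptive streaming on the client side:
# the manifest advertises several representations of the same content, and the
# player picks, per segment, the highest bitrate the measured throughput supports.

representations = [  # (name, bitrate in bits per second), as a manifest might list them
    ("240p", 400_000),
    ("480p", 1_200_000),
    ("720p", 2_500_000),
    ("1080p", 5_000_000),
]

def choose_representation(measured_throughput_bps, margin=0.8):
    """Pick the best representation that fits within a fraction of measured throughput."""
    budget = measured_throughput_bps * margin
    viable = [r for r in representations if r[1] <= budget]
    return max(viable, key=lambda r: r[1]) if viable else representations[0]

for throughput in (600_000, 3_000_000, 10_000_000):
    name, bitrate = choose_representation(throughput)
    print(f"{throughput / 1e6:.1f} Mbit/s link -> {name} at {bitrate / 1e6:.1f} Mbit/s")
```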

Another interesting audio facility is the dialogue enhancement service, which allows each user to alter the balance between speech channels and effects, for instance to enhance or reduce the natural sounds of a sporting event against the commentary. It is all part of the future move towards more immersive viewing and listening experiences on all devices.
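
A rough sketch of that idea, assuming speech and ambience arrive as separate stems that are mixed at the receiver with user-controlled gains; the sample values and gain figures below are purely illustrative.

```python
# Sketch of the idea behind a dialogue enhancement service: speech and ambience
# arrive as separate stems (audio objects) and are mixed at the receiver with a
# user-controlled balance. The sample blocks and gains are purely illustrative.

def mix(speech, effects, dialogue_gain_db=0.0, effects_gain_db=0.0):
    """Mix two stems sample by sample, applying user gains given in decibels."""
    g_speech = 10 ** (dialogue_gain_db / 20)
    g_fx = 10 ** (effects_gain_db / 20)
    return [g_speech * s + g_fx * e for s, e in zip(speech, effects)]

commentary = [0.20, 0.25, 0.22, 0.18]   # hypothetical commentary samples
crowd      = [0.30, 0.35, 0.40, 0.33]   # hypothetical stadium ambience samples

# Viewer boosts the commentary by 6 dB and pulls the crowd down by 6 dB
print(mix(commentary, crowd, dialogue_gain_db=+6.0, effects_gain_db=-6.0))
```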

From Ericsson, Giles Wilson talked about his company's ConsumerLab, which researches views on devices, services and future requirements, carrying out 80,000 interviews a year around the world to build a good understanding of what people think. The most recent research suggests that consumers are no longer thinking about devices but about the viewing experience.

The research identified, for instance, six roles for the tablet around video consumption. These included a smarter remote control and a tool for discovery, as well as a viewing screen in its own right. Similarly, the smartphone has a range of applications, of which actually watching content is regarded as a last resort, used only when no other screen is available.

More than 40% of all consumers, according to Ericsson research, now use social media from the television sofa. This is no longer a young person's phenomenon. Across all age groups, viewing behaviour both triggers and is triggered by social interaction online.

One other very interesting finding was that consumers consistently put quality – of content and of service delivery – at the top of their list of expectations. They will watch video on a smartphone, but only if they have to. OTT streamed video is acceptable only if it is the only access to the content. Consumers like big-screen televisions; they want their content on them, and they want it to look good.

All of this leads to a continuing and growing demand for network capacity. By 2015, 90% of all network traffic is expected to be video, so codec improvement is an absolute necessity. The next-generation codec, HEVC, is already showing a 53% improvement over AVC.
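
Reading that figure as a bitrate reduction at comparable quality, some back-of-the-envelope arithmetic shows what it means for capacity; the service bitrate and link capacity in the sketch below are hypothetical examples, not values from the presentation.

```python
# Back-of-the-envelope arithmetic: reading "53% improvement" as a bitrate
# reduction at comparable quality, the same link carries roughly twice as
# many channels. The AVC service bitrate and link capacity are hypothetical.

avc_bitrate_mbps = 8.0                 # hypothetical HD service encoded with AVC
reduction = 0.53                       # claimed bitrate saving for HEVC
hevc_bitrate_mbps = avc_bitrate_mbps * (1 - reduction)

link_capacity_mbps = 40.0              # hypothetical distribution link
print(f"HEVC bitrate: {hevc_bitrate_mbps:.2f} Mbit/s")
print(f"Channels per link with AVC:  {int(link_capacity_mbps // avc_bitrate_mbps)}")
print(f"Channels per link with HEVC: {int(link_capacity_mbps // hevc_bitrate_mbps)}")
```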

Finally, Mark Richer of ATSC took the stage. His presentation was called "a bright future for terrestrial broadcasting", and he put forward the case that conventional broadcasting is still relevant, not least because it is infinitely scalable: it does not matter how many people are tuned in, it delivers the same quality of service.

ATSC 2.0 as a delivery platform has just been ratified, with full backwards compatibility with existing transmission systems, and will be rolled out in the coming months and years. In parallel, ATSC 3.0 is being developed, which will be a complete clean-sheet start.

On top of that, ATSC is a leading player in FoBTV, the Future of Broadcast Television initiative, a plan to develop a new digital terrestrial format that will be accepted worldwide. Last November – at 11:11:11 on 11/11/11 to be precise – FoBTV published a declaration that broadcasting is the most spectrum-efficient delivery medium, and that broadcasters and technology bodies should come together to pursue improved standards globally.

Given this global commitment, not only will the new standards be the most robust, they will also benefit from mass production slashing costs. It also means that mobile and handheld devices, which move freely around the world, will work wherever they go. The next-generation systems should focus on broadcasting to devices that are on the move.

FoBTV is not a new standards organisation: it aims to work with standards bodies to achieve its global goals. Its chair is Mark Richer of ATSC, with Phil Laven of DVB as deputy chair. Richer suggested that this is a defining moment for terrestrial broadcasting.

In response to a question, Giles Wilson suggested that the demand for new content delivery has always run ahead of the gains in coding efficiency, and he fears that this will always be the case. Leonardo Chiariglione, on the contrary, suggested that he could see no sign of compression running into a brick wall in the near future.

Albert Heuberger thought that there will be new ways of identifying redundancy in the signal which can be exploited in the future to achieve new coding efficiencies. Researchers still have a wide field ahead of them, he said.

Commenting on the search for standardisation, Giles Wilson referred back to one of the students describing the internet as the Wild West. He certainly agreed with that as a characterisation, but pointed out that despite the apparent lawlessness it had managed its standards much better than broadcast.

SMPTE Staff
