Hot Button Discussion
By Michael Goldman
For those interested in video distribution and broadcast advancements, the arrival of the new High Efficiency Video Coding (HEVC) standard, likely (but not yet officially) to be dubbed H.265, is an exciting development. The reason is evident in the word "efficiency" in the name. The notion is that the new standard, expected to be formally released in early 2013, will be twice as efficient as the last great leap in video coding schemes from about a decade ago, Advanced Video Coding (also known as MPEG-4 AVC and H.264). In a world where there is unending demand for video content to be delivered more efficiently, and to look and sound better than ever while traveling to a broader range of devices, significant bitrate savings mean everything to both those sending and those viewing broadcast signals.
In fact, as Matthew Goldman, Senior VP of TV Compression Technology at Ericsson, a SMPTE Fellow and member of the SMPTE Board of Governors, views it, the industry has reached another touchstone moment in compression, roughly ten years after AVC's arrival, which itself came about ten years after MPEG-2 Video.
"There is a reason there are significant improvements in video compression about every ten years: Dr. Moore's Law," Goldman suggests. "There is a doubling of transistor density every 18 to 24 months, and because of that, you can get more memory to store more intermediate results and more processing power, which lets you do more operations in parallel. So that is why AVC is able to achieve twice the bandwidth efficiency of MPEG-2 Video, and now, we believe that HEVC has the potential to provide a 50% bitrate savings over AVC."
At press time, Goldman said the HEVC Main Profile was in the process of being finalized by the Joint Collaborative Team on Video Coding (JCT-VC), a joint effort involving coding experts from both the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), and was on track for the publication of the standard in January 2013. He expressed confidence that HEVC will reduce the data rate needed for video coding by 50%, compared to the current rate with the AVC standard, and that HEVC is in line to continue the accelerating upward arc of video compression schemes.
In reviewing the history of compression technology, Goldman points out that the new standard is the latest step in the quest for bandwidth savings that began in the early 1990s, first with MPEG-1 Video and then MPEG-2 Video. In that era, relatively new direct-to-home digital broadcast systems such as cable or direct broadcast satellite could suddenly offer consumers four or five standard-definition digital channels through the same 6 MHz bandwidth (7 or 8 MHz in other parts of the world) that analog television used to deliver a single video channel; later implementation improvements made it viable to send about twice as many digital standard-definition TV channels through that same path. The advances, he suggests, have been startling. But once HDTV became the broadcast standard, and a plethora of mobile devices arrived to increase the need for compressed video to travel to new platforms for viewing, he explains, the work that led to HEVC became crucial to content creators, distributors, consumer electronics manufacturers, and ultimately, to consumers themselves.
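The channel-count arithmetic above can be sketched with round numbers. A minimal illustration: the 19.39 Mbit/s figure is the ATSC 8-VSB payload rate for a 6 MHz terrestrial channel, and the ~4 Mbit/s per-channel figure is an assumed typical MPEG-2 SD bitrate, not a number from the article.

```python
# Rough channel-count arithmetic for one 6 MHz broadcast channel.
# 19.39 Mbit/s is the ATSC 8-VSB payload rate; 4 Mbit/s per SD channel
# is an illustrative MPEG-2 figure (assumed, not from the article).
ATSC_PAYLOAD = 19.39   # Mbit/s usable in a 6 MHz terrestrial channel
SD_MPEG2 = 4.0         # Mbit/s per MPEG-2 SD channel (assumed)

channels = int(ATSC_PAYLOAD // SD_MPEG2)
print(f"~{channels} SD channels where analog carried 1")  # ~4 SD channels
```

Cable systems with higher-order modulation carry more payload per 6 MHz, which is how "four or five" channels (and later roughly twice that) fit where one analog channel did.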
"The point is that digital television was not only about picture quality, as it was advertised; it was also about bandwidth savings, so that service providers could deliver many more viewing options to consumers," he explains. "You can send more channels than ever before within the same bandwidth, and that is a major reason why digital television is better than analog. This has enabled an explosion of channel options, and consumers continue to have an insatiable appetite for more channels. So, now, for content providers, it is all about who can deliver the most channels. And today, this has become amplified by the need for multiscreen delivery; that is, the delivery of content to multiple devices, each of which requires a different format to be decoded and rendered."
"When HDTV was first adopted by terrestrial broadcasters, a single MPEG-2 HD channel required the entire available bandwidth, approximately 16 Mbits/sec in 6 MHz. Over time, encoding equipment improved to the point where the broadcaster no longer needed all of the available bandwidth to deliver MPEG-2 HD, leaving extra bandwidth for additional channels. The same improvements allowed cable and satellite service providers to offer a richer mix of HD and SD channels with fewer constraints."
"But, eventually, it gets to the point where there are not that many more improvements you can make (on the encoding side) with an implementation, and so the push to improve performance needs to come from the video compression standard itself. So AVC was developed with the goal to get to half the bitrate of MPEG-2 Video for the same perceived picture quality. It took a while, but in the mid to late 2000s, AVC implementations achieved that goal. The other advantage of AVC was that it unified, for the first time, TV formats with video formats for Internet and mobile delivery on multiple platforms that had previously been using several disparate alternative video compression formats."
This unification process, Goldman adds, will continue with HEVC. Among the interesting aspects of its arrival is that it opens the door to a practical delivery system for getting 4KTV signals into consumer homes and onto brand-new 4K televisions. Sometimes referred to as Quad HDTV, because its picture resolution is about four times that of 1080p HD, 4KTV has already arrived in the sense that manufacturers are starting to offer viewing systems. In fact, at the 2012 Consumer Electronics Show (CES), most major TV manufacturers announced and exhibited 4KTVs. They are making the devices available, as they did with 1080p, long before broadcast entities are able to send them 4KTV signals. But HEVC's arrival means that, in the not-so-distant future for cable and satellite service providers specifically (see more on terrestrial broadcasters below), this is likely to change, because HEVC makes it bandwidth-viable to deliver a 4KTV signal to consumers.
"So, now, we are in a place where HEVC helps on the low end of bitrates to greatly improve the quality of the picture on a mobile device, which until now hasn't been that great coming through the tight channel bandwidth of mobile cellular networks," says Goldman. "And on the high end of consumer bitrates, you have this new higher resolution format. Manufacturers are betting that consumers will go for the best possible picture resolution available, and so they are offering 4KTVs. That, in turn, drives broadcasters to support a 4K signal, which requires significantly more bandwidth than HDTV, and so, HEVC becomes even more important. Early studies have shown that HEVC will achieve the goal of half the bitrate of AVC, and that 4KTV will require roughly twice the bandwidth of HDTV in the compressed domain, maybe a little more. That is still a rough estimate, but we can say that by using HEVC, satellite or cable will be able to broadcast a 4KTV signal using less bandwidth than MPEG-2 Video compression uses to send an HDTV signal today, or about the same bandwidth as HDTV using AVC compression."
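Goldman's ratios can be turned into a back-of-the-envelope calculation. This is a rough sketch, not measured data: the 16 Mbit/s starting point is the MPEG-2 HD figure quoted earlier, and the successive halvings and the 2x factor for 4K are the estimates cited above.

```python
# Back-of-the-envelope bitrate comparison using the ratios quoted in the
# article. All figures are rough illustrations, not measured values.

HD_MPEG2 = 16.0          # Mbit/s: MPEG-2 HD, per the article
HD_AVC = HD_MPEG2 / 2    # AVC: ~half the bitrate of MPEG-2 Video
HD_HEVC = HD_AVC / 2     # HEVC: ~half the bitrate of AVC

# 4KTV is estimated to need roughly twice the compressed bandwidth of HDTV.
UHD_HEVC = 2 * HD_HEVC

print(f"HD (MPEG-2): {HD_MPEG2:.0f} Mbit/s")  # 16 Mbit/s
print(f"HD (AVC):    {HD_AVC:.0f} Mbit/s")    # 8 Mbit/s
print(f"HD (HEVC):   {HD_HEVC:.0f} Mbit/s")   # 4 Mbit/s
print(f"4K (HEVC):   {UHD_HEVC:.0f} Mbit/s")  # 8 Mbit/s: ~= HD over AVC
```

The arithmetic shows why the claim holds: a 4K HEVC channel lands at about the same rate as an HD AVC channel, and at half the rate of today's MPEG-2 HD channel.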
For now, however, consumer devices of all stripes, including billions that were engineered around MPEG-2 Video, remain in play. So, with HEVC being a clear upgrade from a technical point of view, and being published in early 2013, it raises the question, Goldman suggests, of how soon consumer electronics devices compatible with HEVC-compressed material will become available.
All sorts of factors are involved in such a question, of course, including the vagaries of the economy generally. But going by history, what happened with MPEG-2 and AVC, Goldman suggests that manufacturers usually follow new standards with products within 12 to 24 months. Some companies were diving in immediately, while the technical aspects of the HEVC standard were still being finalized at press time; others will be more cautious and wait until the standard is official in January.
"So, potentially, you could have the first HEVC devices around by the end of 2013 or the first half of 2014," he says. "Software-only implementations, of course, can be available much earlier, and I'm sure someone will do that. But for hardware devices, like set-top boxes or television receivers, the availability of those devices is dependent upon the decoder silicon chips used inside them. So, really, it is the silicon chip manufacturers who will drive the majority of the consumer electronics that will follow this standard."
Lots of other issues have to be addressed with HEVC's arrival, he says. One of the biggest challenges will be how to incorporate it for over-air terrestrial broadcasters, where infrastructures are built around existing regulatory requirements.
"Over-air broadcasters can't use (HEVC) until regulatory standards change," he says. "You need a regulatory agency to give approval, like the FCC in the United States, and that process will obviously take time. It might be 8 to 10 years before a new over-air broadcasting standard can be done."
Goldman emphasizes that next-generation broadcasting efforts, such as the Advanced Television Systems Committee's "ATSC 3.0," are examining this issue as part of a larger discussion about the future of terrestrial broadcasting. Additionally, "while it's great to have a new compression standard that supports higher resolution images, the acquired higher resolution images, before they are compressed, while still in the uncompressed domain, require more bits to be transported around the broadcast facility and delivered to the encoder. So SMPTE is also working on interfaces that support uncompressed 4K-and-beyond signals. For 4K resolution, a minimum of 3 Gbits/sec is needed. Back in the day, broadcasting facilities were built to transport 270 Mbit/sec SD signals, and later HD plants (were built) to transport 1.5 Gbit/sec signals, so you can see how things have changed."
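The growth in uncompressed rates follows directly from pixel counts. Below is a minimal sketch, assuming 10-bit 4:2:2 sampling (20 bits per pixel on average) and counting only active-video payload; real interfaces such as 270 Mbit/sec SD-SDI also carry blanking and ancillary data, so actual link rates run higher, and the exact 4K rate depends on frame rate and bit depth.

```python
# Rough uncompressed active-video payload, ignoring blanking and ancillary
# data, assuming 10-bit 4:2:2 sampling (20 bits per pixel on average).
def payload_gbps(width, height, fps, bits_per_pixel=20):
    return width * height * fps * bits_per_pixel / 1e9

print(f"SD (720x486 @ 29.97): {payload_gbps(720, 486, 29.97):.2f} Gbit/s")
print(f"HD (1920x1080 @ 30):  {payload_gbps(1920, 1080, 30):.2f} Gbit/s")
print(f"4K (3840x2160 @ 30):  {payload_gbps(3840, 2160, 30):.2f} Gbit/s")
```

Quadrupling the pixel count quadruples the payload, which is why plants built around 270 Mbit/sec and 1.5 Gbit/sec interfaces need new, multi-gigabit infrastructure for 4K.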
Efforts similar to the ATSC 3.0 work are also under way in other countries. In fact, Goldman points to the Future of Broadcast Television (FOBTV) initiative, a consortium launched earlier this year to examine how to create a framework for defining requirements for terrestrial broadcast systems of the future, and to explore the feasibility of unified terrestrial broadcast standards, including the possibility of HEVC's inclusion as a compression standard.
Matthew Goldman presented a paper on the future of encoding and HEVC's role in that future at the 2011 SMPTE Annual Technical Conference, which you can find here. An updated version of the paper was published in the July/August issue of the SMPTE Motion Imaging Journal.
Broadband Speed Wars
If recent news is any indication, the issue of broadband speed lies at the heart of the increasingly intense residential broadband wars between the nation's primary service providers. Recently, Verizon announced a 300 Mbit/sec download/65 Mbit/sec upload service tier, dubbed Quantum FiOS, which it has been advertising as the fastest available to consumers. Now, according to a new report from the Broadband Reports website, rival Comcast is responding to that challenge with a new 305 Mbit/sec downstream tier to be offered some time before the end of the year. The report states that details have yet to emerge, and that vagueness points to a related issue the industry has been grappling with: the fact that ISPs do not always deliver the speeds they advertise to consumers. In fact, the next story on the Broadband Reports site, alongside the Comcast report, covered how the FCC is analyzing this trend.
OAM: Faster Still
The notion of going far beyond current broadband communications speeds is hardly far-fetched; lots of work is being done in that area. One of the more promising developments in this regard was discussed in a recent report from TechNewsWorld, suggesting that progress has been made in the evolution of Orbital Angular Momentum (OAM) multiplexing technology. The article states that the progress has the potential to "change the way researchers think about communications and imaging for generations." The report refers to research being conducted by the Optical Communications Laboratory at the University of Southern California on the OAM process, which involves twisting beams of light together so that they can carry greater amounts of data at greater speeds. A recent test of the technology discussed in the article reportedly transmitted wireless data at a blazing 2.56 Tbits/sec, supposedly thousands of times faster than modern broadband cable. The article explains that scientists manipulated eight beams of light, "twisting each one into a helical shape and sending it through free space to a receiver in the lab." Supposedly, twisting the light beams allows increased data capacity in ways that go beyond current WiFi and cellular techniques. Hurdles remain to making it practical to use for business and consumer applications, however, because it currently requires clear line of sight and no interference whatsoever. But scientists believe, eventually, it might have great applications for satellite and space communications, and they also think the technology could be adapted to fiber-optic cabling.
Microsoft's Optical Display
An interesting article in TechNewsWorld and another at the Digital Trends website both discuss Microsoft's development of the optical display it will be putting onto its upcoming Surface tablets. The reports claim the display will be more advanced than Apple's much-ballyhooed Retina Display because it does not eat up data to produce a picture the user can seamlessly interact with. Instead, Microsoft's optical display is built around the technology it originally called PixelSense for its original Surface table-style tablet technology, which involved a camera and projector beneath table glass that could track what the user was touching and communicate with software to move imagery according to a user's hand movements. The original table approach was bulky and not practical for consumers. But the new reports from Microsoft watchers suggest the company has figured out a way to make the glass itself into, essentially, a simultaneous camera and solid-state touch interface that "sees" what your hand is touching and reacts to it, and can distinguish between your touch and someone else's, without massive data overload. The reports state that a third-generation version of the technology is the one that will appear with the new Windows 8 Surface tablets when they debut later this year.
Opinions expressed in SMPTE Newswatch do not necessarily reflect those of SMPTE. References to specific products do not represent an endorsement, recommendation, or promotion.