SMPTE Newswatch

Hot Button Discussion

Compression in Context
By Michael Goldman  

Since Newswatch reported a year ago on the ongoing rollout of the High-Efficiency Video Coding compression standard (HEVC, also known as H.265 and MPEG-H Part 2), the industry conversation about HEVC’s importance as the necessary lynchpin that enables the transmission of ultra-high-definition (UHD) content to consumer homes has not changed much. Quite simply, “for UHD specifically, basically the only way to bring that content to the home efficiently is to use HEVC,” states Matthew Goldman, senior vice president of TV compression technology at Ericsson and a SMPTE Fellow, reiterating a point he made in the September 2014 issue of Newswatch. “We needed a much better compression scheme than what we were using previously for UHD, and HEVC was that scheme. So right now, that is where HEVC is gaining traction—in places where you can’t [compress a so-called 4K signal and transmit it to consumers] any other way. That is where we are now seeing HEVC implementations. But for more traditional television, such as linear [HD] broadcasting, HEVC’s rollout is more in the future—we don’t expect it to happen quickly. It will be ongoing.”
 
Related to that point is the fact that we live in a multiplatform universe, with content streaming to consumers via IP-based systems large and small to devices of all types beyond traditional “televisions,” and currently, relatively little of it is 4K content. Therefore, as Yves Faroudja, co-founder of Faroudja Inc. and a SMPTE Fellow, points out, there is still room and relevance on the landscape for older compression schemes that do not require all the bells and whistles of HEVC.

That’s why HEVC’s predecessor, AVC [H.264], continues to be widely used, as do older MPEG standards for certain applications, along with alternative approaches such as Google’s VP8 and VP9. Given this hodgepodge of formats and schemes, there is an essential, basic question to grapple with, according to Faroudja—“how are we going to put compression in context? How are we going to optimize the chain?”
 
In other words, to fully understand where compression is heading, we need to understand its role in a much larger paradigm, as the broadcast industry transitions along one branch to UHD, and along another into an IP-based world of widely varied transmission methodologies, content requirements, and viewing platforms.
 
“I consider compression as one of the elements of a larger chain ultimately,” Faroudja explains. “The chain starts in front of the camera with lighting of the set, and ends up in the eyeball or brain of the viewer at home. Compression is part of the mechanism that [makes that possible]. But there are so many other things [that are part of the image], like resolution of the chosen standard, whether it is high dynamic range [HDR] or not HDR, whether it is better color or not, and much more. There are many things that happen before and after the video is compressed. I believe there are elements that are overloading compression schemes unnecessarily—no matter which compression scheme you are using. The idea of processing video before it is compressed and after it is compressed is about trying to control such elements.”

Several high-end, professional video encoding technologies have traditionally implemented pre-processing tools within the encoding hardware before the actual compression functions get under way. Alternatively, Faroudja’s company and several others address this issue with a variety of pre- and post-compression tools separate from the video encoding system. On the front end, these tools are designed to attack random noise, better manage high- and low-frequency artifacts, and deal with artifacts introduced at the input stage by image processing during production and post-production, among other things; on the back end, they treat the wide range of potential compression-induced artifacts, which vary with the type of compression scheme being utilized.
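To make that idea concrete, here is a minimal sketch of pre- and post-processing wrapped around a codec, assuming generic open-source filters; the filter choices and parameters are illustrative placeholders, not any vendor’s actual algorithms:

```python
# A minimal sketch of pre-/post-processing around a codec. The median
# filter stands in for front-end noise suppression (so the encoder does
# not waste bits coding noise as detail); the Gaussian filter stands in
# for back-end softening of compression artifacts. Both choices are
# illustrative placeholders, not a commercial product's algorithms.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Suppress random noise before the encoder sees the frame."""
    return median_filter(frame, size=3)

def postprocess(frame: np.ndarray) -> np.ndarray:
    """Soften blocking/ringing artifacts after the decoder."""
    return gaussian_filter(frame, sigma=0.6)

if __name__ == "__main__":
    # Synthetic noisy frame standing in for camera output.
    rng = np.random.default_rng(0)
    noisy = np.clip(rng.normal(128, 20, (1080, 1920)), 0, 255).astype(np.uint8)
    conditioned = preprocess(noisy)       # runs before compression
    # ... encode -> transmit -> decode would happen here ...
    displayed = postprocess(conditioned)  # runs after decompression
```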
 
But beyond this general strategy lies the reality that, even at the high end of the broadcast world, it remains unclear what we mean by the term “UHD” today, versus what form UHD will take when it eventually replaces high definition as the broadcast standard on a wide basis. Thus, it likewise remains unclear what form HEVC will have to take to keep pace with this ever-changing landscape.
 
Indeed, Matthew Goldman suggests that those focusing on the narrow definition of UHD as, generically, “4K” resolution images are largely missing the point, because what is really looming, in his opinion, is a UHD that includes more immersive viewing-experience features. He points out that the definition of “UHD” itself has grown, and the industry needs to sort out the nomenclature differences between the current so-called UHD-1 Phase 1 (basically, the set of standards that define 4K-resolution-compatible television systems) and UHD-1 Phase 2 and beyond, which are proposed definitions for broadcasting UHD signals to consumer homes that also include additional immersive viewing features. A white paper published by Ericsson earlier this year explains that UHD-1 Phase 1 refers to images that are “just four times [3840x2160] the spatial resolution of [1080i HD], although progressively scanned up to 60 frames/sec. The next generation would be UHD-1 Phase 2, which would include higher dynamic range and other features that have not all been determined, but are intended to increase the 'wow' factor of UHD viewing.”

This leads to Goldman’s related point that “there are other items [besides resolution] that make for a more immersive television watching experience. These include high dynamic range, wider color gamut, and to go along with those, at least 10-bit sampling precision. All three factors are needed to have the real immersive home viewing experience the industry is pursuing. This is not an official name, but colloquially, we have named these features collectively 'HDR+.' Together, they give you a color volume—more colors, of course, but also richer, brighter, and deeper, more realistic colors, thanks to the higher dynamic range. There is another higher realism factor—higher frame rates, but that mostly applies to high-motion content, complex motion, particularly where sports is concerned.
 
“No one ever claimed that video compression improves picture quality. It doesn’t, by definition. If service providers had the bandwidth, they wouldn’t compress video. However, the reality is that bandwidth is a scarce commodity, and UHD requires a lot more than HD. Therefore, more efficient video compression algorithms—such as HEVC—need to be developed.”
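To put rough numbers on that scarcity, here is a back-of-envelope comparison using standard video-sampling arithmetic (my calculation, not figures from Goldman):

```python
# Rough raw-bandwidth comparison, before any compression: 10-bit 4:2:0
# UHD at 60 frames/sec versus 8-bit 4:2:0 1080p at 60 frames/sec.
def raw_gbps(width, height, bit_depth, fps, samples_per_pixel=1.5):
    """4:2:0 sampling carries 1.5 samples per pixel (Y plus subsampled CbCr)."""
    return width * height * samples_per_pixel * bit_depth * fps / 1e9

uhd = raw_gbps(3840, 2160, 10, 60)   # ~7.5 Gbit/s uncompressed
hd  = raw_gbps(1920, 1080, 8, 60)    # ~1.5 Gbit/s uncompressed
print(f"UHD carries {uhd / hd:.0f}x the raw data of HD")  # 5x
```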
 
Goldman says that early experiments have raised concern about the compression efficiency of some proposed HDR+ methods, so a new effort is under way to investigate possible changes to the HEVC standard to address this issue. As he related last year in Newswatch, HEVC’s original iteration, in early 2013, specifically defined a new standardized codec for consumer-related, direct-to-home applications such as digital cable and direct-broadcast satellite transmissions. The second upgrade came in April 2014 with the Range Extensions (RExt) of the original standard, designed primarily to cover professional applications, such as content acquisition and distribution. Also added last year was a third extension of the specification, known as Scalable High-efficiency Video Coding (SHVC). That, Goldman says, remains very much a work in progress. It builds upon the long-standing concept of allowing encoded video streams to carry a so-called “base layer” that all decoders can read, along with “enhancement layers” that are used or not, depending on the capabilities of the end user’s device, to enhance the experience.
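A toy sketch of that layered idea follows, assuming a spatial-scalability arrangement in which the enhancement layer carries an upsampling residual; the function names and structure are hypothetical, not the actual SHVC syntax:

```python
# Illustrative layered decoding in the spirit of SHVC: every decoder can
# reconstruct the base layer; a more capable device adds an enhancement
# residual on top. Hypothetical structure, not the real SHVC bitstream.
from typing import Optional
import numpy as np

def upsample_2x(frame: np.ndarray) -> np.ndarray:
    """Nearest-neighbor upsample of the base layer to the enhancement grid."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

def reconstruct(base: np.ndarray,
                residual: Optional[np.ndarray] = None) -> np.ndarray:
    """Base-only decoders stop at `base`; capable devices add the residual."""
    if residual is None:
        return base                      # legacy / low-capability device
    up = upsample_2x(base).astype(np.int16)
    return np.clip(up + residual, 0, 255).astype(np.uint8)
```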
 
But in any case, that is hardly the end of attempts to upgrade HEVC. Currently, according to Goldman, there is “new work” afoot inside the Moving Picture Experts Group (MPEG), looking for improvements that will make HEVC more efficient when transmitting HDR+ content to consumers.

“MPEG issued a call for evidence this year to see if there are improvements that can be made to HEVC to allow it to more efficiently encode HDR+ material,” he says. “That call was issued earlier this year, responses against it were made in June, and now, they are forming what is known as a ‘fast track update’ to HEVC. The reason for the fast track is that the industry wants UHD-1 Phase 2 to be ready for deployment in 2017.”
 
As with any complex standard, however, that is easier said than done. Goldman says that, at press time, there were “a half-dozen proposals before the [Joint Collaborative Team on Video Coding, the partnership between the ITU-T Video Coding Experts Group and MPEG that originally developed HEVC as a standard, as well as its predecessor, AVC—JCT-VC]” on this topic, and that “it has not been determined yet exactly which way they are heading. However, there is a strong desire not to make low-level changes to HEVC so as not to ‘break’ existing implementations. But the goal is to have the standard more efficiently encode HDR+ and to have the new syntax and semantics text finalized by October 2016. It typically takes three or more years to create an international standard, so you can see, this literally is on a ‘fast track.’ ”
 
These ongoing efforts to improve and/or add to what HEVC can do and how it can do it reinforce Goldman’s belief, as he suggested to Newswatch last year, that video compression needs to evolve into some kind of modular, foundational standard for the industry that will eventually, over many years, add features to address new challenges, such as HDR+, as they present themselves. He called this “the Holy Grail of reconfigurable video coding (RVC),” meaning some kind of specification that is downloadable to devices as algorithms are developed to improve it, rather than forcing users to continually buy new hardware as entirely new specifications are developed.

Goldman adds that the industry “isn’t close to being there yet, but a lot of people are buying into the virtualization aspect of the idea.” And, he adds, “It is true that things are changing more rapidly, and how we are designing things [is] more modular. Some people do believe, for instance, that splitting up the elements of the encoding processor into separate modules would improve our ability to [change things], because we could then act on each [issue] separately.”
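To make the modular idea concrete, here is a toy sketch, entirely my own construction rather than the RVC specification, in which coding tools are registered as interchangeable modules so a downloaded update could add a new algorithm without replacing the whole codec:

```python
# Toy "reconfigurable coding" registry: processing stages are looked up
# by name, so a delivered update can register a new tool and a stream
# can simply reference it. Hypothetical design, not the RVC standard.
from typing import Callable, Dict, List
import numpy as np

Stage = Callable[[np.ndarray], np.ndarray]
TOOLS: Dict[str, Stage] = {}

def register(name: str):
    """Decorator that makes a coding tool available by name."""
    def wrap(fn: Stage) -> Stage:
        TOOLS[name] = fn
        return fn
    return wrap

@register("clip_to_legal_range")
def clip_to_legal_range(block: np.ndarray) -> np.ndarray:
    return np.clip(block, 16, 235)  # e.g., keep 8-bit video-legal levels

def run_pipeline(block: np.ndarray, stages: List[str]) -> np.ndarray:
    # A future download could add, say, an HDR+-aware transfer function
    # to TOOLS, and a stream would just name it in `stages`.
    for name in stages:
        block = TOOLS[name](block)
    return block
```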
 
Meanwhile, one area of great movement these days is the streaming of 4K video by services such as Netflix, Amazon Prime, and others. There, depending on connection speeds, both HEVC and VP9 are already being utilized in various places, but in some cases, customers with slower connections are still unable to join the party. For contribution-quality applications, Goldman says a range of lightly compressed coding technologies is making strides across the industry, permitting UHD data to travel over 10-Gigabit Ethernet connections rather than requiring 12-Gbits/sec SDI, “leveraging off existing IP infrastructures to support realtime viewing.”
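A quick back-of-envelope check, my arithmetic under an assumed protocol overhead rather than figures from the article, shows why only light compression is needed for that trip:

```python
# How much compression does a 12 Gbit/s UHD SDI signal need to fit a
# 10 Gigabit Ethernet link? Assume ~10% of the link goes to IP framing
# overhead (an assumption for illustration, not a measurement).
sdi_gbps = 12.0
usable_gbps = 10.0 * 0.90          # payload capacity after overhead
min_ratio = sdi_gbps / usable_gbps
print(f"minimum compression ratio ~ {min_ratio:.2f}:1")
# ~1.33:1, so even very light, near-lossless mezzanine coding suffices.
```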
 
Some industry watchers point to a range of innovative companies taking on the challenge of streaming high-end video through smaller pipes by pursuing new codecs they hope will compress 4K video beyond what HEVC or VP9 currently achieve, potentially allowing multiple simultaneous streams of 4K video over connections as slow as 20 Mbits/sec. For example, one company out of the U.K., called V-Nova, made a splash debuting such a scheme, dubbed Perseus®, at the TV Connect Conference in London in April and again at NAB earlier this year. According to V-Nova, the product aims to shrink UHD video by as much as 50% more than HEVC, theoretically allowing it to be transmitted over slower connections and thus increasing the market for streamed 4K product. Other approaches are being pursued across the landscape as well. Industry experts warn that, thus far, none of these approaches has been thoroughly confirmed by independent studies as performing consistently at the claimed levels. But the mere pursuit of such schemes does appear to herald a potential paradigm shift in the compression world: a wide range of targeted solutions for the seemingly never-ending list of content platforms and transmission methods looks to be a strategy that sectors of the industry will pursue for the foreseeable future.
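For a sense of scale, assuming a representative HEVC bitrate for 4K (my assumption; the article gives no figure), the claimed savings would indeed put two simultaneous streams inside a 20 Mbits/sec connection:

```python
# Back-of-envelope (assumed numbers, not V-Nova's published figures):
# if HEVC needs roughly 16 Mbit/s for a 4K stream, a further 50%
# reduction yields about 8 Mbit/s per stream.
hevc_4k_mbps = 16.0                    # assumed HEVC 4K bitrate
claimed_saving = 0.50                  # "as much as 50% more than HEVC"
per_stream = hevc_4k_mbps * (1 - claimed_saving)   # 8.0 Mbit/s
streams_in_20 = int(20 // per_stream)              # 2 simultaneous streams
print(per_stream, streams_in_20)
```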

News Briefs

Talking VR at Siggraph
Not surprisingly, virtual reality (VR) was a huge topic at the Siggraph 2015 Computer Graphics Conference and Exhibition in Los Angeles earlier this month. According to coverage in the Hollywood Reporter, VR was addressed from two angles in two panels that helped open the conference. First, a filmmaker panel discussed the production challenges of physically making content for the still-evolving format, with speakers from companies diving into content for new VR systems hitting the market, or about to, including Oculus Story Studio, Digital Domain, and IM360, among others. Carolyn Giardina’s coverage for the Hollywood Reporter said the speakers particularly warned attendees about the growing technical challenges involved in producing content for expanded 360-degree stereoscopic presentations. Among these challenges were larger data volumes, tracking, and rendering, as well as the subtler issue of making sure the resulting product does not leave viewers disoriented; mistakes can mean re-rendering massive amounts of data all over again. Digital Domain’s Lou Pecora referred to the combined challenges as “the Wild West,” while his colleague, Hanzhi Tang, said one of the looming goals for VR content creators, as they strive to bring the film and game paradigms closer together, is to figure out how to make CG characters react to viewers in the virtual environment. Meanwhile, a panel a day later addressed the business challenges of the ongoing rollout of virtual reality to consumers. There, speakers repeatedly emphasized that while entertainment and gaming applications remain a major focus, VR also holds significant promise for designers of automobiles and other kinds of equipment, and the medium offers tantalizing possibilities for social media interactions. However, the requirement that users wear goggles, gloves, helmets, and other equipment continues to make selling VR to consumers an ongoing challenge.

Group of Lenses, Working Together
A recent article on the Pro Video Coalition site by camera design expert Matt Whalen suggests that digital photography’s rapid advance has caused some in the camera design community to reconsider conventional thinking where optics are concerned. The article states that because digital image sensor technology has advanced so rapidly, traditional optics are no longer keeping pace. Image sensor manufacturers are producing chips that can record images with pixel arrays of such density and sophistication that extremely large and expensive lensing systems are required to demonstrate their full value. In fact, Whalen points out that full, highly advanced zoom lens packages from major manufacturers, designed for high-end professional photography or cinematography, can cost between $70,000 and $90,000 in certain cases; by comparison, an Arri Alexa camera body, one of the most sophisticated digital cinematography cameras available, can sometimes retail for half the cost of those lens packages. For this and other reasons, Whalen says some in the industry are “re-thinking how we capture photographic images. Instead of trying to make bigger lenses with higher spatial resolving power, why not make groups of smaller lenses and sensors working together to form high resolution aggregate images?” In particular, he points to the so-called Multiple Mirror Telescope Solution as one methodology being pursued: essentially, matching multiple lenses with multiple sensors, as some low-end consumer devices have done, with each small lens paired with its own image sensor, and then having software combine the images. Whalen points to a couple of companies pursuing this approach, plus Duke University’s AWARE project, partially funded by the Defense Department in hopes of creating a small, automated surveillance camera system that can capture images containing up to one billion pixels. An early prototype, first demonstrated publicly in 2012, used 98 micro-cameras, each paired with a 14-megapixel sensor, grouped around a shared spherical lens. Development is reportedly ongoing.
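As a toy illustration of the software-combination step, here is a sketch, my own construction under simplifying assumptions, that mosaics a grid of per-sensor tiles into one aggregate image, ignoring the parallax, distortion, and overlap corrections a real system must perform:

```python
# Mosaic a grid of per-sensor tiles into one large aggregate image.
# Real multi-aperture systems also register overlapping fields of view
# and correct per-lens distortion; this sketch skips all of that.
from typing import List
import numpy as np

def aggregate(tiles: List[List[np.ndarray]]) -> np.ndarray:
    """Concatenate a row-major grid of equally sized tiles."""
    rows = [np.concatenate(row, axis=1) for row in tiles]
    return np.concatenate(rows, axis=0)

# Sizing check: 98 sensors x 14 megapixels is roughly 1.37 billion raw
# samples, which is how the AWARE prototype approaches a billion-pixel
# image once overlap between neighboring tiles is discounted.
```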
 
Digital TV Trends
TV Technology recently reported on a couple of interesting trends in the ever-changing broadcast world. First, the site reported on a recent study conducted by consulting firm Futuresource, which suggests that while over-the-top (OTT) services continue to grow in availability and popularity with consumers, so-called “live TV” (meaning both pay TV and free programming) continues to be the preferred viewing choice of most viewers in major territories, including the U.S., Canada, the U.K., Germany, and Australia. The Futuresource study, dubbed “Living with Digital,” suggests consumers around the world are still comfortable paying for premium tier pay-TV subscriptions for things like movie and sports packages. Overall, 65% of people in the five aforementioned nations still prefer live TV to OTT streaming services like Netflix, Amazon Prime, and others, although subscriber rates for those are rising, and in many cases, consumers are using various combinations of both live TV and OTT services, as evidenced by the study’s mention of 69% of respondents using a so-called connected TV to access streamed video or music at least once a week. The other interesting trend reported by TV Technology was a news item stating that the move toward making possible the transmission of enhanced 4K/UHD signals to consumer homes is continuing to pick up steam with the recent announcement by DirecTV of a new 4K set-top box called the 4K Genie Mini that permits compatibility of its content with 4K televisions. The article says the box, “about the size of a paperback book,” offers 4K capability, plus support for Dolby Digital Plus audio decoding, and is designed to be compatible with newer 4K TV’s. However, the article also indicates that the number of TV’s fully compatible with DirectTV’s 4K specifications are few right now, largely because their content partners require their 4K material to be accessible only by TV’s that support the latest HDMI 2.0 specifications and HDCP 2.2 content security, as well as supporting 60 frames/sec.