SMPTE Blog

Erminia Fiorino
Thursday, January 16, 2020 - 08:29

Most people don’t consider the factors involved in creating media content when they sit down to watch a movie or view a video on their smartphone. Their goal is to be entertained or informed. No matter the reason, many spend a lot of time doing it – adults in the United States spend an average of eight hours and 47 minutes per day watching screens, whether the time goes to tablets, smartphones, PCs, multimedia devices, video game consoles, DVDs, time-shifted television or live TV.

Sight is one of our five key senses. It enables us to consume information and filter it to develop our understanding of the world around us. Approximately 90 percent of information transmitted to the brain is visual, and visuals are processed 60,000x faster in the brain than text. Humans’ visible spectrum falls between ultraviolet and infrared light, and scientists estimate that most people can distinguish up to 10 million colors.

These factors, of course, affect our media viewing experience. With the advances in digital technology, media organizations are having to focus not just on what content to disseminate to consumers but also how to ensure it fits the way they watch it. For example, take into consideration these facts:

●      The human visual system works hard to avoid misalignments. However, even if both eyes (or cameras) are perfectly aligned, vertical disparities between the retinal (or filmed) images can still occur.

●      The eyes move partly to minimize retinal disparities; vergence eye movements work to ensure the lines of sight of the two eyes intersect at a desired point in space.

●      To depict rapid motion smoothly, it is customary to transmit 25 to 30 complete pictures per second.

●      Each picture is analyzed into 200,000 or more picture elements to provide detail sufficient to accommodate a wide range of subject matter.

●      The successive illuminations of a picture screen should occur no fewer than 50 times per second, approximately twice the rate of picture repetition necessary for smooth reproduction of motion.
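
To see how a couple of those figures fit together, here is a quick back-of-the-envelope check in Python; the double-flash factor comes from the last bullet above, while the 4:3 aspect ratio used to turn 200,000 picture elements into a frame size is our own illustrative assumption.

```python
# Quick sanity check of the figures above (illustrative only).
picture_rate = 25          # complete pictures per second (25-30 is typical)
flicker_factor = 2         # each picture is flashed roughly twice...
illumination_rate = picture_rate * flicker_factor
print(illumination_rate)   # ...to reach the ~50 flashes per second cited above

# 200,000 picture elements, assuming a 4:3 aspect ratio (our assumption),
# corresponds to a frame of roughly 516 x 387 elements.
width_estimate = int((200_000 * 4 / 3) ** 0.5)
height_estimate = 200_000 // width_estimate
print(width_estimate, height_estimate)
```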

Breaking Down the Basics

According to research from Vanderbilt University, mental imagery directly impacts our visual perception and leads to a short-term memory trace that can bias future perception. So, what exactly is visual perception? The Interaction Design Foundation defines it as the process of absorbing what one sees, organizing it in the brain and making sense of it. A study at MIT found that the process can take only 13 milliseconds. It’s not the same as visual acuity, which is how clearly a person sees (i.e. 20/20 vision). Perhaps not surprisingly, basic visual perception can be influenced by cultural socialization.

The Brain Recovery Project notes that visual perception is made up of a variety of different parts. These include:

●      Visual Closure: Knowing what an object is when seeing only part of it.
●      Visual Discrimination: Using eyesight to compare features, like color and shape, from one object to another.
●      Figure-Ground Discrimination: Differentiating a shape or word from its background.
●      Visual Memory: Recalling something seen recently.
●      Visual Sequencing: Distinguishing the order of numbers, letters, words or images.
●      Visual Spatial Processing: Understanding how an object’s location relates to you.
●      Visual Motor Processing: Using the eyes to coordinate body movements.  

Viewing Obstacles

Unfortunately, there are challenges to visual perception for some people. These come in the form of visual stress and color vision deficiency, often called color blindness. Present in an estimated 40 percent of poor readers and 20 percent of the general population, visual stress is a perceptual processing condition that can cause headaches and other problems. In fact, those prone to it may experience seizures when striped patterns (at approximately three cycles per degree) are shown at a flicker rate of about 20 hertz.

Color vision deficiency is experienced by an estimated eight percent of men and fewer than one percent of women. Red/green color blindness is the most common form.

SMPTE® – Education and Engagement

Our mission at the Society of Motion Picture and Television Engineers (SMPTE) is to drive the quality and evolution of motion pictures, television and professional media. We hope to provide relevant education to our global society of technologists, developers and creatives, along with setting industry standards and fostering an engaged membership community. By sharing information on topics like this, we aim to continue creating value for our members, partners and industry.

 

Erminia Fiorino
Thursday, January 9, 2020 - 09:09

A report by the World Economic Forum listed the most important consumer, ecosystem and technological trends driving digital innovation in the media industry as demographics, new consumer behaviors and expectations, ecosystem challenges, technology trends, and adoption of digital technologies. A subset of these trends is real-time content management, in which media organizations not only offer content but also complementary services to support it. However, the traditional monolithic model employed by some such organizations doesn’t provide the necessary scalability to deliver this amount of content to consumers.

Enter microservices. As we mentioned in a previous blog, microservices can be described as an architectural style in which a larger solution is broken down into granular functionalities to provide performant business value. The result is the smallest service or process of business capabilities that focuses on a single, thoroughly defined task, and it is a variant of service-oriented architecture (SOA).

Applications and Advantages of Microservices

As components of a sizable software application that are able to be disconnected from the application as a whole, microservices may be revised or updated separately from the rest of the application. This technique has been utilized by enterprises in other industries and can certainly be implemented in the media and entertainment industry to enable an expedient and productive response to changing consumer demand. Standardization within the Society of Motion Picture and Television Engineers (SMPTE®) can support the use of microservices to provide a wide array of advantages for the media and entertainment industry.

In fact, microservices are already being used in the industry for a variety of workflow functions, including video on demand (VOD) delivery, fast live to VOD, graphics and subtitle insertion and transcoding new content. Media organizations can also use microservices to quickly and easily fix bugs, eliminating frustrating service delays. Additional benefits of microservices include:

·         Increased scalability, resiliency and portability

·         Easier deployment

·         Enhanced test coverage

·         Continuous innovation

·         Improved software commonality

·         Increased ability for automation

·         Quicker creation of new services

·         Elimination of operations silos

Combining Microservices and the Cloud

Many large and successful digital enterprises have utilized microservices to grow their business. One of the first was Discovery Communications. Others include Amazon, Netflix, and some additional pay television operators and broadcasters, satellite media service providers, non-DOCSIS cable enterprises and non-media businesses such as Uber and Airbnb. Another commonality of these enterprises is their use of a cloud-based environment.

Technically, microservices are a subset of the cloud, and a model that operates with microservices can be applied to either a private or public cloud environment. As Videonet notes, some enterprises use a mix of traditional private cloud (on-premise or off-premise) and public cloud for their applications and a mix of ‘cloud-enabled’ and ‘cloud-native’ (microservices-based) software in their data center video operations. Conversely, some businesses select cloud-enabled and private, while others choose cloud-native and public.

If you’re not familiar with cloud-enabled versus cloud-native, we’ll give you a basic comparison. Imagine Communications defines cloud-enabled applications as those made accessible from the cloud but lacking the ability to tap into the full benefits of a distributed environment. Cloud-native services are applications that have been constructed using microservices-design principles and are optimized to take advantage of the Internet and other IT-based cloud environments.

As one of the key cloud-native approaches, microservices affect how applications are created and deployed. Though the terms are almost interchangeable, cloud-native computing distributes applications as microservices through an open-source software stack. It then places each part into its own container and organizes them to enhance the utilization of resources.
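
To make the idea concrete, here is a minimal sketch of what one small, single-purpose media microservice might look like — in this case a hypothetical subtitle-insertion job endpoint built with nothing but Python’s standard library. The endpoint name and payload are illustrative assumptions, not part of any SMPTE specification.

```python
# Minimal sketch of a single-purpose "subtitle insertion" microservice.
# Hypothetical example; the endpoint and JSON payload are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class SubtitleJobHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON job request, e.g. {"asset_id": "abc", "language": "en"}
        length = int(self.headers.get("Content-Length", 0))
        job = json.loads(self.rfile.read(length) or b"{}")

        # In a real service, this is where the one well-defined task would run.
        response = {"asset_id": job.get("asset_id"), "status": "queued"}

        self.send_response(202)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(response).encode())

if __name__ == "__main__":
    # Each microservice runs (and scales) independently, e.g. in its own container.
    HTTPServer(("0.0.0.0", 8080), SubtitleJobHandler).serve_forever()
```

Because the service does exactly one thing, it can be updated, redeployed or scaled without touching the rest of the workflow.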

Comprehensive Benefits of the Two Technologies

Now that you (hopefully) have a better understanding of microservices and cloud-native software and applications, let’s dive into some of the benefits of this comprehensive combination. One example is that microservices as a cloud-based resource are able to be deployed part-by-part or in bigger volumes. Another is the view by multiple leading television vendors that microservices-enabled, cloud-native software is superior and that the true potential of the cloud can only be exploited through a microservices architecture.

Additional advantages of cloud-native software and microservices are increased agility for faster software development with a decreased risk, the creation of more flexible workflows and an enhanced ability to test them, quicker and more regular updating of features and improved competition between large and small media and entertainment providers. Similarly, this combination allows for quicker instigation of new services, optimized realization of advantages from virtualization in the cloud, maximized resiliency and manageability and better use of computing, storage, and networking resources to decrease operating costs. According to a study from Amdocs, a select number of communications and media providers are rapidly adopting cloud-native technologies, giving them the first-mover advantage and greater opportunities for innovation.

SMPTE: Supporting Microservices in Multiple Ways

At SMPTE, we know that microservices are the building blocks upon which the industry’s most successful digital businesses are constructed. That’s why we offer multiple tools, from recommended practices and technical specifications to engineering guidelines and registration services, to promote interoperability for microservices. If you’d like to learn more about how the media and broadcast industry can benefit from adopting this new technology approach, attend our upcoming Section Meeting or contact us online.

To learn more about microservices and the SMPTE Drafting Group on Media Microservices Overall Architecture, check out the November/December 2019 Motion Imaging Journal by SMPTE.

Erminia Fiorino
Monday, November 25, 2019 - 08:10

Video Compression Terminology Explained

It’s easy to get bogged down in all the technical terminology when it comes to chatting about the wizardry behind the lenses of TV shows and movies. Lossy this, frame rate that — after a while, trying to understand the differences can be confusing, especially if you’re new to the industry.  When applied to video compression, that terminology gets even more complicated. If you’re one of those people scratching your head at the technical terms, we’ve got you covered.

Below is a glossary breaking down basic video compression phrases, so next time you can hop into those technical discussions with ease.


Codec

With the advent of ultra-high-def and wide-color-gamut, video file sizes are much larger than ever before. To offer faster transportation of data and a seamless playback experience, file types must be made smaller. That’s where codecs come in.

The word “codec” is a combination of two terms: coder and decoder (co/dec). It refers to a device or software that encodes data (usually compressing it for storage), then decodes (or decompresses) that data for playback or editing.

There are several different types of codecs in the industry — some are royalty-free, while others require licensing. The most common types of codecs are:

HEVC
AV1
VVC
VP9

Each has pros and cons, so the ultimate determining factor on which one to use depends on what’s needed for a specific project.
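
As a concrete (and simplified) example of putting a codec to work, the sketch below calls the widely used ffmpeg tool from Python to re-encode a file with HEVC. It assumes ffmpeg with the libx265 encoder is installed on your system, and the file names are placeholders.

```python
# Hedged sketch: re-encode a source file with the HEVC codec via ffmpeg.
# Assumes ffmpeg (with libx265) is on the PATH; file names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "master.mov",        # source file
        "-c:v", "libx265",         # video codec: HEVC
        "-crf", "28",              # quality/size trade-off (lossy compression)
        "-c:a", "copy",            # leave the audio stream untouched
        "output.mp4",              # container is chosen by the file extension
    ],
    check=True,
)
```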

Containers

Containers, also known as “formats,” refer to how something is stored (not how something is coded). They hold all the compressed data — audio, video and associated metadata — that was encoded using a codec, and are represented by different file extensions (i.e. .MP4, .MOV, etc).

To help put it into perspective, an article by api.video offers a great analogy:

Imagine a shipping container filled with packages of many types. In this analogy, the shipping container is the format, and the codec is the tool that creates the packages and places them in the container.

Containers are responsible for the “packaging, transport, and presentation” of the information.
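
To see the codec/container split for yourself, the sketch below uses ffprobe (part of the ffmpeg suite, assumed to be installed) to list what a container file is actually carrying; the file name is a placeholder.

```python
# Hedged sketch: inspect what a container holds, using ffprobe's JSON output.
# Assumes ffprobe is installed; "output.mp4" is a placeholder file name.
import json
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "error", "-show_format", "-show_streams",
     "-print_format", "json", "output.mp4"],
    capture_output=True, text=True, check=True,
)
info = json.loads(result.stdout)
print("Container:", info["format"]["format_name"])
for stream in info["streams"]:
    print(stream["codec_type"], "stream encoded with", stream.get("codec_name"))
```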

Frame Rate

The frame rate is the number of video frames displayed in a single second. The more frames shown per second — or the higher the number — the more realistic the visual quality is. High frame rates look best with sporting events or action-based visuals. On the other hand, the lower the number, the more “cinematic” a video feels.

Bit Rate

Generally measured in kilobits per second (Kbps) or megabits per second (Mbps), the bit rate is the amount of data a file uses per second to store audio-visual information. A higher bit rate contains more data, which means it takes more power to process and requires a faster internet connection for playback. In general, a higher bit rate also means better quality files.
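
Because bit rate is simply data per second, a quick back-of-the-envelope calculation shows how it translates into file size; the numbers below are illustrative, not recommendations.

```python
# Rough file-size estimate from bit rate and duration (illustrative numbers).
bit_rate_mbps = 8                    # an 8 Mbps video stream
duration_seconds = 2 * 60 * 60       # a two-hour feature

total_megabits = bit_rate_mbps * duration_seconds
size_gigabytes = total_megabits / 8 / 1000   # 8 bits per byte, 1000 MB per GB
print(f"{size_gigabytes:.1f} GB")            # about 7.2 GB
```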

Lossy Compression

Lossy compression is one of two main types of compression formats. It reduces file sizes by permanently deleting unnoticeable data — usually redundant information. This process frees up storage space for additional files; however, once it is done, the data cannot be restored to its original form.

Lossless Compression

Lossless compression is the other main type of compression format. With lossless compression, the technique minimizes the file size without losing quality — data is “rewritten in the same manner as the original file.” While the file size reduction is less than lossy, original data can be recovered if necessary.
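
A tiny example with Python’s built-in zlib module shows the defining property of lossless compression: the decompressed data matches the original exactly.

```python
# Lossless round trip: the original data is recovered bit-for-bit.
import zlib

original = b"frame header " * 1000          # highly redundant sample data
compressed = zlib.compress(original)        # much smaller than the original
restored = zlib.decompress(compressed)

print(len(original), "->", len(compressed), "bytes")
assert restored == original                 # exact recovery, by definition
```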

While there are many terms out there, understanding these basic phrases will help lay the foundation for the more complicated industry jargon to come.

Interested in diving deeper into all things video compression? Take a look at the November/December SMPTE Motion Imaging Journal!

Erminia Fiorino
Monday, September 9, 2019 - 11:23

As our digitally connected world continues to advance, it’s not without setbacks for many industries. Trust in social media continues to decline, content piracy is an ongoing issue, and the inability to properly track royalties across streaming services is growing. However, one system that many in the media and entertainment (M&E) industry believe is worth exploring to restore trust and mitigate (or completely resolve) these problems is blockchain.

Before highlighting how blockchain could change the game for M&E, let’s go over some quick basics.

The Basics of Blockchain

Blockchain is a secure, encrypted, digital distributed ledger that is chronologically time-stamped with unalterable data. Rather than being owned by a single entity or central authority (like a bank), it’s decentralized and shared with all parties in a given network. This allows peer-to-peer exchanges of value. Any transaction made is recorded, verified through a consensus method, and viewable by all participants.

Each “block” refers to a specific data structure — meant to keep all related transactions distributed to all users or computers (nodes) — within the network. The “chain” refers to an arrangement of blocks in a specified order.

Within each blockchain, there are three key components that make up the highly sought-after concept.

Three Key Components

1. A Peer-to-Peer Network of Nodes
As mentioned previously, rather than having a centralized authority where information is controlled by a select few, blockchain operates under a decentralized, peer-to-peer network. Every participant in the blockchain network is in charge of maintaining, updating, and auditing the system, ensuring validity and security.

2. Cryptography
To anonymously secure and verify transactions (blocks), cryptography is used. Once a block is cryptographically sealed, it can no longer be copied, edited, or deleted. The data becomes immutable.

3. A Consensus Model
The consensus model explains how a blockchain is governed and operated, listing rules under which a new block can be added to the chain. Many different models exist, but the most common are Proof of Work, Proof of Stake, and Proof of Authority. These characteristics are especially favorable for the M&E industry because of the promise of data security, trust, and the ability to shift power back to copyright holders. So, here’s how it works.

How it Works

Once a blockchain system has been created, a transaction can be requested. After a request, a block representing the transaction, or data, is developed. From here, the block is broadcasted to the peer-to-peer network for verification. Once the block is verified, it gets added to the chain, sealing the data so it can’t be altered. Finally, the transaction is complete.
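
To tie the pieces together, here is a toy Python sketch of the structure described above — blocks linked by SHA-256 hashes and sealed with a simple proof of work. It is a teaching aid only, not a production blockchain, and the transaction contents and difficulty setting are made up.

```python
# Toy blockchain sketch: blocks linked by hashes, sealed with simple proof of work.
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's contents (order-stable JSON) with SHA-256.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transaction, difficulty=3):
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "transaction": transaction,
        "previous_hash": block_hash(chain[-1]) if chain else "0" * 64,
        "nonce": 0,
    }
    # Simple proof of work: find a nonce whose hash starts with N zeros.
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    chain.append(block)
    return block

def is_valid(chain):
    # Any edit to an earlier block breaks every hash link after it.
    return all(
        chain[i]["previous_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, {"title": "Episode 101", "royalty_split": {"studio": 0.7, "writer": 0.3}})
add_block(chain, {"title": "Episode 102", "royalty_split": {"studio": 0.7, "writer": 0.3}})
print(is_valid(chain))   # True; tampering with an earlier block would make this False
```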

This decentralized, secure, immutable network poses many advantages for M&E.

Benefits for Media and Entertainment

With blockchain, a world of new possibilities could be opened for M&E. For example:

  • Reduced Content Piracy — Offering content directly via the platform would allow for secure tracking
  • Smart Contracts — These contracts would remove third parties, and be trackable and irreversible
  • Improved Royalty Distribution — Blockchain would create precision and transparency while facilitating quicker payouts through smart contracts
  • New Pricing Models — Consumers could make micropayments to pay-per-use for a single season of a TV show, instead of a monthly over-the-top (OTT) subscription

However, blockchain is still in the early stages of development and it would be some time before it could integrate with existing structures.

What do you think, will blockchain be a game-changer for media and entertainment?

Some information in this blog was derived from the September Motion Imaging Journal Article, ‘Blockchain in Media — A True Calling?’ To read it, get your hands on this month’s journal!

Wednesday, August 14, 2019 - 12:48

From wider color gamuts to impeccable resolutions, there have been evolutionary advancements to image quality and consumer electronics. While these have added an exciting element to what is seen on screens, the improvements haven’t changed the way TV is watched. Broadcasters are still restricted to a single 16:9 box frame.

In our hyper-connected world, how do we compete with second screens and engage the tech-savvy viewers of today? Some say the answer lies in object-based broadcasting.

From Traditional to Object-Based Broadcasting

Object-based broadcasting (OBB) is the idea of capturing media and breaking it down into individual objects (i.e. audio, video, communication channels, captioning, etc.), each described by detailed metadata. When broadcast, consumer devices (i.e. smart TV, phone, tablet) can then reassemble the information according to their distinct requirements using that metadata.

Essentially, TV programming could go beyond the constraints of a single stream and change depending on the device, environment and consumer preferences to broadcast more flexible, captivating multi-screen experiences.
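
To make that reassembly step more concrete, here is a hypothetical Python sketch; the object fields and device rules are illustrative only, not a published OBB schema.

```python
# Hypothetical sketch of object-based broadcasting: objects carry metadata,
# and each device assembles only what fits its own profile.
media_objects = [
    {"kind": "video", "id": "main_feed", "min_screen_inches": 0},
    {"kind": "video", "id": "onboard_camera", "min_screen_inches": 7},
    {"kind": "audio", "id": "commentary", "language": "en"},
    {"kind": "audio", "id": "crowd_only"},
    {"kind": "captions", "id": "subtitles_en", "language": "en"},
]

def assemble(objects, screen_inches, wants_commentary=True):
    """Pick the objects this particular device should present."""
    selected = []
    for obj in objects:
        if obj.get("min_screen_inches", 0) > screen_inches:
            continue                      # too detailed for a small screen
        if obj["id"] == "commentary" and not wants_commentary:
            continue                      # e.g. a bar that mutes the commentator
        selected.append(obj["id"])
    return selected

print(assemble(media_objects, screen_inches=55))                         # living-room TV
print(assemble(media_objects, screen_inches=6, wants_commentary=False))  # phone in a bar
```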

However, it’s significantly more complex to execute than traditional broadcasting—where media is captured and compiled into a linear segment, displaying the same information across devices (regardless of their individual specs). Still, the upfront labor associated with OBB can produce significant advantages from the outset.

Benefits of OBB

The elasticity of OBB gives more control to both the viewer and the broadcaster, offering benefits for both parties. Below are just a few examples.

For the Broadcaster

●       Targeted ads

●       Automated soundtrack switching based on geographic licensing restrictions

●       Enhanced storytelling

For the Viewer

●       Enlarged on-screen graphics for individuals with low-vision

●       Swapping to sign language presenters for the hearing impaired

●       Automated “catch-up” videos to update a consumer on their favorite show

Despite these benefits, finding ways to integrate the new tools with current workflows has been a challenge, making progress difficult over the years.

However, recent advancements in consumer electronics, mobile networks and internet protocol (IP) systems have allowed some forms of OBB to be tested in the last few years. In fact, a three-year collaborative project known as 2-IMMERSE concluded near the end of 2018 and demonstrated promising progress for the industry.

2-IMMERSE

Funded by the European Union, the 2-IMMERSE project created an open-source platform for object-based, multi-screen entertainment to advance the development of customized experiences. It utilizes a containerized, microservices approach that makes the software lightweight and scalable.

To show how the software could be used, the team created a set of situational experiences to turn into prototypes. Here are just two of the innovative examples:

Note: These are brief descriptions of the possibilities; you can view the highly detailed versions on the 2-IMMERSE website.

1. MotoGP at Home

Imagine a father and son who want to watch a MotoGP race together; the father is an avid fan and the son, a novice. They both turn to the TV displaying the main race and leaderboard, with their own second devices (linked to the TV) in hand. Different data feeds, camera views and videos can be pushed to the second device.

Since the dad is immersed in the sport, he selects the live feed from the on-board bike camera of one of his favorite racers to view on his tablet, in addition to the main screen. The son doesn’t know any of the riders, so he chooses to view the list of general information about each competitor on his smartphone, where the riders shown on the main screen are highlighted. The media on both the tablet and smartphone are scaled to size thanks to the individual packaging and metadata of the objects.

2. Soccer at a Bar

Picture a group of friends arriving at a bar to watch a soccer game. Once inside, they join the bar’s wifi and link their phones to the connected TV. For customers to feel more immersed in the action, the owner has chosen to increase the crowd volume of the game and remove the commentator audio. One friend arrives late and misses a goal, so someone in the group uses their companion screen to replay the action, selecting the camera angle from which it’s viewed.

While these may simply be prototypes of how OBB for multi-screen viewing could be used, 2-IMMERSE did a real trial and evaluation of a MotoGP experience—including assessments from 93 individuals. The results and consumer feedback were overwhelmingly positive. It seems OBB for common use is closer than ever, nearing reality more and more every day.

See the results of the MotoGP trial and dig into the technical specs of the 2-IMMERSE project in the August edition of the Motion Imaging Journal. 

 

Wednesday, August 14, 2019 - 12:18

If the broadcasting industry is to remain competitive against the myriad of streaming solutions, a more agile and flexible architecture is needed to support the move to internet protocol (IP)-based systems for improved interoperability and cost. The traditional monolithic model simply can’t scale efficiently at the volume needed to convert and deliver content to demanding consumers.

The alternative? Microservices

Microservices Defined

Coined in 2011, microservices can be described as an architectural style in which a larger solution can be broken down into granular functionalities to provide a performant business value. It results in the smallest service or process of business capabilities that focuses on a single, thoroughly defined task—and is a variant of service-oriented architecture (SOA).

Breaking a larger process into smaller, well-defined basic elements has fundamental benefits for the media and entertainment industry.

The Benefits of Microservices

Of the many advantages microservices offer, we’ve compiled the top three:

1. Scalability

The most crucial reason to adopt microservices is the ability to scale up and down as needed to offer extreme flexibility. The uniqueness of cloud-based computing lies in its lack of limitations to a single physical machine, meaning no extra hardware is required to scale up (though a virtual machine would be needed).

TVB Europe poses a great example of how increased elasticity could play out for the industry:

“For example, if a streaming service needs to process and distribute a high-priority, next-day-air episodic title, the elasticity of microservices allows them to easily scale resources on-demand to ensure the title is processed and delivered within a limited time window.”

2. Resiliency 

You know what they say about putting all of your eggs in one basket. Microservices are, by design, smaller and more granular workflows that spread your proverbial eggs across multiple baskets to improve resiliency.

Unlike a traditional monolithic approach—where a single failure in the process can disrupt the entire operation—an issue with a specific microservice doesn’t impact the entire architecture and can be repaired quickly. Essentially, it protects organizations from losing complete service capabilities.
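
As a loose illustration of that isolation — a sketch only, since real resiliency comes from running services as separate processes or containers rather than from a try/except — the snippet below shows a chain of hypothetical service calls where one simulated outage doesn’t stop the rest.

```python
# Illustrative sketch: a failure in one microservice call does not halt the chain.
# Service names and the fallback behavior are hypothetical.
def run_chain(asset_id, services):
    results = {}
    for name, call in services:
        try:
            results[name] = call(asset_id)
        except Exception as err:
            # Contain the failure: record it for this step and keep going.
            results[name] = f"failed: {err}"
    return results

def transcode(asset_id):
    return f"{asset_id} transcoded"

def insert_subtitles(asset_id):
    raise RuntimeError("subtitle service unavailable")   # simulated outage

def package_for_vod(asset_id):
    return f"{asset_id} packaged for VOD"

print(run_chain("EP101", [
    ("transcode", transcode),
    ("subtitles", insert_subtitles),
    ("vod_packaging", package_for_vod),
]))
```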

3. Continuous Innovation

By breaking down systems into smaller tasks focused on a single functionality, engineers are able to make continuous updates, enhancements and bug fixes simultaneously without disrupting other service-chains. This leads to reduced modification time, increased overall efficiency and peak performance results.

Additionally, new features can be pushed out more quickly—no more waiting for the rollout of a large system upgrade. The latest standards, such as ATSC 3.0, can be incorporated into the workflow as soon as they’re made available.

While these benefits are significant, there are a few challenges that come with switching to a cloud-based microservice system.

The Challenges of Microservices

Transitioning from Monolithic Models

The large majority of media companies emerged prior to digital alternatives, which means a migration of applications from the monolithic model to microservice-based operations will be needed. By and large, this is not an overnight process and will require guidance from technology suppliers.

The “Grains of Sand Pitfall”

One of the biggest challenges for engineering organizations to overcome when embracing a microservices architecture is what’s known as the “grains of sand pitfall.”

When services get too fine-grained, they become difficult to manage and support. It’s important for each service to perform a specific function and focus on the business capabilities.

Currently, there is no standard outlining basic needs to enable interoperability among vendors within our industry. Standardization would allow users to cherry pick the best services from vendors to build complete systems, suiting the current needs of the user with the ability to modify in the future. Fortunately, a SMPTE Drafting Group has formed to spearhead this initiative.

To learn more about microservices and the SMPTE Drafting Group on Media Microservices Overall Architecture, check out the August Motion Imaging Journal by SMPTE.

 *Information in this article is largely derived from the August Motion Imaging Journal article, “Good Things Come in Small Packages: Microservices for Media and the Need for Standardization,” by Chris Lennon.

 

 

 

Erminia Fiorino
Tuesday, July 2, 2019 - 09:35

The transition to internet protocol (IP) networks has been underway in the broadcast industry for some time. However, only recently have standards been defined that allow true interoperability and efficiency that make the switch from SDI (serial digital interface) to IP a feasible option for migration. As such, distribution is now rapidly evolving from the traditional digital interface to IP-based systems.

Before we get into the benefits of IP, let’s briefly go over SDI. 

What is SDI?

SDI—or serial digital interface—was first standardized by SMPTE in 1989, marking a revolutionary transition from analog to digital video infrastructure. At its core, the system is used for the transmission of uncompressed, unencrypted digital video signals for professional broadcasting. Widely adopted among industry professionals, an SDI router was designed to:

  • Deliver serial digital input signals in a point-to-point, one-way structure.
  • Provide consistent delay and performance, regardless of the number of active signals being routed and switched.

Since its inception, SDI has developed a suite of encompassing standards that support the creation of higher resolutions, color gamuts, and frame rates to meet the needs of the industry and advancing consumer technology.

However, the need for increased bandwidth for the transmission of ultra high definition (UHD) content and better connectivity, combined with the growing over-the-top (OTT) market, simply outweigh the capabilities of what SDI can offer. Due to this, there’s been a shift to internet protocol (IP)-based networks for all-IP studio operations. 

The Move to Internet Protocol-Based Infrastructure

IP networks utilize internet technology ecosystems that offer more agility and flexibility for broadcasters. In the long run, this means decreased costs, improved bandwidth consumption at higher bit rates, and improved overall efficiency. Additionally, IP switches are bidirectional and can manage multiple input and output ports, unlike SDI.

A few other benefits of migrating to an IP-based system include:

  • Cloud-based computing requiring less physical storage
  • One common data center
  • Higher quality streams of live content
  • Future-proofing for scalability and bandwidth

The transition doesn’t come without skepticism, though. The newness of all-IP requires extensive changes in infrastructure that make the decision a costly one for many local and regional broadcasters. There is also a concern of security for housing valuable content in the “cloud.”

Despite these concerns, SMPTE ST 2110 for Professional Media Over Managed IP Networks has gained widespread acceptance within the industry.

SMPTE ST 2110 for Interoperability

SMPTE ST 2110 consists of a suite of standards that detail the “carriage, synchronization, and description of separate elementary essence streams over IP for real-time production, playout, and other professional media applications.” Additionally, ST 2110 is video-format-agnostic, so it can support UHD such as 4K, 8K, and other emerging formats.

Note: For a full list of the included standards and enhanced details about the SMPTE ST 2110 family, visit the SMPTE ST 2110 FAQ page.

The most important thing to note about ST 2110 is its ability to separately route and break apart audio, video, and ancillary data so each stream can be worked on independently, then bring them back together again at the endpoint. Not only does this enable truly flexible workflows when you consider Ethernet’s asynchronous nature, but it also simplifies the process of adding metadata, captions, time codes and more.
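
To picture what “separate essence streams brought back together at the endpoint” means, here is a purely conceptual Python sketch; it models the idea with plain dictionaries and timestamps, not the actual ST 2110/RTP packet formats or PTP timing.

```python
# Conceptual sketch only: separately routed essence streams re-aligned by timestamp.
# This models the idea behind ST 2110, not its actual packet formats.
from collections import defaultdict

video_stream = [{"ts": 0, "data": "frame0"}, {"ts": 40, "data": "frame1"}]
audio_stream = [{"ts": 0, "data": "audio0"}, {"ts": 40, "data": "audio1"}]
anc_stream   = [{"ts": 40, "data": "caption: GOAL!"}]

def align(**streams):
    """Group independently routed essence packets by their shared timestamp."""
    timeline = defaultdict(dict)
    for essence, packets in streams.items():
        for packet in packets:
            timeline[packet["ts"]][essence] = packet["data"]
    return dict(sorted(timeline.items()))

for ts, essences in align(video=video_stream, audio=audio_stream, anc=anc_stream).items():
    print(ts, essences)
```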

What are your thoughts on the transition to all-IP? Share in the comments!

For in-depth details on the technical aspects of IP-based networks, enhanced scalability, and evolving distribution as a whole, take a look at the July Motion Imaging Journal by SMPTE. 

 

 

Erminia Fiorino
Tuesday, July 2, 2019 - 09:27

As viewers’ video consumption habits continue to evolve with over-the-top (OTT) services and broadband distribution, the future of broadcast looks to offer a more hybrid solution for consumers.

Over the past few years, the Advanced Television Systems Committee (ATSC) has been working on a new broadcast system, ATSC 3.0, which will be the first of its kind. A major upgrade to over-the-air (OTA) television, this system will be designed to fit the needs of next-generation TV.

Based on the article, ATSC 3.0 Standards Usher in Next Gen TV Era by ATSC President Madeleine Noland in the July 2019 SMPTE Motion Imaging Journal, let’s dive into what viewers can expect from the future of TV.

What is ATSC 3.0?

ATSC 3.0 is the latest broadcast emission system by ATSC, defining the future of TV standards and offering sweeping innovations compared with the currently deployed system (ATSC 1.0). The idea was to combine broadcast (OTA) and broadband (OTT) service offerings. In theory, it enables distribution to your phone, laptop, tablet and car in addition to your in-home entertainment. The physical layer transmission will offer a 30 percent higher capacity than ATSC 1.0.
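
For a rough sense of what that 30 percent means in practice, the arithmetic below assumes the commonly cited ~19.4 Mbps payload of an ATSC 1.0 channel — a figure that comes from outside this article, so treat the result as illustrative only.

```python
# Rough arithmetic; the 19.4 Mbps ATSC 1.0 payload figure is an outside assumption.
atsc1_capacity_mbps = 19.4
atsc3_capacity_mbps = atsc1_capacity_mbps * 1.30   # the 30 percent uplift cited above
print(f"{atsc3_capacity_mbps:.1f} Mbps")           # about 25.2 Mbps
```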

All in all, this new system should provide a transformative experience for viewers and allow broadcasters more freedom and creativity. Keep reading for a look at five noteworthy features of ATSC 3.0 to get a better idea.

Five Noteworthy Features

1. Evolvability

Due to the ever-changing nature of the industry, a key component of ATSC 3.0 is its ability to evolve in the future. The mechanism enabling this is called the bootstrap, an entry point that signals basic information about the waveform (i.e. signal bandwidth) and sits at the top of the hierarchy of parameters enabling decoding of physical layer frames. In simpler terms, it should be capable of including extensions such as 8K distribution later on.

2. OTA/OTT Hybrid

ATSC 3.0 will be the first TV broadcast standard to use an Internet Protocol (IP) transport system (the language of the world-wide web) rather than MPEG2-TS, merging broadcast and broadband. Essentially, it combines OTA signals with a viewer’s home internet for a hybrid approach, allowing easier access to live content and a more personalized, diverse content selection.

3. Enhanced Picture Quality

The current ATSC 1.0 standard can only go as high as 1080p (Full HD). However, version 3.0 will be capable of distributing ultra-high-definition (UHD) in 4K, which is four times the resolution of Full HD. It will also include image enhancements such as high dynamic range (HDR), wide color gamut (WCG) and a high frame rate (HFR).

4. Immersive Audio

The move from the current standard to 3.0 means the industry is no longer limited to right and left speakers or surround sound. Audio features will be more realistic and lifelike, with the ability to support object-based, three-dimensional sound such as Dolby Atmos or MPEG-H.

5. Advanced Emergency Alert Information

This next-generation TV update opens the possibility for advanced emergency alert information through broadcast and/or broadband. The types of media that could be pushed include evacuation maps, images associated with AMBER alerts, weather radar maps, user-generated videos and possibly even the ability to “wake” sleeping devices during dangerous situations.

The only downside is that ATSC 3.0 is not backward compatible, meaning an external converter box or ATSC 3.0 capable TV will be needed to use these features. However, a new study in which consumers were able to see and hear how ATSC 3.0 could enhance their listening and viewing experiences shows promising signs of adoption regardless of the compatibility caveat. A whopping 80 percent of consumers said they were either “interested” or “very interested” in purchasing an ATSC 3.0 capable TV or add-on device, so clearly viewers are attracted to the features despite the minor stipulation.

For more technical, in-depth specs on ATSC 3.0, check out the July Motion Imaging Journal by SMPTE.

Erminia Fiorino
Thursday, June 6, 2019 - 09:05

Those futuristic holographic projections seen in an array of your favorite science fiction films—from Star Trek to Iron Man—may be near reality. If the movies translated to real life, the projections would be using a “light-field (or holographic) display”.

Bold creators, engineers, and scientists have been nose deep in developing and testing the technology that would bring true holographic video to life. As one could guess, creating photorealistic, floating images up to par with expected standards is notoriously difficult.

New to the idea of light fields? Don’t worry, we’ll get you up to speed!

What Are Light-Field Displays?

Coined by physicist Andrey Gershun in 1936, a “light field” can be defined as a vector function that “totals all light rays in 3D [three-dimensional] space, flowing through every point and in every direction.” So, a light-field display (LFD) is what would reproduce a window-like, 3D aerial image, without the need for glasses or headwear—allowing the consumer to view the visual representation from any perspective.
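
To make “every point and every direction” a little more tangible, here is a minimal Python sketch of a single light-field sample; the field names are purely illustrative and do not come from any LFD specification.

```python
# Illustrative only: one sample of a light field is a ray with a position,
# a direction, and the color/intensity of the light travelling along it.
from dataclasses import dataclass

@dataclass
class LightFieldSample:
    x: float
    y: float
    z: float            # the point in 3D space the ray passes through
    theta: float        # direction of travel (azimuth, radians)
    phi: float          # direction of travel (elevation, radians)
    rgb: tuple          # radiance carried along that ray

# A light field is the collection of such samples over every point and direction;
# a light-field display tries to reproduce enough of them to look like a window.
sample = LightFieldSample(x=0.0, y=1.2, z=3.0, theta=0.1, phi=-0.05, rgb=(0.8, 0.7, 0.6))
print(sample)
```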

From Science Fiction to Reality

The idea itself dates back over a century to the concept of Integral Photography by Gabriel Lippmann. He proposed the initial method of light-field capture to create images that would act more like a window. Lippmann’s forward thinking preceded the development of the hologram by Dennis Gabor in 1948.

While both were astonishing discoveries and accomplishments for their time, the resources available made it difficult to truly breathe life into realistic, perspective-shifting holography.

Some have experimented with the light field since then to bring the coveted sci-fi experience to fruition, but not without roadblocks. Still, it seems that 2019 is finally the year for light-field displays to move beyond science fiction. If so, LFDs have some pretty amazing benefits for the viewer and the industry.

Light-Field Display Benefits

The main and most obvious benefit that LFDs have to offer is enhanced storytelling through immersive media. Can you imagine watching a holographic light-field cinema in your living room and feeling like your favorite actor is actually there with you?

It would be the richest, most realistic experience viewers could get—a 100 percent representation of filmed content in high resolution. It also would allow the creator to change the integral features of an image after it’s been taken. For example, adjustments to the focus or the creation of 3D pictures from one exposure would both be possible.

Even as we’re on the cusp of such revolutionary technology, there are already some emerging LFD projects in the works.

Emerging LFD Projects

Just two of the most promising innovations currently being developed are by Light Field Lab and Draper:

Light Field Lab

Light Field Lab is a company dedicated to building a holographic ecosystem. Recently, they embarked on an ambitious mission to bring the famed Star Trek Holodeck to life in partnership with a graphics software firm, Otoy. To do so, they’ll be using holographic displays that don’t require headgear for viewing and Otoy’s open-source, royalty-free media rendering format for real-time graphics. Their goal is to bring this experience to professional and consumer markets.

Draper

Engineers at Draper, a research and development organization, designed a light field projector module (LFPM) using “surface acoustic wave (SAW) optical modulators mounted on a printed circuit board and illuminated with precise fiber-optic arrays.” If all goes well, it would provide a truly holographic video experience with technology that supports commercial use.

With that in mind, the Immersive Digital Experiences Alliance (IDEA) has already formed and launched at this year’s NAB Show. It anticipates releasing the first draft of an immersive media specification.

Erminia Fiorino
Monday, June 3, 2019 - 11:43

In our everyday lives, we experience sound in all distances and directions. Our ears are able to detect and differentiate between tone, pitch, volume and location. In short, we hear sound three-dimensionally.

Historically, sound for broadcasting and cinema has been produced two-dimensionally. Up until recently, the ability to record and mix using what’s known as immersive audio (i.e. spatial or 3D audio) was only a discussion among techies.

Before we explore the next generation of sound further, let’s go over the basics.

What is Immersive Audio?

Immersive audio mimics a true-to-life experience by allowing the viewer to hear in all directions. It does this by adding a third dimension of height (sounds overhead) and can involve at least three parts: channels, ambisonics, and audio objects.

Channels

An audio channel can be defined as a collection of audio samples coming from or going to a single loudspeaker or group of loudspeakers. A typical home cinema setup usually includes a 5.1 system to manage the “front, left, right, left rear, right rear, and a subwoofer.” With immersive audio, that layout can now mimic a larger array that includes height.

Ambisonics

Ambisonics produces audio in a full 360-degree acoustic space that moves with the person. For example, if a listener moves their head in a given direction, the audio will change to “reflect the movement.”

Audio Objects

An audio object consists of one audio source accompanied by metadata that conveys the spatial location of a specific sound. The goal is to be able to reproduce the sound from a given direction.
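
Here is a small, hypothetical sketch of what “one audio source plus spatial metadata” might look like in practice; the field names are illustrative and not taken from any particular immersive audio format.

```python
# Hypothetical sketch of an audio object: one sound source plus the metadata
# a renderer needs to place it in 3D space (field names are illustrative).
audio_object = {
    "source": "helicopter.wav",          # the mono audio essence
    "position": {"azimuth_deg": 45,      # to the front-right of the listener
                 "elevation_deg": 30,    # above the horizontal plane (height)
                 "distance_m": 12},
    "gain_db": -3.0,
    "start_time_s": 12.5,
}

# A renderer reads the metadata and maps the object onto whatever loudspeaker
# layout (or headphones) the listener actually has.
print(audio_object["position"])
```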

The results are impressive, yet the broadcast industry has been slow to adopt the next generation format—even with the increased use across virtual reality and on the silver screen. Perhaps the only exception would be with sports broadcasting, which has been making strides to improve realism with these enhancements.

Sports Broadcasting and the Next Generation of Sound

It’s been noted that sports broadcasting is considered the only suitable case for using immersive audio so far; other entertainment and television producers just aren’t seeing the value of it yet. Even if sports broadcasters are willing to take the plunge, there are still some hurdles to overcome.

Capturing sound from high-speed objects can be difficult, and it creates a strenuous manual process for the sound supervisor. However, new object-audio capture equipment and automated workflow systems are being developed; others have already hit the market to tackle these challenges.

In fact, two recent sports broadcasting examples utilizing immersive sound include the National Hot Rod Association (NHRA) and SBS:

1. Dolby Atmos and the NHRA

In 2017, the NHRA looked to Dolby to help bring its televised presence to life, offering a more immersive experience for those at home. After a year of planning, site visits, audio capture, strategizing and testing, a Dolby Atmos system for live viewing was created using a hybrid production strategy. Since then, it’s been implemented for broadcasts.

2. SBS and the 2018 FIFA World Cup

SBS, the South Korean television and radio network, transmitted ultra high definition coverage of the 2018 FIFA World Cup with immersive sound to avid soccer fans watching around the world. Based on the ATSC 3.0 standard, they used Fraunhofer’s MPEG-H audio format—making it the “world’s first regular broadcast of immersive and interactive audio powered by MPEG-H.”

Immersive audio has improved exponentially over the last few years to envelop viewers in hyper-realism. And, it won’t be slowing down any time soon.

Smart TVs are beginning to incorporate immersive sound capabilities, and soundbars are becoming more readily available in the home. As more immersive content arrives and broadcasts are mixed with spatial audio, it will only fuel the growing fascination.

Do you think immersive audio should be more widely adopted? Comment below!

To learn more about immersive media, check out the June 2019 edition of the SMPTE Motion Imaging Journal.

