
The Psychology of Immersive Media

October 29, 2020

The rise of immersive media technologies and virtual reality entertainment concepts has led to the media industry’s increased focus on trying to understand the mysteries of human perception. Reams of research and experimentation in this arena have drawn many industry experts to the question, and Scott Daly, a bioengineer on Dolby Laboratories’ technical staff, is among them. Daly emphasizes that the most basic thing he has learned is that one must first understand what, exactly, “immersion” means in entertainment.

“In common language use, it’s [not very specific],” he says. “After all, one can become ‘immersed’ in a book. The general term doesn’t refer to any specific technology. And [even in the entertainment space], you can get ‘immersed’ in various ways—head-mounted displays or CAVE [Cave Automatic Virtual Environments], for example. Plus, you can have immersive audio, whether or not [the picture] is immersive or even present.”

“However, we don’t have a good replacement for the term. There is ‘engagement,’ which some people prefer. That’s a mixture of how much you are paying attention to something and how much it might be affecting you emotionally—how invested you are in the content. But that term has problems because many people use it to refer to how long you’re willing to wait for [streaming content] to buffer, or how often you’re willing to return to the same content, for example. So really, we have to sidestep simple terms and look at the psychology of immersive media, relating immersiveness to neuroscience, psychophysics, and more.”

Daly suggests that immersive experiences, such as virtual reality systems, should find ways to mimic how human brains process and become emotionally impacted by various experiences. Along those lines, the science of mirror neurons is of particular interest, especially for a research group at Dolby.

Mirror neurons are specialized brain circuits or systems that connect different sensory portions of the brain to the “premotor cortex, which helps control the body,” Daly explains. “In this case, cells in those circuits are uniquely [triggered] both when a person acts and when a person observes a particular action, thus ‘mirroring’ behavior the person has seen.” Daly adds that this ability to “connect visual, auditory, and body behavior in such a way as to enable the transfer of a mental state from one person to another” is what allows a person to become immersed in another person’s point of view, as a starting point for communication.

If virtual reality systems could replicate such an experience, Daly suggests, then those systems could have a greater emotional and communicative impact, which is, after all, the goal of any personalized entertainment experience.

“The key thing is that mirror neurons show a bi-directional aspect—the concept is mimicry,” Daly states. “In other words, mirror neurons are what allow us to step into the shoes of somebody else. That is a true aspect of immersion. In immersive media, having that experience would allow you to be literally ‘immersed’ in what is happening to another person. Not all goals for immersive media are about emotion, but the reason people are excited about the possibilities is that you can potentially experience a stronger level of emotion. Also, there is the well-known connection between emotion and memory via the hippocampus [portion of the brain], as well as emotion being a strong driver for motion, i.e., action.

“If you can immerse yourself into an experience and a place other than where you are physically, that can be viewed as a type of escapism, and no one can deny the role of escapism in entertainment. So there are different aspects involved with mirror neuron systems that reflect truly personalized immersion. Those would include identification, empathy, communication, disassociation, and, as I mentioned, escapism.”

As a way of illustrating the bi-directionality aspect of how mirror neurons impact the mind-body relationship, Daly points to a 2019 study on how the use of Botulinum toxin—commonly known as Botox—can impact facial expressions and how mirror neurons transmit emotion.

“This is the Botox for wrinkle removal—it basically deadens the neural activity of your facial muscles,” Daly explains. “The study was originally about how Botox impacts facial expressions. The traditional way of thinking about facial expression recognition is that you look at someone’s face, and it shows a particular emotion. Then, you try to figure out what that emotion is—in some cases, it’s easy. In others it’s more subtle, and then there are some expressions that kind of cross over, and you need more context to figure them out. We used to think that you gather information visually, compare it to your learned experience, and then determine the emotion. In this study, however, they also tasked subjects with identifying facial expressions, and they found that people who had Botox procedures performed worse than people who didn’t.

“This was a surprising finding in that it showed that a treatment that impacts your facial expressions also has an effect on your ability to recognize facial expressions in others.”

Daly says this illustrates that part of the natural function of the human brain, among other things, is to give humans the capacity to become immersed in what is happening to other people. As it relates to immersive media, this means that the more a subject can “escape” into the experience of another person in a virtual-reality scenario, for instance, the stronger the emotions content creators can elicit from participants.

Daly points out that a flood of 21st-century technological advances, ranging from 8K monitors with higher dynamic range to sophisticated immersive sound systems to increasingly ergonomic VR interface tools, makes improved and more organic mimicry of real-world responses to content more feasible each day.

He says, for instance, that industry researchers are working hard to develop lighter-weight head-mounted display goggles more akin to the size, shape, and weight of “aviator glasses,” which would be less invasive than current goggle designs, yet “provide as wide a field of view as possible.” Indeed, he says industry professionals are currently working on the development of contact-lens displays and believes progress in that arena “has been pretty startling.”

“Of course, the idea of putting electronics in your eye with a battery—that will take a long time to get FDA approval,” he adds. “But it does illustrate the kind of innovation going on to address the [drawback] of having to wear bulky headsets. Companies will develop such things, and then consumer enthusiasm will determine if they are viable. The same idea is true of the kind of high-resolution display wall systems that even provide reactive lighting, which is being developed for immersive CAVE systems. Some of them are quite remarkable, but in that area, the prices are not yet reasonable. But the point is—such things are possible, and prices will eventually drop.”

A significant roadblock to this pursuit, however, is the ongoing reality that some neuroscience problems involving the brain’s perception system can stand in the way. Subtle or sometimes extreme discomfort can occur as the brain tries to adjust itself to the manufactured techniques required to insert a person mentally into a virtual world. This is an issue for some people even with standard 3D stereoscopic imagery and remains an even bigger challenge in the evolution of virtual reality systems.

It’s also an area that Daly has been researching, and he explains that much of this problem revolves around a phenomenon known as vergence-accommodation conflict (VAC)—essentially a strain on the brain when it receives mismatched cues between the distance at which the eyes converge on a 3D object (vergence) and the distance at which the eyes must focus to see it sharply (accommodation). In other words, the eyes are asked to converge on a virtual object at one distance while focusing at the different, fixed distance of the display itself.
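
As a rough, hypothetical illustration of that mismatch (not drawn from Daly’s remarks), the conflict is often described in diopters: the difference between the reciprocal of the distance at which the eyes converge and the reciprocal of the fixed focal distance of the display optics. A minimal Python sketch, with an assumed interpupillary distance:

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle between the two eyes' lines of sight when they converge on a
    point at distance_m, assuming a typical 63 mm interpupillary distance."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

def vac_mismatch_diopters(vergence_dist_m, focal_dist_m):
    """Vergence-accommodation conflict in diopters: the gap between where
    the eyes converge and where the display optics force them to focus."""
    return abs(1 / vergence_dist_m - 1 / focal_dist_m)

# A virtual object rendered 0.5 m away on a headset whose optics are
# focused at a fixed 1.5 m:
print(vergence_angle_deg(0.5))          # ~7.2 degrees of convergence
print(vac_mismatch_diopters(0.5, 1.5))  # ~1.33 diopters of conflict
```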

“Some of the newer head-mounted displays are starting to maneuver around these kinds of problems, but many people who have tried VR—focusing their eyes at one depth while their vergence is at a different depth—have experienced headaches, fatigue, and nausea,” Daly says. “And the level of self-motion can be another related problem. Even if you have a system that does not present the vergence-accommodation conflict, you can still experience strong nausea effects with motion due to another type of conflict called the ocular-vestibular mismatch. I’ve experienced that myself, using a VR system to fly objects in VR space, for instance.”

This is an area where Daly suggests that successful VR systems will need to offer personalization options for users, since the level of discomfort varies from individual to individual. Solutions to this challenge, he adds, involve building into a system tools that give the user control over motion, along with various types of what he calls “dampening mechanisms.”

“It’s a simple analogy to how people enjoy amusement park rides,” he says. “Some people are very aggressive and enjoy dramatic rides like steep rollercoasters where they go through rapid changes in acceleration or ones with complex rotations. Others get queasy with such things. So you run into the problem with playing back VR content involving motion of how to satisfy these ranges of people’s experiences. If you tone it down, so no one gets sick at all, then you have an experience that is not very exciting for many users. Many videogames allow the user to control the motion. For a VR system, you can put in dampening mechanisms on the motion—a kind of brakes for certain types of acceleration or rotation, so that every person can use it according to what makes them comfortable.

“That goes back to the topic of personalization—you navigate yourself through a VR environment. If you start to feel queasy, you push a button indicating you are feeling queasy, and the system remembers that and tries to avoid it in the future.”
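
A minimal sketch, assuming a hypothetical per-user comfort profile (none of these names come from Daly or from any particular VR toolkit), of how such a dampening mechanism and the “queasy button” feedback might fit together:

```python
from dataclasses import dataclass

@dataclass
class ComfortProfile:
    """Per-user motion limits, tightened whenever the user reports discomfort."""
    max_accel: float = 8.0       # m/s^2 of virtual acceleration before braking
    max_turn_rate: float = 90.0  # degrees/s of virtual rotation before braking

    def report_queasy(self):
        # The user pushes the "I feel queasy" button; remember it by
        # braking harder on acceleration and rotation from now on.
        self.max_accel *= 0.7
        self.max_turn_rate *= 0.7

def dampen(requested, limit):
    """Clamp a requested acceleration or rotation rate to the user's limit."""
    return max(-limit, min(limit, requested))

profile = ComfortProfile()
print(dampen(15.0, profile.max_accel))  # 8.0: aggressive motion is braked
profile.report_queasy()
print(dampen(15.0, profile.max_accel))  # ~5.6: limits tighten after feedback
```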

Another approach involves what Daly calls “bio-instrumentation”: tools that allow for monitoring a user’s physiological responses. Such techniques have the potential to go beyond simply measuring a user’s discomfort, he adds; they can also help developers gauge the overall quality of a VR experience in terms of how a user physically and emotionally reacts to it.

“Galvanic skin response [GSR] would be one example,” he says. “That’s where you monitor [electrodermal activity] in the skin for changes that would relate to your emotional state. Heart rate, of course, would be another. The idea is to catch physical reactions that might indicate nausea at a very early stage, maybe even before the subject is aware of it.”
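
As an illustration only (the signal names and thresholds here are assumptions, not anything described in the article), such bio-instrumentation could flag trouble by watching for sustained drift above a user’s resting baseline:

```python
def early_discomfort_flag(gsr_samples, hr_samples,
                          gsr_baseline, hr_baseline,
                          gsr_rise=0.20, hr_rise=0.15):
    """Flag possible early discomfort when both electrodermal activity (GSR)
    and heart rate drift well above the user's resting baseline."""
    gsr_mean = sum(gsr_samples) / len(gsr_samples)
    hr_mean = sum(hr_samples) / len(hr_samples)
    return (gsr_mean > gsr_baseline * (1 + gsr_rise) and
            hr_mean > hr_baseline * (1 + hr_rise))

# Recent one-second windows of (made-up) sensor readings:
print(early_discomfort_flag([2.9, 3.0, 3.1], [82, 85, 88],
                            gsr_baseline=2.3, hr_baseline=70))  # True
```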

Daly emphasizes that a host of other nuanced technical issues must be resolved before VR experiences can genuinely be considered anywhere close to the real-world immersive experience to which humans are naturally accustomed. Among them are spatial distortions in 360-degree video, which depend on the movement of the subject using the system. Perception and comprehension of 360-degree imagery, he suggests, can be altered by a subject’s head movements, potentially interfering with the desired impact.

“In many systems, the center of projection won’t [match up with] your head movements, even the smallest head movements,” Daly relates. “So that tends to make you see a world where there is something of an unstable VR environment—some people might say it looks rubbery or not as solid as it feels like it should.

“There is an issue with mapping from a sphere to 2D in VR since the displays are overwhelmingly 2D displays right now. A recent 2020 paper aimed to quantify this problem. They examine a few different ways in which projection mapping might be achieved. There are the equirectangular approach and the rectilinear approach, each with different types of distortions. They used a longstanding metric to visualize these distortions called the Tissot Indicatrix, [which is historically used to show deformations on map projections]. It essentially plots circles from one domain onto another domain, and they become a series of [ellipses] that allow you to see localized distortion. Projectionally, lines are curved, but in the real world, as you rotate your head at any one point in time, they are all straight. That is a key issue with VR—not so much the rendering, but the tolerance. So this is some of the early work to try and figure out the perception of these spatial distortions.”
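
A small sketch of the kind of localized distortion the Tissot Indicatrix visualizes: in an equirectangular mapping, a small circle on the sphere projects to an ellipse stretched east-west by roughly 1/cos(latitude). This is a generic textbook illustration, not the method of the paper Daly cites:

```python
import math

def equirect_tissot_axes(latitude_deg):
    """Approximate semi-axes of the Tissot ellipse for an equirectangular
    projection: unit scale north-south, 1/cos(latitude) scale east-west."""
    a = 1.0 / math.cos(math.radians(latitude_deg))  # east-west stretch
    b = 1.0                                         # north-south scale
    return a, b

for lat in (0, 30, 60, 75):
    a, b = equirect_tissot_axes(lat)
    print(f"latitude {lat:2d} deg: circle -> ellipse with axes {a:.2f} x {b:.2f}")
# At the equator the circle is undistorted; toward the poles it is stretched
# sideways -- the kind of localized distortion Daly describes.
```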

In any case, all of this research, and all of these challenges, highlight the question of whether human beings even want, need, or stand to benefit from such immersive experiences in the first place. This is where the issue of habituation comes in, according to Daly.

“Habituation is our relationship to technology that changes over time,” he explains. For example, he continues, older films feature effects and stunts that audiences of the era in which they were made did not find crude, fake, or simplistic, but which today seem glaringly crude and distracting to audiences who have modern media experiences lodged in their minds.

“Basically, our habituated sensitivity to more realistic imagery makes us more demanding customers as we get used to better and better quality,” he adds. “These are issues that, originally, were considered more aesthetic or awareness issues, but which have now become physiological issues.”

And then, of course, there is the issue of the creative intent of the creators who develop immersive experiences and how “real” such content can be, or should be, in the context of achieving their artistic goals. Daly suggests that pioneering creators of such material are on a continuous learning curve to try and figure out what they can do with this new medium to engage audiences fruitfully.

“A number of directors have commented that with VR, they can’t utilize all the basic cinematography techniques they have used in their craft for a long time,” he says. “So VR content creators are essentially conducting informal psychological experiments over time with their audiences as subjects, and they get feedback from the success or failure of those attempts, and try again. For example, in regular cinema, they have long played with objective and subjective points of view. How well they can do that with immersive media, and whether it even makes any sense to do it, is another question. All of those questions with VR are yet to be determined.”

 


Michael Goldman
