The Face is Not Enough to Convey Emotion
An interesting discovery has emerged from a study by researchers at the Hebrew University of Jerusalem, New York University, and Princeton University: the face is not the primary communicator of emotion. The rest of the body handles that. This is, of course, critical for our virtual environments and their avatars.
In a study published in the journal Science, the researchers present data showing that viewers in test groups were baffled when shown photographs of people undergoing real-life, highly intense positive and negative experiences. When the viewers were asked to judge the emotional valence of the faces they were shown (that is, whether the faces were positive or negative), their guesses were at chance level.
To test the perception of highly intense faces, the researchers presented test groups with photos of dozens of intense facial expressions from a variety of real-life emotional situations. For example, in one study they compared the expressions of professional tennis players winning or losing a point. These pictures are ideal because the stakes in such matches are extremely high in both economic and prestige terms.
To pinpoint how people recognize such images, lead researcher Hillel Aviezer and his colleagues showed different versions of the pictures to three groups of participants: 1) the full picture, with the face and body; 2) the body with the face removed; and 3) the face with the body removed. Remarkably, participants could easily tell the losers from the winners when they rated the full picture or the body alone, but they performed at chance level when rating the face alone.
Ironically, the participants who viewed the full image (face and body) were convinced that it was the face that revealed the emotional impact, not the body. The authors named this effect "illusory valence," reflecting the fact that participants said they saw clear valence (that is, either positive or negative emotion) in what was objectively a non-diagnostic face.
In an additional study, Aviezer and his collaborators asked viewers to examine a broader range of real-life intense faces. These included intense positive situations, such as joy (seeing one's house after a lavish makeover), pleasure (experiencing an orgasm), and victory (winning a critical tennis point), as well as negative situations, such as grief (reacting at a funeral), pain (undergoing a nipple/navel piercing), and defeat (losing a critical tennis point).
Again, viewers were unable to tell apart the faces occurring in positive vs. negative situations. To further demonstrate how ambiguous these intense faces are, the researchers "planted" faces on bodies expressing positive or negative emotion. Sure enough, the emotional valence of the same face on different bodies was determined by the body, flipping from positive to negative depending on the body with which it appeared.
"These results show that when emotions become extremely intense, the difference between positive and negative facial expression blurs," says Aviezer. "The findings, challenge classic behavioral models in neuroscience, social psychology and economics, in which the distinct poles of positive and negative valence do not converge."
For us, the findings clearly demonstrate that current attempts to increase emotional investment by having the avatar's facial expression mimic that of the person speaking will not give as complete a picture of the individual's emotional state as had been hoped. Some means of representing the bodily emotional stance is still required, and if anything this only increases the importance of motion-capture or motion-translation techniques if full immersion is to be achieved when interacting with other users.
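As a purely hypothetical sketch of what this implies for an avatar pipeline (all names and the weighting scheme here are assumptions, not part of the study or any existing system), one could fuse face and body cues so that the body's valence dominates as emotional intensity rises, since the study shows facial valence becomes unreadable at peak intensity:

```python
from dataclasses import dataclass


@dataclass
class EmotionCue:
    valence: float    # -1.0 (negative) .. +1.0 (positive)
    intensity: float  # 0.0 (calm) .. 1.0 (peak intensity)


def fuse_valence(face: EmotionCue, body: EmotionCue) -> float:
    """Blend face and body valence for an avatar (hypothetical scheme).

    The body's weight grows with its intensity, reflecting the finding
    that the face becomes non-diagnostic at high intensity.
    """
    body_weight = body.intensity
    return body_weight * body.valence + (1.0 - body_weight) * face.valence


# At peak intensity an ambiguous face (valence near 0) is overridden
# by a clearly negative body posture, as in the tennis-player photos.
fused = fuse_valence(EmotionCue(valence=0.1, intensity=0.9),
                     EmotionCue(valence=-0.8, intensity=0.9))
print(fused)
```

The point of the sketch is only the shape of the logic: a face-only mimicry system would report the ambiguous facial valence, while weighting in body-tracking data flips the result to match the actual emotional situation.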