Instantaneous Visual Virtual Face

One of the major issues with any virtual form is facial expression. Traditionally, getting a virtual face to match your physical intent for expression in real time was a lost cause. Even for big-budget film making, CG overlays had to be constructed frame by frame, by hand: enormously time-consuming, ludicrously expensive and completely useless for real-time work.

Now, however, we have an alternative. Pattern-matching AI software, derived loosely from the machine-vision algorithms used to identify faces in security CCTV footage, has been created by computer scientist Barry-John Theobald at the University of East Anglia in the UK and Iain Matthews, formerly at Carnegie Mellon University and now at Weta Digital in Wellington, New Zealand.

Both were approached by psychologists from a number of universities to see if it was possible to create a real-time overlay of a face, such that the CG could look completely different if required, yet at the same time perfectly mirror the original face in terms of expression and mannerisms, even going so far as to sync with spoken words.

The need for the research, from the psychologists' viewpoint, was obvious. If it were possible to disentangle identity from physical appearance, it would open the floodgates for all manner of research on the nature of human-to-human communication. If a person's gender was completely flipped, yet they remained the same person underneath, would others react to them any differently? If their race was changed, would there be differences again?

Of course, such a technology would also be utterly invaluable to roleplayers, film producers, alternate life seekers and transitioning individuals.


Top Left: The original volunteer chats away.

Top Right: Her face is scanned and dissected into key areas.

Bottom Left: The grid is used to analyse her facial movements in fine detail, and to contort a CG male face to follow those movements exactly, in real time.

Bottom Right: A second volunteer, conversing with the male face in real time.

Both researchers took up the challenge and, in just a few years, created a system that does precisely that. 'Mapping and manipulating facial expression', the paper derived from their work, has been accepted by the journal Language and Speech at the time of writing.

To create the software, a neural network was trained to recognise facial features and emotional states, building on ground that many other studies have covered before and will cover again. Volunteers of both genders were recorded performing 30 different facial expressions, such as frowning, smiling and looking surprised. For each expression, the positions of key facial features, such as the eyes, nose and corners of the lips, were tracked, as with any morphing software.
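A minimal sketch of this kind of feature tracking, using the freely available dlib 68-point landmark model and OpenCV rather than the researchers' own tracker (the video path and model file here are placeholders, not part of the original work):

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Pre-trained 68-point landmark model, downloaded separately from dlib.net
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def track_landmarks(video_path):
    # Returns an (n_frames, 68, 2) array of feature positions, one row of
    # (x, y) points per frame in which a face was found.
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue  # skip frames where no face was found
        shape = predictor(gray, faces[0])
        pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
        frames.append(pts)
    cap.release()
    return np.stack(frames)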

Once this was done, the footage was used to train the neural network to recognise each face and follow it as it deformed through each expression. The system was not learning the expressions themselves; it was tracking precisely how the face deformed as different parts moved, in real time, into different positions. The large number of volunteers helped accelerate the learning process, so that the software could begin to anticipate and react promptly, even to new faces.
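One simplified way to capture "how the face deforms", rather than the expressions themselves, is a statistical shape model: stack the tracked landmark positions from every recorded frame and extract a mean shape plus a handful of principal deformation directions. The sketch below does this with plain principal component analysis, as an illustrative stand-in for the trained model described above rather than the authors' actual method:

import numpy as np

def build_shape_model(landmark_frames, n_modes=10):
    # landmark_frames: (n_frames, 68, 2) tracked positions across volunteers.
    X = landmark_frames.reshape(len(landmark_frames), -1)   # flatten to (n_frames, 136)
    mean_shape = X.mean(axis=0)
    # Principal components describe how the face deforms around its mean
    _, _, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    modes = Vt[:n_modes]                                     # (n_modes, 136)
    return mean_shape, modes

def encode(frame_landmarks, mean_shape, modes):
    # Project one frame onto the deformation modes -> compact parameter vector.
    return modes @ (frame_landmarks.reshape(-1) - mean_shape)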

Once the software was able to track exactly how any given face deformed in real time, it could generate any number of purely CG faces that still looked realistic and were divided up in the same way. These faces could then be animated in perfect sync with the movements of an actual face being tracked: every wrinkle, every fold, every muscle and smooth area of skin in the CG model distorting exactly as it should, precisely replicating both the expression and the mouth movements of the actual person in real time.
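Continuing the simplified sketch above, retargeting can then be expressed as measuring how the tracked face is currently deformed and applying those same parameters to a CG face that has its own mean shape but a corresponding set of deformation modes (again an assumption-laden illustration, presuming both models share the same landmark layout and number of modes):

def retarget(source_landmarks, src_mean, src_modes, tgt_mean, tgt_modes):
    # How far has the real face deformed away from its own mean shape?
    params = src_modes @ (source_landmarks.reshape(-1) - src_mean)
    # Apply the same deformation parameters to the CG face's shape model
    tgt_points = tgt_mean + tgt_modes.T @ params
    return tgt_points.reshape(-1, 2)  # landmark positions driving the CG mesh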

The result?

A computer-generated face that can be male or female, young or old, of any race, or even non-human, such as a troll or an anthropomorphised animal. That face then moves exactly as the person's physical face does, synced in real time. Whatever expression the person contorts their face into, the software knows the limitations of flesh well enough to make the CG face follow suit. The person's face becomes whatever the system needs it to be, whilst losing none of the nuance of facial expression, and it can be used in real-time conferencing.

References

Dr. Barry-John Theobald

Retargeting Facial Expression and Visual Speech
