Project LifeLike

Project LifeLike, a collaboration between the Intelligent Systems Laboratory at the University of Central Florida and the Electronic Visualization Laboratory at the University of Illinois at Chicago, is an attempt to create an avatar that completely supplants the physical form of the individual, for remote interaction in both virtual reality and physical space.

It is a long dreamed of possibility for those with significant physical issues that stand in the way of equal opportunity in business ventures, or, in the case of the various dysmorphias, in the way of social interaction. Beyond that, it is an attempt to use avatars for interaction at a distance that goes far beyond what videoconferencing can offer: a videoconference is just a video stream, whereas an avatar body can be tracked and interacted with.

The EVL team, headed by Jason Leigh, an associate professor of computer science, is tasked with getting the visual aspects of the avatar just right. On the surface, this seems like a pretty straightforward task--anyone who has played a video game that features characters from movies or professional athletes is used to computer-generated images that look like real people.

But according to Leigh, it takes more than a good visual rendering to make an avatar truly seem like a human being. "Visual realism is tough," Leigh said in a recent interview. "Research shows that over 70% of communication is non-verbal," he said; it depends on subtle gestures, variations in a person's voice and other variables.

To get these non-verbal aspects right, the EVL team has to take precise 3-D measurements of the person that Project LifeLike seeks to copy, capturing the way their face moves and other body language so the program can replicate those fine details later.

In short, it pushes both mocap and facial recognition technologies to the limit, with the team developing new algorithms and refining old ones to collect accurate data on even the finest movements.
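
To make the idea a little more concrete, the following is only a rough Python sketch of the kind of per-frame record such a capture pipeline might accumulate and later replay on the avatar. The class and field names are hypothetical illustrations, not details taken from Project LifeLike.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FrameSample:
    timestamp: float                                    # seconds since capture start
    face_landmarks: List[Tuple[float, float, float]]    # 3-D facial feature points
    skeleton_joints: List[Tuple[float, float, float]]   # coarse body pose

@dataclass
class CaptureSession:
    subject: str
    frames: List[FrameSample] = field(default_factory=list)

    def add_frame(self, sample: FrameSample) -> None:
        self.frames.append(sample)

    def landmark_velocity(self, i: int, j: int) -> Tuple[float, float, float]:
        # Finite-difference velocity of landmark j between frames i-1 and i:
        # the kind of fine-movement signal the avatar would later reproduce.
        prev, cur = self.frames[i - 1], self.frames[i]
        dt = cur.timestamp - prev.timestamp
        px, py, pz = prev.face_landmarks[j]
        cx, cy, cz = cur.face_landmarks[j]
        return ((cx - px) / dt, (cy - py) / dt, (cz - pz) / dt)

Storing raw landmark trajectories like this, rather than only a finished mesh, is what allows the subtle timing of a gesture to be reproduced later.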

The ISL team, headed by electrical engineering professor Avelino Gonzalez, focuses on applying artificial intelligence capabilities to the avatars. This includes technologies that allow computers to recognise and correctly understand natural language as it is being spoken, learning new words and phrases on the fly.

An eventual derivative goal is to have the avatar respond as the person would, with a similar level of linguistic competency. This would be ideal for low-level tasks such as making appointments or engaging in small talk, without the person actually taking part in the conversation. One possible use is to extend business hours beyond the hours the person is actually present. No expectation is placed on the avatar carrying out that person's mid- or high-level capabilities.
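
As a toy illustration of the "learning new words on the fly" idea combined with canned replies for low-level tasks such as appointment making, a drastically simplified agent might look like the Python sketch below. This is not the ISL team's system; every name and response in it is invented for illustration.

import re
from collections import Counter

class TinyDialogueAgent:
    def __init__(self):
        self.lexicon = Counter()    # word -> number of times heard so far
        self.intents = {
            "appointment": "I can pencil that in. What day works for you?",
            "hours": "The office is open nine to five, Monday to Friday.",
        }

    def observe(self, utterance: str) -> None:
        # "Learn" every word in the utterance by adding it to the lexicon.
        for word in re.findall(r"[a-z']+", utterance.lower()):
            self.lexicon[word] += 1

    def respond(self, utterance: str) -> str:
        self.observe(utterance)
        for keyword, reply in self.intents.items():
            if keyword in utterance.lower():
                return reply
        return "Could you tell me a little more about that?"

agent = TinyDialogueAgent()
print(agent.respond("Can I make an appointment for Tuesday?"))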

One task the AI is being groomed for is recreating the personalities of historical figures, building up a repertoire from base material supplied by actors. The AI would integrate all of those actors' contributions into a single personality and then drive that personality in its own direction. This could give students the ability to engage in conversation with an Albert Einstein or a Winston Churchill with a reasonable degree of both accuracy and believability. In these cases, once exposed to enough information to form its personality matrix, the avatar would continue on its own, perhaps chatting with a thousand students at once.
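
To sketch how material from several actors could be pooled into one persona and then drawn on in conversation, here is a deliberately simplified Python example. The actors, topics and lines are all invented for illustration and say nothing about how the project actually models personality.

import random
from collections import defaultdict

def build_persona(contributions: dict) -> dict:
    # contributions maps an actor's name to lines of the form "topic: text".
    persona = defaultdict(list)
    for actor, lines in contributions.items():
        for line in lines:
            topic, _, text = line.partition(":")
            persona[topic.strip().lower()].append(text.strip())
    return persona

def answer(persona: dict, topic: str) -> str:
    # Pick any contributed line on the topic, regardless of which actor supplied it.
    lines = persona.get(topic.lower())
    return random.choice(lines) if lines else "Let us think about that together."

persona = build_persona({
    "actor_a": ["relativity: Time does not pass at the same rate for every observer."],
    "actor_b": ["relativity: Imagine chasing a beam of light and asking what you would see."],
})
print(answer(persona, "relativity"))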

The end goal, Gonzalez says, is that a person conversing with the avatar will have the same level of comfort and interaction that they would have with an actual person. Gonzalez sees the aims of Project LifeLike as fundamental to the field of artificial intelligence.

"We have applied artificial intelligence in many ways, but if you're really going to implement it," Gonzalez said in a recent interview, "the only way to do it is to do it though some sort of embodiment of a human, and that's an avatar."

The Project LifeLike team demonstrated the technology this past winter at the NSF's headquarters in Arlington, Virginia. The team gathered motion and visual information on an NSF staff member, and gave the avatar system information about an upcoming NSF proposal solicitation. Other people were able to sit and talk to the avatar, which could converse with the speaker and answer questions about the solicitation. Colleagues of the NSF staffer were instantly able to recognise who the avatar represented and commented that it captured some of the person's mannerisms.

References

The Next Best Thing to You

Project LifeLike's realistic Avatars

Towards Lifelike Computer Interfaces that Learn
