Creating Intelligent, Emotive NPCs
Article by Virtual Worldlets Network
Copyright 09/04/2008

One of the greatest issues facing a persistent, immersive, sensorially absorbing virtual environment is finding others to interact with, and having enough life present to make it a believable world. With a landscape that can be many hundreds of square miles across, you are not going to find other human-controlled avatars, save clustered together in small groups. Instead, there is silence.

After a while, no matter how beautiful the landscape or how awe-inspiring the view, the sheer bleakness of it all is going to get to you. If you are wandering alone and have seen no other engaging participants, it all starts to feel like set dressing for Apocalypse Now.

In order for a world to truly come alive, or at least stave off the feeling of isolation, you need to see other signs of life within the environment: other people or animals going about their daily lives, or at least seeming to.

Fundamentally, it's not about reality; it's about the illusion of reality: creating the feel of something being real.

What we need, really, is a large number of artificial-intelligence-controlled non-player characters that, whilst not human, think - or appear to - and react emotively and believably to the actions of one another, and to you.

It is a massive undertaking, lessened very slightly by the fact that the goal is not the creation of a mind, but the creation of an emulation of a mind. Still, it is breathtakingly complex.

Nevertheless, much research is being directed this way, due to the obvious benefits of such a system: something that behaves like a human without being human, and with none of the rights of a human.

It is basically an advanced kind of virtual embodiment.

Embodiment is an approach to Artificial Intelligence (AI) development which maintains that the only way to create general intelligence like the human intellect is to use programs with 'bodies' in the physical world - in essence, robotic systems. Virtual embodiment does exactly the same, but restricts the body to a virtual world. The AI perceives its environment as real, regardless.
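To make the idea concrete, here is a minimal sketch of virtual embodiment: a perceive-reason-act loop whose senses and actions are all calls into a simulated world. The World class and its methods are hypothetical stand-ins for illustration, not any real virtual-world API:

```python
import random

class World:
    """A toy environment; to the agent, this is its entire reality."""
    def __init__(self):
        self.events = ["a bird sings", "an avatar waves", "nothing happens"]

    def observe(self):
        # The agent's 'senses' are just API calls into the simulation.
        return random.choice(self.events)

class EmbodiedAgent:
    def __init__(self):
        self.memory = []  # percepts accumulate, so behaviour depends on history

    def act(self, percept):
        self.memory.append(percept)
        if percept == "an avatar waves":
            return "wave back"
        return "wander"

world = World()
agent = EmbodiedAgent()
for _ in range(3):
    percept = world.observe()                  # perceive
    print(percept, "->", agent.act(percept))   # reason and act
```

The point of the sketch is that nothing in the agent's code refers to hardware; swap the simulated World for robot sensors and motors, and the same loop becomes classical embodiment.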

The first examples have already been produced.

A group of researchers from Rensselaer Polytechnic Institute have created a character which is not human - it is entirely virtual - but which possesses the capacity to hold beliefs and to reason about the beliefs of others.

Unveiled earlier this year, the AI, Eddie, exists as a 4-year-old child within the bot-unfriendly virtual environment of Second Life. Eddie's brain is not so much designed as emergent; he is capable of reasoning about his own beliefs, and draws conclusions in a manner which matches human children of his age.

"Current avatars in massively multiplayer online worlds - such as Second Life - are directly tethered to a user's keystrokes and only give the illusion of mentality," said Selmer Bringsjord, head of Rensselaer's Cognitive Science Department and leader of the research project. "Truly convincing autonomous synthetic characters must possess memories; believe things, want things, remember things."

Such characters can only be engineered by coupling logic-based artificial intelligence and computational cognitive modelling techniques with the processing power of a supercomputer, according to Bringsjord.
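As a loose illustration of the quoted requirements - an agent that possesses memories, believes things, and wants things - consider the toy state below. It is a hypothetical sketch, far simpler than the logic-based architecture the researchers actually use:

```python
from dataclasses import dataclass, field

@dataclass
class SyntheticCharacter:
    memories: list = field(default_factory=list)  # things that happened to it
    beliefs: set = field(default_factory=set)     # propositions it holds true
    desires: set = field(default_factory=set)     # goals it wants to achieve

    def remember(self, event):
        self.memories.append(event)

    def believe(self, proposition):
        self.beliefs.add(proposition)

    def wants(self, goal):
        return goal in self.desires

eddie = SyntheticCharacter()
eddie.remember("the bear was placed in the cabinet")
eddie.believe("the bear is in the cabinet")
eddie.desires.add("find the bear")
print(eddie.wants("find the bear"))  # True
```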

The principles and techniques that humans deploy in order to understand, predict, and manipulate the behaviour of other humans are collectively referred to as a "theory of mind." Bringsjord's research group is now starting to engineer part of that theory, which would allow artificial agents to understand, predict, and manipulate the behaviour of other agents - either as genuine stand-ins for human beings, or as autonomous intellects in their own right.

Eddie's powers of reasoning were tested with a standard psychological trial used on human children: a false-belief test.

In this test, a person enters the room with the child, carrying an object - in this case, a teddy bear. The bear is then placed inside a cabinet, and the person leaves.

A second person then enters the room, takes the bear out of the cabinet, and places it somewhere else - in this case, a fridge. That second person then leaves the room.

After all this, the child is asked where the first person will look when they come back into the room to retrieve the bear.

The correct answer is that they will look in the cabinet, because that is where they last saw the bear. However, four-year-olds have not yet formed a theory of the minds of others, and will usually answer 'the fridge'.

The researchers recreated the same situation in Second Life, using an automated theorem prover coupled with procedures for converting conversational English in Second Life into formal logic, the native language of the prover.
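That conversion step might look, very loosely, like the sketch below: pattern-match a narrow slice of English and emit formal assertions a prover can consume. The patterns and predicate names here are hypothetical; the actual Second Life-to-logic pipeline is far richer:

```python
import re

# Each pattern maps one narrow sentence shape to a logical assertion.
PATTERNS = [
    (re.compile(r"(\w+) puts the (\w+) in the (\w+)"),
     lambda m: f"In({m.group(2)}, {m.group(3)})"),
    (re.compile(r"(\w+) leaves the room"),
     lambda m: f"Absent({m.group(1)})"),
]

def english_to_logic(sentence):
    for pattern, build in PATTERNS:
        match = pattern.search(sentence)
        if match:
            return build(match)
    return f"Unparsed({sentence!r})"  # real systems need far better coverage

print(english_to_logic("Alice puts the bear in the cabinet"))  # In(bear, cabinet)
print(english_to_logic("Alice leaves the room"))               # Absent(Alice)
```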

Eddie was then shown this scenario, without any pre-coding to bias or predispose him in the test - working purely on his mind's algorithms.

Eddie predicted the returning person would look in the fridge, the same answer as any four-year-old. Load in a 'theory of mind' module and re-run the test, however, and he accurately predicts the person will look in the cabinet.
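Here is a toy re-creation of that result: without a theory-of-mind module, the agent answers from the world's true state; with one, it answers from what the observed person could have seen before leaving. This is purely illustrative - a hand-rolled sketch, not the theorem-prover-based system Bringsjord's group built:

```python
def predict_search_location(events, use_theory_of_mind):
    actual_location = None   # where the bear really is
    alice_last_saw = None    # where Alice last saw it before leaving
    alice_present = True
    for actor, action, place in events:
        if action == "put":
            actual_location = place
            if alice_present:
                alice_last_saw = place
        elif action == "leave" and actor == "Alice":
            alice_present = False
    # Without theory of mind, answer from reality; with it, from Alice's belief.
    return alice_last_saw if use_theory_of_mind else actual_location

events = [
    ("Alice", "put", "cabinet"),
    ("Alice", "leave", None),
    ("Bob", "put", "fridge"),
    ("Bob", "leave", None),
]

print(predict_search_location(events, use_theory_of_mind=False))  # fridge
print(predict_search_location(events, use_theory_of_mind=True))   # cabinet
```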

Hurdles

The following are a selection of the hurdles that research of this kind will have to deal with:

The greatest hurdle with current virtual environments is natural language processing: converting emoted actions and speech into logical arguments. This will only get harder as time goes on, as more and more body language and self-expression are added to the VRs we utilise.

Emulating a mind is not as hard as simulating one, but it still requires vast amounts of computational resources. Eddie runs on a Blue Gene/L supercomputer just to vaguely emulate the psychological state of a four-year-old - and that is for just one of these AIs. Currently, computer performance is only doubling every two years. At that rate, it will be decades before an Eddie in every home is possible.
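As a back-of-envelope check on that claim (assuming a roughly 280-teraflop Blue Gene/L and a roughly 50-gigaflop 2008 desktop - both illustrative figures, not numbers from the article):

```python
import math

blue_gene_flops = 280e12   # assumed Blue Gene/L peak: ~280 teraflops
desktop_flops = 50e9       # assumed 2008 desktop: ~50 gigaflops
years_per_doubling = 2     # the doubling rate stated above

doublings = math.log2(blue_gene_flops / desktop_flops)
print(f"about {doublings * years_per_doubling:.0f} years")  # about 25 years
```

Roughly twelve and a half doublings at two years each comes to about 25 years - decades, as stated.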

If an AI 'emulation' finally is able to possess memories, believe things, desire things, and remember things, then is it not, by definition, alive? If so, should laws be drafted to protect its rights?

Resources

Advanced Synthetic Characters/RASCALS Cognitive Architecture

Rensselaer Artificial Intelligence and Reasoning (RAIR) Laboratory

"Bringing Second Life To Life: Researchers Create Character With Reasoning Abilities of a Child"

False Belief Demo in Second Life (MOV, 15.1 MB movie file)