Roleplay Avatars with Emotion

On the 29th September 2006, researchers at Bournemouth University, UK, publicised the creation of a 'super emoticon' system for online chat. From the way the research was presented, it was obvious this was aimed at instant messaging and chatroom use - casual chat, in other words.
However, many other communications systems are text-based, and could also benefit, albeit without actual photographs. Text-based MUDs (Multi-User Domains), MUSHes (Multi-User Shared Hallucinations) and many graphical MUVEs (Multi-User Virtual Environments) all stand to gain from such technology.
The problem is, of course, that in these environments a photo rooted in physical reality is usually the last thing you want to see. Seeing a pimply geek smiling at you when, according to the roleplaying MUD's text, you should be seeing "Sir DarkBringer, Knight of Evil", is not exactly conducive to an immersive environment.
Perhaps even worse, such technology may make people who already have low self-esteem about their personal appearance feel less inclined to use non-RP systems, if everyone else is using smiling or scowling photographs as emotion indicators.
Fortunately, the technology Bournemouth uses is easily adaptable to other purposes.
According to the press blurb for this new technology, a user first uploads a picture of their face with a neutral, blank expression. The next stage is to mark out the key points on the face with the mouse - something akin to drawing an image map. The key points to mark are the ends of the eyebrows, the corners of the mouth, and the edges of the eyes and lips.
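In code, the marked key points amount to little more than a set of named pixel coordinates. A minimal sketch of how such an annotation might be stored follows; the landmark names and coordinate values are assumptions for illustration, not the system's actual format, though the article's list of points (eyebrow ends, mouth corners, eye and lip edges) is followed:

```python
# Hypothetical landmark set for a neutral face, as (x, y) pixel
# coordinates on the uploaded image. Names and values are illustrative.
neutral_landmarks = {
    "left_brow_outer":  (80, 100),
    "left_brow_inner":  (120, 95),
    "right_brow_inner": (180, 95),
    "right_brow_outer": (220, 100),
    "left_eye_outer":   (90, 120),
    "left_eye_inner":   (125, 122),
    "right_eye_inner":  (175, 122),
    "right_eye_outer":  (210, 120),
    "mouth_left":       (120, 200),
    "mouth_right":      (180, 200),
    "upper_lip":        (150, 190),
    "lower_lip":        (150, 212),
}
```

Everything the system does afterwards is a geometric transformation of these few points, with the surrounding image pixels warped along with them.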
This is because the technology builds on facial morphing, a relatively old technique: you mark the key points of two faces - the eyebrows, nose, edges of the mouth, ears and chin - then have a computer interpolate between the two images over a series of frames, so one seamlessly blends into the other.
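The core of such morphing is plain linear interpolation between corresponding key points, with the image warped to follow. A minimal sketch, assuming both faces carry the same landmark names:

```python
def interpolate(a, b, t):
    """Linearly blend two landmark dicts: t=0 returns a, t=1 returns b.
    Intermediate t values give the in-between frames of a morph."""
    return {
        name: ((1 - t) * a[name][0] + t * b[name][0],
               (1 - t) * a[name][1] + t * b[name][1])
        for name in a
    }
```

For instance, `interpolate(face_a, face_b, 0.5)` gives the point positions for the halfway frame; a real morph would additionally cross-fade the warped pixel data between the two images.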
This new system is similar, but instead of morphing one face into another, it uses a facial image database to determine the correct spatial relationship between different facial features for different emotions.
It distorts the points and areas marked by the user, shifting them towards a desired emotion. For a surprised face, it takes the neutral expression and morphs the image - opening the mouth, widening the eyes, lifting the eyebrows. Not perfect, perhaps, but enough for people to recognise the emotion.
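Conceptually, each emotion then boils down to a set of offsets applied to the neutral landmarks. The sketch below is illustrative only - the real system derives its target positions from a facial image database rather than hard-coded values:

```python
# Hypothetical per-landmark offsets (in pixels) for a "surprised"
# expression: eyebrows lift, lower lip drops to open the mouth.
SURPRISE_OFFSETS = {
    "left_brow_outer":  (0, -10),
    "right_brow_outer": (0, -10),
    "lower_lip":        (0, 15),
}

def apply_emotion(neutral, offsets):
    """Shift each marked point by its emotion offset; points without
    an offset stay where they are."""
    return {
        name: (x + offsets.get(name, (0, 0))[0],
               y + offsets.get(name, (0, 0))[1])
        for name, (x, y) in neutral.items()
    }
```

The warped image is then generated from the shifted points, exactly as in an ordinary face-to-face morph.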
The key to adapting this technology is that the uploaded photos do not have to be photographs. Because it uses a 'facial image database' to morph the images, all it expects is a vaguely humanoid face. Provided a CGI (computer-generated imagery) face rendered from a 3D model is presented straight-on to the viewpoint and screenshotted, it can be submitted in lieu of a photographed face. This allows use of facial expressions with whatever face you might desire. Smiling elves, grimacing ogres, scared kender, and glowering Knights of Evil - all are just as possible as a normal face.
The limitation, of course, is that the image has to be human, or at least humanoid. Try to send a dragon's face through the process, for example, and the result is not going to move even remotely naturally - it is likely to look very, very weird indeed.