SEMAINE, or Sustained Emotionally coloured Machine-human Interaction using Non-verbal Expression, is a research project working towards the distant, long-term goal of creating a true artificial intelligence.
The project aims to create an artificial intelligence capable of understanding non-verbal communication from a human, both conscious and subconscious: body language, facial movements, expressions and twitches, all interpreted correctly without relying on spoken commands.
The aim of the SEMAINE project is to build a Sensitive Artificial Listener: a multimodal dialogue system with the social interaction skills needed for a sustained conversation with a human user. The system will emphasise "soft" communication skills, i.e. non-verbal, social and emotional perception, interaction and behaviour capabilities.
The Sensitive Artificial Listener paradigm involves only very limited verbal capabilities, but has been shown to be suited for prolonged human-machine interaction. In this paradigm, we will build a real-time, robust interactive system perceiving a human user's facial expression, gaze, and voice, and engaging with the user through an Embodied Conversational Agent's body, face and voice.
The agent will exhibit audiovisual listener feedback in real time while the user is speaking, and will take the user's feedback into account while the agent is speaking. The agent will pursue different dialogue strategies depending on the user's state; it will learn to interpret the user's non-verbal behaviour and adapt its own behaviour accordingly.
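The perceive-then-adapt loop described above can be sketched in miniature. The following is a purely illustrative sketch, not code from the SEMAINE system; the state fields, thresholds, and strategy names are all assumptions invented for this example. It shows the core idea: the agent maps the user's perceived non-verbal state to a dialogue strategy, giving only listener feedback while the user holds the floor.

```python
from dataclasses import dataclass

# Hypothetical sketch of the perceive -> decide -> act loop.
# None of these names come from the SEMAINE codebase.

@dataclass
class UserState:
    valence: float    # -1.0 (negative) .. 1.0 (positive), inferred from face/voice
    arousal: float    # 0.0 (calm) .. 1.0 (excited)
    is_speaking: bool # whether the user currently holds the turn

def choose_strategy(state: UserState) -> str:
    """Pick a dialogue strategy from the perceived non-verbal state."""
    if state.is_speaking:
        # While the user speaks, the agent only gives listener feedback
        # (nods, backchannels) rather than taking the turn itself.
        return "backchannel"
    if state.valence < -0.3:
        return "soothe"   # calm an apparently upset user
    if state.arousal > 0.7:
        return "mirror"   # match an excited user's energy
    return "engage"       # neutral state: keep the conversation going

print(choose_strategy(UserState(valence=-0.5, arousal=0.2, is_speaking=False)))  # soothe
```

In a real system the `UserState` would be estimated continuously from video and audio, and the strategy choice would itself be learned from interaction rather than hard-coded.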
1st of April 2008, ongoing
The European Commission has awarded SEMAINE a grant of €2.75 million.