Lessons from Simone: MoCap Filter
Simone is a seminal virtual reality film, albeit one whose scenario is unlikely to come about. You see, the technologies presented in Simone were developed in isolation, by a single person. Whilst that is plausible for some of them, such as the pure-VR techniques, for others, such as the photo manipulation techniques, this style of development is unlikely. Thus, everything in the film has to be taken with a pinch of salt.
However, there are several aspects of both the technology of VR and its social impact which the film carries off very well, and which deserve to stand on their own merits. The MoCap filter the film alludes to is one such aspect.
The MoCap used in Simone
When you look at how the avatar of Simone is constructed, and how film executive Viktor Taransky (Al Pacino) controls her movements, it is clear that he is using a form of MoCap technology, and that all of Simone's movements are at least based on MoCap animation.
Yet, there is a subtle difference. Whereas Viktor's movements are not always feminine, Simone moves with a permanent feminine grace. It is an oddity that bears commentary.
Part of the issue is that Simone's body has a slightly different boning structure to Viktor's, and that her joints do not move in quite the same way as his. However, that is not the full story. Previous attempts to impart the same MoCap data onto male and female avatars alike have resulted in both avatars moving in exactly the same way - a man moving like a woman, or a woman moving like a man, depending on who was driving the MoCap.
Clearly then, there is more to the interface than the simplest explanation. There is what can best be described as a MoCap filter unit attached to the avatar herself, programmed to filter movement so that it is appropriate for that avatar. It still allows the limbs to move in the way directed, and to the same positions, but it adds a certain feminine grace, adjusting the movements just a little to better fit that particular form.
In essence it functions like a procedurally blended animation, but using live motion capture as the source. The AI module is located on the user's computer system, and is not very complex at all - a simple tree-based expert system is likely all that is required. The databases are part of the avatar.
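The core blending step such a filter would perform can be sketched very simply: take the live capture, and nudge each joint a little toward the avatar's characteristic pose without overriding the gross movement. The names, angles, and blend weight below are purely illustrative assumptions, not anything specified by the film.

```python
from dataclasses import dataclass

@dataclass
class JointSample:
    angle: float   # captured joint angle, in degrees
    speed: float   # angular speed, in degrees per second

def filter_joint(captured: JointSample,
                 characteristic_angle: float,
                 blend: float = 0.2) -> float:
    """Blend the live capture slightly toward the avatar's
    characteristic pose, leaving the gross movement intact."""
    return (1.0 - blend) * captured.angle + blend * characteristic_angle

# An elbow captured at 70 degrees, nudged toward the avatar's
# characteristic 80-degree carriage:
print(filter_joint(JointSample(angle=70.0, speed=30.0), 80.0))  # 72.0
```

The key design point is that the user's motion always dominates the blend; the avatar's data only ever contributes the "certain grace" on top.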
One of them, the database of limb movement limitations, is already part of many more advanced boned avatars, as it is used to mark the limits of movement for each joint where bones meet. In addition, it lists the expected movement speed in normal usage, ideal for detecting when a movement is quick and clumsy, and perhaps needs a little filtering.
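A minimal sketch of such a limitations database, assuming a per-joint record of range plus a normal-usage speed: angles outside the range are clamped, and anything faster than the expected speed is flagged for extra filtering. The joint names and figures are invented for illustration.

```python
# Hypothetical per-joint limitations database:
# joint name -> (min angle, max angle, normal-usage speed), degrees / deg per s
JOINT_LIMITS = {
    "elbow": (0.0, 150.0, 120.0),
    "knee":  (0.0, 140.0, 150.0),
}

def check_joint(joint: str, angle: float, speed: float):
    lo, hi, normal_speed = JOINT_LIMITS[joint]
    clamped = min(max(angle, lo), hi)        # enforce the joint's range
    needs_filtering = speed > normal_speed   # quick and clumsy?
    return clamped, needs_filtering

# An elbow bent past its limit, far too fast:
print(check_joint("elbow", 160.0, 200.0))  # (150.0, True)
```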
The second database is more a set of character constraints, little idiosyncrasies that the avatar has; the way she touches her hair, or how she lifts her elbow when raising her arm, that kind of subtle thing. Even the way she stands at rest. In principle, it is little different from a modern avatar sequence list.
The problem is in the MoCap, and working out which sequence - or several concurrent sequences - the user is going for at the time. This is why a small AI module is required, to analyse the positional input and work out what is happening.
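The recognition step the AI module performs could be as simple as matching the incoming pose against the avatar's stored idiosyncrasy sequences and picking the closest. The sequence names and pose features below are invented for illustration, a toy stand-in for the expert system the text describes.

```python
# Hypothetical idiosyncrasy database: sequence name -> pose features
# (hand height in metres, normalised hand-to-head distance).
SEQUENCES = {
    "touch_hair": {"hand_height": 1.6, "hand_to_head": 0.1},
    "lift_arm":   {"hand_height": 1.3, "hand_to_head": 0.5},
    "at_rest":    {"hand_height": 0.8, "hand_to_head": 0.9},
}

def match_sequence(pose: dict) -> str:
    """Return the stored sequence whose features are closest
    to the observed pose (sum of squared differences)."""
    def dist(ref):
        return sum((pose[k] - ref[k]) ** 2 for k in ref)
    return min(SEQUENCES, key=lambda name: dist(SEQUENCES[name]))

# A hand raised high, close to the head, reads as hair-touching:
print(match_sequence({"hand_height": 1.55, "hand_to_head": 0.15}))
# touch_hair
```

A real filter would of course track motion over time rather than single poses, and would allow several sequences to blend concurrently, as the text notes.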
It is not that surprising really, and it is not a great stretch to picture each avatar in a large VR having such a filter attached, in much the same way as each avatar now has its own custom sequence file options. The filter does not prevent someone from moving in very weird ways, but it adds little subtle movements to improve the fit when the movement is not quite perfect.
It is not too great a leap, for example, to imagine a slightly more sophisticated filter adding proper leg movements to an avatar when they are not being supplied by a user who has no natural legs, and so cannot use motion capture to animate them properly.
On a more everyday scale, it would allow a person without 'breeding' to correctly wear an upper-class avatar without embarrassing themselves unduly - the filter takes the rough edge off the movements, whilst the movements themselves are entirely those of that individual.
It is of course quite likely that, over extended periods of immersion with a given avatar, the person would subconsciously adapt their movements to match the filter, to the point where it was no longer really necessary for that particular avatar. Then, when switching to another avatar, the filter for that one takes over, subtly editing out all the little quirks that no longer fit.