Finger Movement Patterns for Complex Tasks Anticipate the Position for the Task After Next
We are not yet at the stage, for most virtual environments, where accurate rendering of the individual fingers of the hands is necessary. When we do use such detailed rendering, it tends to be motion-capture based, with the detailed finger movements supplied by actors moving their physical bodies.
Nevertheless, we will eventually reach a point where it is common, in both passive and interactive virtual spaces, to animate the hands and fingers using sequence files or real-time kinematics. The more knowledge we have of how the fingers dance and flow in natural tasks, the easier it will be for those tasked with creating the processes driving the fingers to produce realistic, believable finger movements.
As the study we're talking about here demonstrates, finger movement is not as cut and dried as it first appears. The way fingers move, for the exact same procedures, changes based on what the mind behind them plans for those fingers to do next: the digits are moved into slightly different positions even for the same tasks, so as to make it easier to flow into the next task. Chances are this effect is common throughout the body, but at the time of writing, the fingers are the one non-speech-related area where it has been verified.
This work came about thanks to researchers at the Department of Neuroscience at the University of Minnesota. There is a process in speech known as coarticulation. This is the name for the phenomenon whereby the sounds a person makes whilst speaking are subtly altered based on the sounds that come next in the word or sentence being spoken. It is such a common part of spoken language that we all do it all the time, and scarcely stop to think about it.
Martha Flanders, together with a team of three other researchers, was studying American Sign Language, and examining previous work showing that this process extends to the fingers when they are involved in communicating language: each gesture was slightly altered in anticipation of where the fingers needed to be for the sign after the current one.
The question they then asked was whether this anticipatory finger realignment takes place in all complex tasks involving the fingers. That rather than having a set movement for a set task, as we do in VR with sequence files, the movement for each task changes subtly, depending on what the mind controlling those fingers anticipates they are going to be doing next.
In order to test their suspicions, the research team focussed on piano playing: another sound-based task involving lots of complex finger movement. If the anticipatory differences were present there, then they would be present in a vast swath of other tasks, including handling coins, picking up food, or using a computer interface. Any multi-stage task involving the fingers, basically.
To conduct the experiments, ten trained pianists were recruited. Because they were trained and accustomed to playing, their fingers would be moving subconsciously, and swiftly. A novice player would have to think about each key press with their conscious mind, and would not be thinking five or ten keys ahead.
The pianists were each set the same task: to play a specific selection of pieces with their right hands, maintaining a uniform tempo. In total, 14 different excerpts from 11 musical pieces were chosen. Whilst they were playing, their right hands were outfitted with a haptic dataglove capable of monitoring their precise muscle movements and transferring them to a computer for analysis.
The dataglove rig used had seven data channels, detecting the electromyographic, or EMG, data the pianists' hands produced:
A flexor, when used in this context, is a specialist sensor that detects the amount of bend at the joint where it is placed. So from these seven flexors, the dataglove rig was monitoring the degree and direction of flex in the fingers of that hand. By keeping the number of sensors down, the researchers sought to detect slight differences in muscle movement without placing undue weight on the pianist's hand. The resulting rig may not have been the most accurate it could have been, but it was more than sufficient to detect the presence of differently shifting muscles during the same motion: the prey the researchers were looking for.
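As a rough illustration of how a flex channel like this might be read, here is a hypothetical linear calibration from a raw sensor value to a bend angle. The raw range, the 90-degree maximum, and the function names are invented for the sketch; the study's actual rig would have had its own calibration.

```python
def reading_to_bend(raw, raw_min=100, raw_max=900, max_bend_deg=90.0):
    """Linearly map a raw flex-sensor value onto a bend angle in degrees.

    Values outside the calibrated raw range are clamped. All constants
    here are illustrative assumptions, not taken from the study's rig.
    """
    clamped = max(raw_min, min(raw_max, raw))
    return (clamped - raw_min) / (raw_max - raw_min) * max_bend_deg

def sample_glove(channels):
    """Convert one sample of seven raw channel readings into bend angles."""
    return [reading_to_bend(r) for r in channels]
```

In a real rig, the per-channel calibration would be measured rather than assumed, and a stream of such samples over time is what would reveal the subtle positional shifts the researchers were hunting for.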
With this rig, they hit pay dirt. Even when the same notes were played, there were indeed subtle differences, which depended entirely on what the preceding or subsequent note was. Moving from the same preceding note, the pianists' fingers all shifted in similar ways from note A to note B. However, if note A was replaced with a different note, D say, note B was played with the muscles in a slightly different position as they moved from the first note to the second.
Likewise, the finger position note B was struck with was further altered by whether note B was followed by note C or note E. As the researchers' paper puts it:
In other words, the whole concept of a standard seq file goes right out the window.
When working with standard sequence files, there is a technique known as animation blending. It is used to blend two sequences together. Rather than the first sequence stopping and the second sequence starting as independent actions, the end of the first sequence is altered halfway towards the original start of the second, whilst the start of the second is altered halfway towards the original end of the first. Thus they seem to flow seamlessly from one into the next, with the naked eye unable to determine where one ends and the other begins.
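The blending described above can be sketched in a few lines. This is a minimal illustration, assuming a pose is just a flat list of joint angles; the function name and the linear crossfade are illustrative assumptions, not taken from any particular engine.

```python
def blend_sequences(seq_a, seq_b, overlap):
    """Crossfade the last `overlap` frames of seq_a into the first
    `overlap` frames of seq_b, returning one continuous sequence.

    Each frame is a list of joint angles; only the overlapping frames
    are altered, exactly as in standard animation blending.
    """
    blended = list(seq_a[:-overlap])          # untouched body of A
    for i in range(overlap):
        t = (i + 1) / (overlap + 1)           # blend weight rises towards B
        frame_a = seq_a[len(seq_a) - overlap + i]
        frame_b = seq_b[i]
        # Each joint is moved partway between A's ending and B's start.
        blended.append([(1 - t) * a + t * b
                        for a, b in zip(frame_a, frame_b)])
    blended.extend(seq_b[overlap:])           # untouched body of B
    return blended
```

Note that only the join is touched: the bulk of each sequence plays back exactly as authored, which is precisely what the study suggests natural fingers do not do.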
That's what was expected with natural movements. However, for fingers certainly, that is absolutely not what is happening.
Rather, take three sequences: A, B, and C. The whole of sequence B is subtly altered depending on where the fingers start off at the end of A, and where they need to be at the start of C. B is effectively using its sequence file as a guideline only, straying from it wherever it needs to in order to get the fingers into the right position for C.
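One way this whole-sequence alteration could be sketched: measure how far B's first frame is from where A actually ended, and how far B's last frame is from where C needs to begin, then spread those two corrections smoothly across every frame of B. The frame representation and the linear fade are assumptions for illustration, not a claim about how the brain, or any engine, actually does it.

```python
def anticipatory_warp(seq_b, end_of_a, start_of_c):
    """Reshape the whole of sequence B so it begins where A actually
    ended and finishes where C needs to begin.

    Assumes at least two frames, each a list of joint angles. Every
    frame of B is shifted, treating the authored sequence as a
    guideline rather than blending only at the joins.
    """
    n = len(seq_b)
    start_off = [a - b for a, b in zip(end_of_a, seq_b[0])]
    end_off = [c - b for c, b in zip(start_of_c, seq_b[-1])]
    warped = []
    for i, frame in enumerate(seq_b):
        u = i / (n - 1)                       # 0 at first frame, 1 at last
        warped.append([joint + (1 - u) * s + u * e   # fade start offset out,
                       for joint, s, e in           # end offset in
                       zip(frame, start_off, end_off)])
    return warped
```

Unlike the crossfade, no frame of B survives untouched: swap in a different predecessor or successor and every intermediate pose shifts slightly, which is the pattern the pianists' fingers showed.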
That is far more computationally expensive to replicate virtually than the sequence files we are accustomed to. No formal study has been carried out on motion-capture data sets to see if the same process is going on there, with each action the natural body supplying the data takes being directly changed by the actions the body performed before and the actions the controlling mind expects to perform next, but it is very likely there.
This is a whole new direction to take in dynamically animating our avatars for utterly realistic movement, beyond the direction we have been taking so far.
Hopefully, now that we are aware of it, by the time we are ready to start animating fingers and other swift-moving, complex flowing actions in interactive virtual environments, viable ways will have been implemented for making slight tweaks to a seq file: deviating from the seq's own programming based on its order in the queue, and the demands of the seqs before and after it.