Finding the Right Gestures for an Interface
It often seems that scarcely a week goes by without word of a minor or major breakthrough in gesture control of computer systems, speech recognition, or speech synthesis. All seem to be converging on the ability to control computers entirely hands-free.
This prospect may seem exhilarating to some and scary to others, yet there is no doubt that the wheels are turning ever more swiftly towards that point.
Several problems and some unexpected solutions still dot the landscape. One of the largest as-yet-unresolved problems is that a large subset of users live in fear of their computer systems, terrified that they will do something wrong and break them.
One of the advantages of a physical-interaction-only system is that you have to touch something to get any use out of it. The moment you whip your hands away, startled, from a keyboard, you are no longer interacting with it. Whip your hands away, startled, from a gesture recognition system, and it will interpret that movement as a command sequence of some kind, compounding the problem.
This issue may well slow the development of gesture recognition for everyday tasks, though perhaps not as much as it might at first seem, especially if we can determine which gestures the majority of users are most likely to use before they ever touch the system.
Enter researcher Wim Fikkert of the Centre for Telematics and Information Technology of the University of Twente in the Netherlands. He hit upon a novel idea: let people decide which gestures to assign to which commands, then aggregate and study the data.
Remarkably, most test subjects chose the same gestures of their own accord. The users chose a distinct gesture for each command and stuck to their choices. Another striking finding was that the users deliberately changed the shape of their hands at the beginning of a gesture and relaxed them at the end.
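That tense-at-the-start, relax-at-the-end pattern suggests a natural cue for spotting where a gesture begins and ends. As a hypothetical sketch (the `tension` signal and threshold value are assumptions for illustration, not part of Fikkert's actual system), a recognizer could segment gestures by thresholding a hand-tension measurement over time:

```python
def segment_gestures(tension, threshold=0.5):
    """Split a hand-tension signal into (start, end) index pairs.

    A gesture is assumed to begin when tension rises above the
    threshold and end when it falls back below it, mirroring the
    observation that users tense at the start and relax at the end.
    """
    segments = []
    start = None
    for i, value in enumerate(tension):
        if start is None and value >= threshold:
            start = i                      # hand tensed: gesture begins
        elif start is not None and value < threshold:
            segments.append((start, i))    # hand relaxed: gesture ends
            start = None
    if start is not None:                  # signal ended mid-gesture
        segments.append((start, len(tension)))
    return segments

# Example: two bursts of tension separated by a relaxed stretch
signal = [0.1, 0.7, 0.9, 0.2, 0.1, 0.6, 0.8, 0.8, 0.3]
print(segment_gestures(signal))  # [(1, 3), (5, 8)]
```

A cue like this could also help with the startled-hands problem above: a sudden flinch without the deliberate tensing that marks a gesture's start could simply be ignored.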
This data may be just the sort of thing we need to avoid problems like the 'gasp' gesture or the 'what have I done?!' shaking of hands.
Fikkert used several test designs for his experiments. In the simplest, the subjects thought they were controlling a computer with their gestures when in fact someone else was doing so.
In the most advanced test the subjects themselves interacted with a 4 x 1.5 metre screen at the University of Twente's Smart Experience Laboratory. They operated the system using wireless lasers on the backs of their hands for pointing and small buttons on their fingers for giving commands.