The Vocal Joystick is a voice-driven interface for people with severe disabilities
such as motor impairments. Provided users can produce sounds with their larynx, even
if those sounds are not words, they can navigate a virtual environment or a web page.
The hope is that this interface will eventually provide access to other devices,
such as prosthetic body parts or home robots.
The system is being developed at the University of Washington and is not yet a
commercial product. However, progress so far has been impressive: prototype units
can navigate web pages and even play simple computer games. This brings voice into
virtual worlds, finally for activities other than talking.
The system consists of the same basic components as a voice-recognition system:
Acoustic signal processing
The difference is that, since the system is not trying to identify language but
merely the pitch, intensity, and sharpness of a sound, it is less processor-intensive,
and it can perform tasks a mouse can perform but which ordinary speech recognition cannot.
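The signal path described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: it assumes a frame of audio samples, estimates intensity (RMS energy) and pitch (a crude autocorrelation method), and maps them to a hypothetical cursor step where intensity sets speed and pitch deviation from a resting pitch sets vertical direction. All constants (sample rate, resting pitch, gain) are made up for the example.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed audio sample rate
FRAME = 512            # samples per analysis frame (~32 ms)

def rms_intensity(frame):
    """Root-mean-square energy of one audio frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def autocorr_pitch(frame, fmin=80.0, fmax=400.0):
    """Crude autocorrelation pitch estimate in Hz (illustrative only)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(SAMPLE_RATE / fmax)          # shortest lag to consider
    hi = int(SAMPLE_RATE / fmin)          # longest lag to consider
    lag = lo + int(np.argmax(ac[lo:hi]))  # strongest periodicity in range
    return SAMPLE_RATE / lag

def cursor_step(frame, rest_pitch=150.0, gain=40.0):
    """Hypothetical mapping: intensity -> speed, pitch deviation -> direction."""
    speed = gain * rms_intensity(frame)
    return speed * np.tanh((autocorr_pitch(frame) - rest_pitch) / 50.0)

# Synthetic 220 Hz vowel-like tone: pitch above resting pitch,
# so the cursor step comes out positive (upward).
t = np.arange(FRAME) / SAMPLE_RATE
tone = 0.5 * np.sin(2 * np.pi * 220.0 * t)
print(cursor_step(tone))
```

Because each frame needs only an energy estimate and one autocorrelation, this kind of analysis is far cheaper than full speech recognition, which is the point made above.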
Statistical learning techniques such as neural networks are used so that the
Vocal Joystick learns and adapts to the vocal patterns of the user over time.
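The adaptation idea above can be sketched with a toy online learner. This is not the project's model; it is a tiny softmax classifier that maps hypothetical two-dimensional "vocal features" to direction classes and takes one stochastic-gradient step per utterance, so its decision boundary drifts toward whatever sounds a particular speaker actually produces. The feature centers, learning rate, and class count are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class VowelAdapter:
    """Toy softmax classifier from vocal features to direction classes,
    updated online so it adapts to an individual speaker."""

    def __init__(self, n_features, n_classes, lr=0.5):
        self.W = np.zeros((n_classes, n_features))
        self.b = np.zeros(n_classes)
        self.lr = lr

    def predict_proba(self, x):
        z = self.W @ x + self.b
        z -= z.max()                      # numerical stability
        e = np.exp(z)
        return e / e.sum()

    def update(self, x, label):
        """One stochastic-gradient step on the cross-entropy loss."""
        p = self.predict_proba(x)
        p[label] -= 1.0                   # gradient of softmax + cross-entropy
        self.W -= self.lr * np.outer(p, x)
        self.b -= self.lr * p

# Hypothetical per-speaker feature clusters for two directions.
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
model = VowelAdapter(n_features=2, n_classes=2)
for _ in range(200):
    label = int(rng.integers(2))
    x = centers[label] + 0.1 * rng.normal(size=2)
    model.update(x, label)

x_test = centers[0] + 0.1 * rng.normal(size=2)
print(int(np.argmax(model.predict_proba(x_test))))   # should classify as direction 0
```

A real system would use richer acoustic features and a larger model, but the principle is the same: each new utterance nudges the parameters, so accuracy improves for that user's voice over time.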
Software driver similar to a speech processor, including: