
 A Prosthesis for Speech

This story is from the category Sensors



Date posted: 07/07/2008

Researchers at Boston University are developing brain-reading computer software that, in essence, translates thoughts into speech. The researchers are presenting their work at the annual Acoustical Society of America meeting in Paris this week.

The team scanned the brain of a paralyzed patient and found that, within the motor region of the brain involved in speech, certain areas light up according to the various sounds that the patient mentally voices.

"The question is, can we get enough information out that produces intelligible speech?" asks Philip Kennedy of Neural Signals, a brain-computer interface developer based in Atlanta. "I think there's a fair shot at this at this point."

The software is designed to translate neural activity into what are known as formant frequencies, the resonant frequencies of the vocal tract.
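To make the idea of formant frequencies concrete, here is a minimal source-filter sketch of vowel synthesis: a glottal-like pulse train is shaped by two resonators at the formant frequencies F1 and F2. This is an illustration of the general technique, not the team's synthesizer; the formant values, bandwidths, and sample rate are assumed for the example.

```python
import numpy as np

def resonator_coeffs(freq_hz, bandwidth_hz, sample_rate):
    """Second-order IIR coefficients for a single formant resonance."""
    r = np.exp(-np.pi * bandwidth_hz / sample_rate)
    theta = 2 * np.pi * freq_hz / sample_rate
    a1 = 2 * r * np.cos(theta)
    a2 = -r * r
    b0 = 1 - a1 - a2  # normalize for unity gain at DC
    return b0, a1, a2

def apply_resonator(x, freq_hz, bandwidth_hz, sample_rate):
    """Filter x through one formant resonator: y[n] = b0*x[n] + a1*y[n-1] + a2*y[n-2]."""
    b0, a1, a2 = resonator_coeffs(freq_hz, bandwidth_hz, sample_rate)
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = b0 * x[n]
        if n >= 1:
            y[n] += a1 * y[n - 1]
        if n >= 2:
            y[n] += a2 * y[n - 2]
    return y

def synthesize_vowel(f1, f2, pitch_hz=100, duration_s=0.3, sample_rate=16000):
    """Source-filter synthesis: a glottal pulse train shaped by two formants."""
    n = int(duration_s * sample_rate)
    source = np.zeros(n)
    period = int(sample_rate / pitch_hz)
    source[::period] = 1.0  # impulse train standing in for glottal pulses
    out = apply_resonator(source, f1, 80.0, sample_rate)
    out = apply_resonator(out, f2, 120.0, sample_rate)
    return out / np.max(np.abs(out))

# An /a/-like vowel: F1 near 700 Hz, F2 near 1200 Hz (typical adult values)
audio = synthesize_vowel(700, 1200)
```

Changing just the (F1, F2) pair moves the output between vowel qualities, which is why decoding two slowly varying frequencies is a far smaller problem than decoding a full acoustic waveform.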

So far, Guenther and Kennedy have programmed the synthesizer to play back sounds within 50 milliseconds of their being thought. This audio feedback has allowed their test patient, 16-year-old Erik Ramsey, who had his spinal cord severed in an accident eight years ago, to practice vowel utterance: he first thinks of a vowel, listens to the audio response, and then adjusts how he thinks of the sound to improve the playback.

Jonathan Brumberg, a PhD student in Guenther's lab, says that while each trial has been slow going (it takes great effort on Ramsey's part), the results have been promising. "At this point, he can do these vowel sounds pretty well," says Brumberg. "We're now fairly confident the same can be accomplished with consonants."

Brumberg says that the team may need to implant more electrodes, in areas solely devoted to the tongue, lips, or mouth, to get an accurate picture of more complex sounds such as consonants.

"The electrode is only capturing about 56 distinct neural signals," says Brumberg. "But you have to think: there are billions of cells in the brain with trillions of connections, and we are only sampling a very small portion of what is there."

Guenther is also exploring non-invasive methods of studying speech production in normal volunteers. He and Brumberg are scanning the brains of normal speakers using functional magnetic resonance imaging (fMRI). As volunteers perform various tasks, such as naming pictures and mentally repeating various sounds and words, active brain areas light up in response.

See the full story via external site: www.technologyreview.com
