Researchers are working to perfect a decoder capable of transforming brain signals into words. Initial results are promising, and a device of this kind could enable patients left unable to speak by a stroke or paralysis to find their voice again.
According to a study published on Wednesday 24 April, researchers are in the process of perfecting a decoder capable of transforming brain signals into words, in what would be an incredible breakthrough for patients left unable to speak by a stroke or paralysis.
They have invented a device that reproduces synthetic speech via a computer, using the brain signals that trigger the corresponding movements of the mouth. The technique, presented in the scientific journal "Nature", is still very much at the experimental stage and some way from being fully implemented. Those behind the project hope, however, that it will one day benefit patients who have lost the power of speech.
Getting the words at the source
"Our long-term objective is to create a technique that can restore communication for patients who are unable to speak, whether due to neurological problems such as strokes or to pathologies like certain cancers," Edward Chang of the University of California, San Francisco, one of the authors of the study, told AFP.
There are already devices that enable such patients to construct words letter by letter using movements of the eyes or the head. While these systems improve quality of life, they are slow, producing at most 10 words per minute compared with around 150 in natural speech. This is why the researchers decided to go and get the words at the source: the brain.
Identifying the brain signals responsible for articulating words
They carried out an experiment with five patients being treated for epilepsy who, as part of that treatment, had electrodes placed on the surface of their brains. The researchers first asked the patients to read a set of predefined phrases aloud, using the electrodes to identify the brain signals responsible for articulating words.
They then decoded these signals by associating them with the movements required for pronunciation in the jaw, the tongue, the lips and the larynx. Finally, based on these movements, a computer reproduced the spoken phrases, all of them simple in construction and in English.
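The two-stage pipeline described above can be sketched in code. This is a minimal, purely illustrative toy, not the researchers' actual model: the electrode features, weight values and the single acoustic parameter are all invented for the example, and the real system uses recurrent neural networks rather than a simple linear map.

```python
# Hypothetical sketch of the two-stage decoding pipeline:
# neural signals -> articulator movements -> acoustic output.
# All feature values and weights below are made up for illustration.

def decode_articulators(neural_features, weights):
    """Stage 1: map electrode features for one time frame to
    articulator kinematics (jaw, tongue, lips, larynx) with a
    simple linear decoder."""
    return [
        sum(f * w for f, w in zip(neural_features, row))
        for row in weights
    ]

def synthesise_acoustics(articulators):
    """Stage 2: collapse the articulator kinematics into a crude
    stand-in acoustic parameter for the frame (a real synthesiser
    would produce a full spectral description of the audio)."""
    return sum(articulators) / len(articulators)

# One frame of invented activity from four electrodes.
frame = [0.2, 0.8, 0.1, 0.5]

# One weight row per articulator: jaw, tongue, lips, larynx.
W = [
    [0.5, 0.1, 0.0, 0.2],
    [0.0, 0.6, 0.3, 0.1],
    [0.2, 0.0, 0.4, 0.0],
    [0.1, 0.2, 0.1, 0.5],
]

kinematics = decode_articulators(frame, W)
audio_param = synthesise_acoustics(kinematics)
print(len(kinematics))  # one trajectory value per articulator
```

The point of splitting the problem in two is the one the article makes: rather than decoding words directly, the system first recovers the physical movements of speech, which turn out to be more consistent across individuals than the words themselves.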
The audio files published by the scientists are astonishing. While the synthetic voice garbles certain words, others are clearly understandable and almost as intelligible as the natural recordings of the phrases being studied. "Some of the brain signals linked to the movements of speech are common to all individuals," Chang explained, adding that he believes that "one day it will be possible for a decoder trained on a particular individual with the power of speech to be used by a patient who has lost that ability, and who will be able to control it using his or her own brain activity".
Even more astounding, the researchers asked a participant to mime words without pronouncing them, as if lip-synching to a song. While the results were less impressive than for the phrases actually spoken out loud, the scientists said it was still possible to synthesise the words by computer.