By training on a variety of sounds and the corresponding brain activity, a form of artificial intelligence is now capable of reproducing the sound of a spoken number from the electrical signals emitted by the brain, and doing so clearly enough to be understood by humans. The US researchers behind the algorithm are now looking to fine-tune it.
US researchers recently developed an algorithm that pushes back the previously established boundaries of brain-activity transcription. The form of artificial intelligence (AI) presented by these experts in a study published on 29 January in Scientific Reports can convert the electrical signals of brain activity into words; the scientists relied primarily on deep learning to design their system.
Algorithm fed by data for comparison purposes
This deep learning method enabled the algorithm to hone itself until it could recognise specific words in recordings of the subjects' brain activity made while they listened to texts being read out. The research involved analysing the participants' auditory cortex using electrodes implanted during neurosurgical operations that were performed out of medical necessity on epileptic patients.
The words that were heard could then be compared with the signals detected in the brain, and the resulting pairs fed into the algorithm, according to Futura Sciences. Once the algorithm had sufficient data, the authors of the study had the subjects listen to recordings of men and women – different from those used during the deep learning phase – reading out numbers.
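The training setup described above – pairing the signals recorded from the auditory cortex with the words the subject was hearing, then learning to decode new signals – can be sketched in highly simplified form. The code below is not the study's method: the fake "neural features", the nearest-centroid classifier standing in for the deep network, and all names are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_neural_features(digit, n_trials=50, n_channels=16):
    # Assume each heard digit evokes a characteristic cortical
    # pattern plus recording noise (a toy stand-in for real data).
    pattern = np.sin(np.arange(n_channels) * (digit + 1))
    return pattern + 0.3 * rng.standard_normal((n_trials, n_channels))

# Training phase: record brain signals while known digits are heard.
X_train = np.vstack([fake_neural_features(d) for d in (0, 1)])
y_train = np.repeat([0, 1], 50)

# Stand-in for the deep network: learn one average pattern per digit.
centroids = {d: X_train[y_train == d].mean(axis=0) for d in (0, 1)}

def decode(signal):
    # Predict the digit whose learned pattern is closest to the new signal.
    return min(centroids, key=lambda d: np.linalg.norm(signal - centroids[d]))

# Test phase: decode trials the model has never seen.
X_test = np.vstack([fake_neural_features(d, n_trials=10) for d in (0, 1)])
y_test = np.repeat([0, 1], 10)
accuracy = np.mean([decode(x) == y for x, y in zip(X_test, y_test)])
```

The key idea this toy preserves is the two-stage protocol in the article: a supervised phase on known word/signal pairs, followed by evaluation on recordings that were never part of training.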
Humans evaluating the quality of the AI transcription
The artificial intelligence then had to reproduce the numbers as sound, based solely on the brain activity. These "words" were played to 11 participants, who had to evaluate their clarity and also say which numbers they thought they had heard, as well as the gender of the person pronouncing them. The results proved particularly encouraging.
The success rate was 80% for recognising the gender of the speaker and 75% for identifying the number, while the participants rated the average quality of the transcription at 3.4 out of 5. The numbers read out during the experiment were not part of the 30 minutes of texts used during the deep learning phase.
Future experiments on more complex sounds
The US researchers are now hoping to obtain similar results with longer and more complex phrases, and to develop less invasive means of analysing brain signals. The ultimate hope is to one day allow people who have lost the power of speech to express themselves with sounds.