
AI Can Now Decode Speech From Brain Activity

AI can now decode speech from brain activity. Credit: Mike de Waal/Twitter

With astonishing but still imperfect accuracy, artificial intelligence (AI) can decode words and sentences from brain activity.

The AI estimates what a person has heard from just a few seconds' worth of brain activity data. In a preliminary investigation, researchers found that it listed the right answer among its top ten options 73 percent of the time.

Giovanni Di Liberto, a computer scientist at Trinity College Dublin, said the AI's "performance was above what many people thought was possible at this stage."

The AI, created by Meta, the parent company of Facebook, may one day assist those who are unable to communicate verbally or through body language.

Most of the technologies currently available to such patients require risky brain surgery to implant electrodes.

Neuroscientist and Meta AI researcher Jean-Rémi King, who works at the École Normale Supérieure in Paris, says this new approach "could provide a viable path to help patients with communication deficits without the use of invasive methods."

King and his colleagues trained a computational tool to recognize words and sentences on 56,000 hours of speech recordings in 53 languages. The tool, usually referred to as a language model, learned to recognize linguistic features at both fine-grained and more general levels, from letters and syllables up to words and sentences.
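That training recipe closely matches Meta's publicly released wav2vec 2.0 "XLSR-53" speech model, which was pretrained on roughly 56,000 hours of speech in 53 languages. Below is a minimal sketch of pulling such a model's multi-level representations using the Hugging Face transformers library; the checkpoint choice and the stand-in waveform are assumptions for illustration, not necessarily the study's exact setup.

```python
# A sketch of extracting multi-level speech representations from a
# pretrained self-supervised speech model. The checkpoint name is an
# assumption: Meta's public wav2vec 2.0 XLSR-53 was pretrained on about
# 56,000 hours of speech in 53 languages, matching the description above,
# though the study's exact model may differ.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "facebook/wav2vec2-large-xlsr-53"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name).eval()

waveform = torch.randn(16000 * 3)  # stand-in for 3 seconds of 16 kHz audio

inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Early layers capture fine-grained acoustic detail; deeper layers carry
# broader, word- and sentence-level context.
for i, layer in enumerate(out.hidden_states):
    print(f"layer {i}: {tuple(layer.shape)}")  # (batch, frames, features)
```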

The AI's 73 Percent Accuracy Rate

When the researchers used magnetoencephalography, or MEG, the correct answer was among the AI's top ten guesses up to 73 percent of the time. When they used electroencephalography, that figure was no higher than 30 percent. Despite the impressive performance, Di Liberto is less upbeat about the method's practical uses.
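To make the metric concrete, here is a small sketch of how such a top-ten figure can be computed: candidate speech segments are ranked by similarity against each brain-derived prediction, and a trial counts as correct if the true segment lands in the ten best. The embeddings, dimensions, and cosine-similarity scoring here are hypothetical stand-ins, not the study's data.

```python
# Illustrative top-k accuracy over a bank of candidate speech segments.
# All arrays are random placeholders, so the printed accuracy will be
# near chance (~k / n_candidates) rather than the study's 73 percent.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_candidates, dim = 200, 1000, 128

brain_preds = rng.standard_normal((n_trials, dim))   # predicted from brain data
candidates = rng.standard_normal((n_candidates, dim))  # speech-segment embeddings
true_idx = rng.integers(0, n_candidates, size=n_trials)

def top_k_accuracy(preds, bank, targets, k=10):
    # Cosine similarity between each prediction and every candidate.
    preds = preds / np.linalg.norm(preds, axis=1, keepdims=True)
    bank = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = preds @ bank.T                       # (trials, candidates)
    top_k = np.argsort(-sims, axis=1)[:, :k]    # k best candidates per trial
    return np.mean([t in row for t, row in zip(targets, top_k)])

print(f"top-10 accuracy: {top_k_accuracy(brain_preds, candidates, true_idx):.3f}")
```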

He cites MEG's need for large, expensive equipment as the reason: it will take technological breakthroughs to reduce the machines' cost and complexity before they can be used in clinics.

According to Jonathan Brennan, a linguist at the University of Michigan in Ann Arbor, it is also critical to understand what "decoding" in this study actually entails. The term usually refers to extracting information directly from a source, in this case speech from brain activity. But the AI could only do this because it was given a limited set of possible answers to choose from when making its guesses.

Brennan says, “With language, that’s not going to cut it if we want to scale to practical use, because language is infinite.”

To train an AI with this language model, the group used databases from four institutions containing brain activity from 169 participants. The participants in these databases listened to various passages and stories, including excerpts from Lewis Carroll's Alice's Adventures in Wonderland and Ernest Hemingway's The Old Man and the Sea, while their brains were scanned with either magnetoencephalography or electroencephalography. These methods measure the magnetic or electrical component of brain signals.

The scientists next attempted to work out what participants had heard using just three seconds of brain activity data from each person, along with a computational method that helps account for physical differences among individual brains. They instructed the AI to align speech sounds from the story recordings with the patterns of brain activity it computed would correspond to what people were hearing. It then predicted, from more than one thousand possibilities, what the listener had heard during that brief window.
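A hedged sketch of that matching step follows: a small network maps a three-second window of brain sensor data into the speech model's embedding space and is trained with a contrastive loss, so each window scores highest against its own speech segment among the candidates. The sensor counts, the per-subject layer, the single linear projection, and the temperature value are all illustrative assumptions; a real implementation would use a deeper network.

```python
# Contrastive brain-to-speech matching, sketched with assumed shapes.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_sensors, n_times, speech_dim = 208, 360, 128  # assumed MEG channels, 3 s window

class BrainToSpeech(nn.Module):
    def __init__(self):
        super().__init__()
        # A per-subject 1x1 convolution to absorb anatomical differences
        # between brains, then a shared projection into the speech
        # embedding space (both simplified stand-ins).
        self.subject_layer = nn.Conv1d(n_sensors, n_sensors, kernel_size=1)
        self.project = nn.Linear(n_sensors * n_times, speech_dim)

    def forward(self, meg):                # meg: (batch, sensors, times)
        x = self.subject_layer(meg)
        return self.project(x.flatten(1))  # (batch, speech_dim)

model = BrainToSpeech()
meg = torch.randn(32, n_sensors, n_times)  # a batch of brain windows
speech = torch.randn(32, speech_dim)       # matching speech-segment embeddings

# CLIP-style loss: each brain window should match its own speech segment
# better than every other segment in the batch.
brain_emb = F.normalize(model(meg), dim=1)
speech_emb = F.normalize(speech, dim=1)
logits = brain_emb @ speech_emb.T / 0.1    # temperature 0.1 is an assumption
loss = F.cross_entropy(logits, torch.arange(32))
print(f"contrastive loss: {loss.item():.3f}")
```

At inference time, the same similarity scores would simply be ranked over the pool of more than one thousand candidate segments, as in the top-ten sketch above.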

Additionally, Di Liberto notes, the AI decoded data from subjects who were passively listening to audio, which is not directly applicable to nonverbal patients. To become a useful communication tool, the AI must learn to decipher what patients are trying to say from their brain activity, including signals of hunger or discomfort or a simple "yes" or "no."
