A woman with severe paralysis has regained the ability to communicate using an avatar, thanks to technology translating her brain signals into speech and facial expressions.
This development offers potential benefits for people who have lost the ability to speak due to conditions such as stroke or amyotrophic lateral sclerosis (ALS). Findings of this groundbreaking research were published in the journal Nature.
Previously, such individuals relied on slow speech synthesizers controlled through eye tracking or limited facial movements, leaving little room for natural conversation.
The new technology employs small electrodes implanted on the brain’s surface to detect electrical activity in the areas that control speech and facial movement. These signals are then translated into the avatar’s speech and facial expressions, including smiles, frowns, and expressions of surprise. According to the scientists, the goal was to restore a full-fledged way of communicating.
Details of the Procedure
The patient, Ann, is a 47-year-old woman who has been paralyzed for more than eighteen years following a brainstem stroke. In that time she has been unable to speak or type, relying instead on technology that tracked her movements and allowed her to select letters at a rate of up to fourteen words per minute.
To achieve this breakthrough, the research team implanted a thin rectangle with 253 electrodes on Ann’s brain surface, specifically in the speech-related region. These electrodes intercepted brain signals that would typically control tongue, jaw, larynx, and facial muscles.
After the electrodes were implanted, Ann worked closely with the team, repeating specific phrases to train an AI algorithm to recognize her individual brain signals for various speech sounds. The system learned to distinguish thirty-nine sounds, and a language model similar to ChatGPT was used to assemble these signals into meaningful sentences. The avatar’s voice was then customized to replicate Ann’s pre-injury voice, using a recording of her speaking at her wedding as the reference.
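The article does not detail the decoding algorithm, but the pipeline it describes (per-frame sound probabilities collapsed into a sequence, then mapped to words) resembles standard speech-decoding practice. The sketch below is purely illustrative: the phoneme set, lexicon, and probability values are invented for the example and are not from the study.

```python
# Toy sketch of a phoneme-to-word decoding step (NOT the study's actual
# pipeline). Per-frame phoneme probabilities stand in for decoded neural
# signals; a greedy CTC-style collapse merges repeats and drops blanks,
# and a tiny lexicon maps the phoneme sequence to a word.

PHONEMES = ["h", "eh", "l", "ow", "_"]  # "_" = blank/silence (illustrative set)

def collapse_frames(frame_probs):
    """Take the argmax phoneme per frame, merge consecutive repeats, drop blanks."""
    seq = []
    for probs in frame_probs:
        p = PHONEMES[max(range(len(probs)), key=probs.__getitem__)]
        if p != "_" and (not seq or seq[-1] != p):
            seq.append(p)
    return seq

# Hypothetical lexicon; a real system would use a language model here.
LEXICON = {("h", "eh", "l", "ow"): "hello"}

def decode(frame_probs):
    phones = collapse_frames(frame_probs)
    return LEXICON.get(tuple(phones), "<unk>")

frames = [
    [0.9, 0.0, 0.0, 0.0, 0.1],  # "h"
    [0.1, 0.8, 0.0, 0.0, 0.1],  # "eh"
    [0.0, 0.1, 0.8, 0.0, 0.1],  # "l"
    [0.0, 0.0, 0.7, 0.0, 0.3],  # "l" again (merged with previous frame)
    [0.0, 0.0, 0.0, 0.9, 0.1],  # "ow"
]
print(decode(frames))  # prints "hello"
```

In the actual study a neural network and language model replace the argmax and lexicon lookup, but the shape of the problem, turning noisy per-frame signal into a discrete word sequence, is the same.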
However, the technology still has considerable room for improvement. In a test with over five hundred phrases it showed a relatively high word error rate of 28 percent, and it decoded speech at seventy-eight words per minute, compared with the 110 to 150 words per minute of natural conversation. Nevertheless, the scientists believe these recent advances broaden the horizons of the technology for patients in need.
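The 28 percent figure refers to word error rate (WER), the standard metric for speech decoding: the word-level edit distance between the decoded and reference sentences, divided by the reference length. A minimal illustration with an invented sentence pair:

```python
# Word error rate (WER): Levenshtein distance over words between the
# decoded hypothesis and the reference transcript, divided by the
# number of reference words. The example sentences are illustrative.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("how are you today", "how you today"))  # one deleted word -> 0.25
```

A 28 percent WER means roughly one word in four is substituted, dropped, or inserted relative to what the speaker intended.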
Professor Nick Ramsey of Utrecht University in the Netherlands called the technology a significant leap beyond previous results and a turning point for the field. The next big stage involves developing a wireless version of the brain-computer interface (BCI) that can be implanted under the skull.
According to study co-author Dr. David Moses, enabling people to independently control their computers and phones with this technology could have a significant impact on their autonomy and social interactions.