From Neuroactivity to Speech: A Breakthrough in Neurotechnology

Researchers have achieved a milestone in neurotechnology by translating the brainwaves of a man suffering from anarthria into speech.


After 16 years of silence, a 36-year-old man communicated his first sentence. Most of us are fortunate enough to communicate easily with others through speech. However, approximately 18.5 million people suffer from speech disorders, and thousands are unable to speak due to paralysis, cerebral palsy, or apraxia. Speech has been a critical method of communication since our ancestors first used it roughly 70,000 years ago, initially in the form of grunts and calls needed to survive; being unable to speak is undeniably life altering.

To combat this issue, a novel neuroprosthetic device able to convert the neural signals of a paralyzed man into complete sentences was developed in 2021 by researchers at UC San Francisco. This man, referred to as patient BRAVO-1, had suffered a severe brain stem stroke at the age of 20, causing paralysis and anarthria (the inability to articulate speech). He began to use a pointer attached to a baseball cap on his head to type out letters on a computer in order to vocalize his thoughts. Sixteen years later, he became a participant in the study “Brain-Computer Interface Restoration of Arm and Voice” (BRAVO). Edward Chang, UCSF Chair of Neurological Surgery and co-director of the Center for Neural Engineering, led his research group through the development of a brainwave-converting neuroprosthetic. Brainwaves are essentially electrical impulses sent through neurons that communicate various messages throughout the brain. First, the researchers mapped out the region of the brain controlling the vocal tract: the sensorimotor cortex, a region located in the upper-middle section of the brain responsible for processing somatic sensations such as touch and temperature and for controlling voluntary movement. Next, they developed a neural code for the English alphabet, Algorithm One, that paired each letter with all the articulations needed to make the appropriate sound, such as the tongue’s motion and the mouth’s movement. They aimed to use this code to decode the patient’s speech-linked brain activity.
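The letter-to-gesture pairing described above can be pictured as a lookup table. The sketch below is purely illustrative: the feature names, values, and dictionary entries are invented for this example, not taken from the study, which trained models on recorded cortical activity.

```python
# Hypothetical sketch of a letter-to-articulation lookup table.
# Feature names and values are invented, not from the study.
ARTICULATORY_CODE = {
    "b": {"lips": "closed-then-open", "tongue": "neutral", "voiced": True},
    "m": {"lips": "closed", "tongue": "neutral", "voiced": True},
    "s": {"lips": "open", "tongue": "near-alveolar-ridge", "voiced": False},
}

def gestures_for_word(word):
    """Return the sequence of articulatory gestures for the letters of a word."""
    return [ARTICULATORY_CODE[letter] for letter in word if letter in ARTICULATORY_CODE]
```

In this toy form, decoding a word reduces to matching the gesture sequence its letters imply against the gestures read out of the brain signals.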

To start the experiment, they created Algorithm Two, which recognized the articulatory gestures used to verbally express 50 different words. Then, they surgically placed a neural implant composed of 128 electrodes, conductors that pick up the electrical activity of nearby neurons, over the sensorimotor cortex. Next, BRAVO-1 was asked to attempt to speak in order to answer certain questions, using only the 50 words that Algorithm Two knew. The implanted device recorded his brain activity throughout the process.

The device works through a three-step system. First, a speech-detection algorithm scans the brain signals to determine whether the patient is attempting to speak. Then, a word classification algorithm predicts the probability that the patient is attempting to say each of the 50 available words by analyzing the articulation signals in BRAVO-1’s sensorimotor cortex. Thanks to Algorithm One, each letter’s sound is paired with the oral movements needed to make it, which are in turn linked to the articulation signals the brain sends to produce those movements; Algorithm Two uses this pairing to connect each of the 50 words with the articulatory gestures needed to say them, so the word classifier relies heavily on Algorithm Two. This was the most complicated step, since a given brain signal is often associated with multiple words and sounds. Finally, a third algorithm estimates the probability of each possible next word based on the previous ones, following common English linguistic and semantic structure.
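As a rough mental model, the three stages above can be sketched as a toy pipeline. Everything here is invented for illustration: the vocabulary subset, the energy threshold, the peak-channel classifier, and the bigram priors all stand in for the study’s trained neural networks and language model.

```python
# Toy model of the three-stage decoder described above. All numbers, word
# lists, and function bodies are invented for illustration; the real system
# learned these mappings from the patient's recorded cortical activity.

VOCAB = ["hello", "thirsty", "family", "good", "am", "i"]  # stand-in for the 50-word set

def detect_speech_attempt(window):
    # Stage 1: decide whether a window of brain activity contains a speech
    # attempt. Placeholder: a simple signal-energy threshold.
    energy = sum(x * x for x in window) / len(window)
    return energy > 0.5

def classify_word(window):
    # Stage 2: assign a probability to each candidate word. Placeholder:
    # the peak "channel" index picks a favored word; the real model maps
    # articulatory-gesture signatures in the signal to word probabilities.
    peak = max(range(len(window)), key=lambda i: window[i])
    scores = {w: 0.05 for w in VOCAB}
    scores[VOCAB[peak % len(VOCAB)]] = 0.7
    return scores

# Invented bigram priors standing in for the English language model.
BIGRAM = {("i", "am"): 0.6, ("am", "thirsty"): 0.5}

def rescore(prev_word, word_probs):
    # Stage 3: reweight each candidate by how likely it is to follow the
    # previously decoded word; unseen pairs get a small default prior.
    return {w: p * BIGRAM.get((prev_word, w), 0.1) for w, p in word_probs.items()}

def decode(windows):
    sentence, prev = [], None
    for window in windows:
        if not detect_speech_attempt(window):
            continue
        probs = classify_word(window)
        if prev is not None:
            probs = rescore(prev, probs)
        prev = max(probs, key=probs.get)
        sentence.append(prev)
    return sentence
```

Feeding in one low-energy (silent) window followed by windows whose peaks land on different channels decodes a short sentence word by word, with the language-model stage nudging ambiguous classifications toward likely word sequences.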

Over 9,800 trials of the experiment were conducted. Across those thousands of trials, the system achieved an average accuracy of 74 percent at a speed of 15 decoded words per minute. While groundbreaking, the system therefore has significant limitations. The slow speed and 50-word vocabulary make it unviable for everyday use, given that the average human speaking speed is 150 words per minute and the average adult vocabulary consists of roughly 27,500 words. In order to accommodate more words, the neural code algorithm must undergo a time-consuming updating process. Furthermore, the system is not portable, and its accuracy needs to improve for day-to-day use: the neuroprosthetic sometimes associates a brainwave with the wrong articulatory gesture and therefore outputs the wrong word.

Postdoctoral researcher David Moses, who works with Chang in his lab, stated, “Now that we even have this initial proof of concept, and this first shred of evidence that this is feasible, it’s really quite motivating to see how far we can go with it.” The success of this new technology will open up more opportunities to assist those with different neurological disabilities, but it still has many flaws. For one, it is prone to decoding the wrong word from brain signals and is not ready for wide-scale manufacturing. Even after its accuracy improves, such technology would likely remain extremely expensive and unaffordable for the average person. In addition, it was developed for English; additional algorithms would have to be created for it to work with other languages. Still, this neuroprosthetic is an important breakthrough in the field of neurotechnology. Despite current setbacks, it has the potential to make speech communication easier for many in the future.