Brown University

August 14, 2024 | Press release

Brain-computer interface allows man with ALS to ‘speak’ again

The study is part of the BrainGate clinical trial, directed by Dr. Leigh Hochberg, a critical care neurologist and a professor at Brown University's School of Engineering who is affiliated with the University's Carney Institute for Brain Science.

"Casey and our other BrainGate participants are truly extraordinary," Hochberg said. "They deserve tremendous credit for joining these early clinical trials. They do this not because they're hoping to gain any personal benefit, but to help us develop a system that will restore communication and mobility for other people with paralysis."

The study is the latest in a series of advances in brain-computer interfaces from the BrainGate consortium, which, along with other groups working on BCIs, has for several years been developing systems that enable people to generate text by decoding the user's intent. Last year, the consortium described a brain-computer interface that enabled a clinical trial participant who had lost the ability to speak to create text on a computer at rates approaching the speed of regular speech, simply by thinking of saying the words.

"The field of brain computer interface has come remarkably far in both precision and speed," said John Ngai, director of the National Institutes of Health's Brain Research Through Advancing Innovative Neurotechnologies® Initiative (The BRAIN Initiative®), which funded earlier phases of the BrainGate consortium. "This latest development brings technology closer to helping people, 'locked in' by paralysis, regain their ability to communicate with friends and loved ones, and enjoy the best quality of life possible."

In July 2023, the team at UC Davis Health implanted the BCI device, consisting of four microelectrode arrays, into Casey Harrell's left precentral gyrus, a brain region responsible for coordinating speech. The arrays record brain activity from 256 cortical electrodes and detect the neural signals produced as Harrell attempts to move his muscles and talk.

"We are recording from the part of the brain that's trying to send these commands to the muscles," Stavisky said. "We are basically listening into that, and we're translating those patterns of brain activity into a phoneme - like a syllable or the unit of speech - and then the words they're trying to say."