"Computers are getting better at giving voice to the voiceless.
A paralyzed woman who hadn't spoken in almost 20 years found her voice again through a virtual avatar, thanks to a brain implant and mind-reading algorithm.
A different brain implant helped another woman who was robbed of her voice by a neurological disease to communicate via text at a speed closer to that of normal speech.
Their experiences, published in separate studies on Wednesday in the journal Nature, show significant advances for systems that allow people to control devices using brain signals. Such systems have reconstructed a Pink Floyd song from brain activity and translated brain signals for speech and handwriting into text. Brain-computer interfaces used in the new studies are faster and more sophisticated, researchers said.
"We've shown what is possible," said Dr. Eddie Chang, a neurosurgeon at the UC San Francisco Weill Institute for Neurosciences and co-author of one of the studies.
Chang and his colleagues implanted a paper-thin sheet of silicon, about the size of a credit card and dotted with 253 electrodes, onto the surface of a paralyzed woman's brain. The patient, 48, hasn't been able to speak or use her arms and legs since suffering a stroke in her brainstem about 18 years ago.
The brains of paralyzed people can emit electrical signals controlling movement even when channels of communication between the brain and muscles are broken. The woman's implant picked up signals meant to control speech-related muscles in her tongue, jaw, larynx and face, the study said. The electrodes were connected to computers via a cable plugged into a port attached to her skull.
Researchers trained an algorithm to recognize her brain signals for speech and facial expression. Over two weeks, she was shown words and sentences on a screen, which she was told to recite silently. She was also told to imagine making sad, happy and surprised facial expressions. Computers recorded her brain signals as she did these tasks.
Researchers tested whether the algorithm could accurately translate brain signals into text and speech. The patient was presented with new sentences and told to silently recite them.
For text, the system composed sentences at a rate of 78 words a minute: five times as fast as previous brain-computer interfaces used for communication, Chang said. Using a vocabulary set of about 1,000 words, the system was accurate about 75% of the time. The patient usually uses an assistive head-tracking device that types at about 14 words a minute, Chang said. People typically speak at about 150 words a minute.
"We see now that it's possible to create more natural, more embodied ways of communicating," Chang said.
The brain-computer interface allowed the woman to communicate using a speaking avatar, which Chang called a first. She chose a woman with hazel eyes and chin-length brown hair. Researchers used a recording of a speech she gave at her wedding to personalize the avatar's voice.
Peter Brunner, an associate professor of neurosurgery at the Washington University School of Medicine in St. Louis who wasn't involved in the studies, said more work is needed to improve such systems and make them available to more patients.
"One fundamental limitation is how invasive these surgeries are," he said. "How expensive and how practical will this be?"
In the other Nature study, Pat Bennett, 68, had four sensors, each about the size of a popcorn kernel and loaded with dozens of electrodes, implanted in her brain's outer layers. Bennett was diagnosed in 2012 with amyotrophic lateral sclerosis, or Lou Gehrig's disease. She can no longer speak intelligibly.
Stanford University researchers connected the sensors in Bennett's brain to computers trained to learn how her brain signals correspond to speech. After four months of twice-weekly training sessions, the system translated Bennett's brain signals into text at a rate of about 62 words a minute. Using a vocabulary set of about 125,000 words, the system was accurate 76% of the time.
"This could allow truly fluent conversation and a real restoration of the ability to connect with others," said Dr. Jaimie Henderson, a professor of neurosurgery at Stanford and a study co-author.
Henderson said the deeper sensors his team used could produce higher-definition results by reading signals from individual neurons. But because the brain moves around and scarring can happen around the sensors, the devices can shift and the brain-computer interface might need to be retrained." [1]
1. Mosbergen, Dominique. "Implants Convert Brain Signals Into Text and Speech." The Wall Street Journal, Eastern edition, New York, N.Y., 24 Aug 2023: A3.