In groundbreaking research, a team at the University of California used a brain implant paired with artificial intelligence to enable a patient to “talk” again. The medical marvel helped Ann Johnson, a 48-year-old woman who lost the ability to speak after a brain stem stroke in 2005. Nearly two decades later, Ann has regained her voice.
Ann is now helping researchers at UC Berkeley and UC San Francisco develop new brain-computer technologies that may one day enable individuals to communicate more naturally through a digital avatar that resembles them.
This latest medical breakthrough was published in Nature on August 23, 2023. Edward Chang, a member of the UCSF Weill Institute for Neurosciences who worked on the technology, said:
“Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk with others. These advancements bring us much closer to making this a real solution for patients.”
How did artificial intelligence help the patient regain her voice?
Scientists used a recording of Ann speaking at her wedding as a sample to recreate her vocal inflections and tone, making the synthesized speech as genuine as possible.
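To give a rough sense of how a voice sample can be distilled into reusable characteristics, here is a minimal Python sketch that extracts a crude “voice fingerprint” from a recording. It uses the librosa audio library; the filename and the MFCC-based profile are illustrative stand-ins, not the study’s actual synthesis pipeline.

```python
import numpy as np
import librosa

# A crude illustration of capturing a speaker's vocal characteristics
# from a reference recording. "wedding_speech.wav" is a hypothetical
# filename standing in for Ann's wedding recording.
audio, sr = librosa.load("wedding_speech.wav", sr=16000)

# MFCCs summarize spectral shape, one rough proxy for voice timbre.
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
voice_profile = mfcc.mean(axis=1)  # a simple "voice fingerprint" vector

# A personalized synthesizer would condition on a profile like this
# so its output matches the speaker's inflections and tone.
print(voice_profile)
```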
To achieve this, the team surgically implanted a paper-thin rectangle of 253 electrodes on the surface of her brain, over regions they had previously identified as crucial for speech. The electrodes intercepted the brain signals that, if not for the stroke, would have reached the muscles of Ann's face, lips, tongue, jaw, and larynx. A cable, plugged into a port fastened to Ann's skull, connected the electrodes to a bank of computers.
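Conceptually, the implant delivers a continuous multi-channel signal that software must slice into windows for decoding. The sketch below illustrates that step under assumed parameters: the sampling rate and window length are placeholders, and `stream` stands in for whatever acquisition hardware the lab actually used.

```python
import numpy as np

NUM_CHANNELS = 253     # electrodes in the implanted array
SAMPLE_RATE_HZ = 1000  # assumed sampling rate; the real hardware may differ

def read_signal_window(stream, window_ms=200):
    """Read one window of multi-channel cortical activity.

    `stream` is a hypothetical file-like interface to the wired
    connector; each sample is assumed to be one float32 per channel.
    """
    n_samples = int(SAMPLE_RATE_HZ * window_ms / 1000)
    raw = stream.read(n_samples * NUM_CHANNELS * 4)  # 4 bytes per float32
    window = np.frombuffer(raw, dtype=np.float32)
    return window.reshape(n_samples, NUM_CHANNELS)
```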
Ann then spent weeks working with the researchers to train the device's artificial intelligence algorithms to recognize her particular brain signals for speech. This meant repeating different phrases from a 1,024-word conversational vocabulary over and over, until the computer learned to identify the brain activity patterns associated with the sounds.
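The training described here amounts to supervised learning: windows of brain activity paired with the speech sounds Ann was attempting. As a minimal sketch, assuming the task can be framed as classifying each window into one of a fixed set of speech units, here is an illustrative version with synthetic data in place of real recordings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for the real training data: feature vectors
# summarizing windows of activity across 253 electrodes, labeled with
# the speech unit (e.g., a phoneme) being attempted at that moment.
rng = np.random.default_rng(0)
n_windows, n_channels, n_units = 2000, 253, 39  # 39 ~ English phoneme count
X = rng.normal(size=(n_windows, n_channels))
y = rng.integers(0, n_units, size=n_windows)

# Train a simple classifier; the study's actual models were far more
# sophisticated, but the supervised framing is the same.
decoder = LogisticRegression(max_iter=500)
decoder.fit(X, y)

# At run time, each new window is mapped to its most probable speech
# unit, and sequences of units are assembled into words and phrases.
new_window = rng.normal(size=(1, n_channels))
predicted_unit = decoder.predict(new_window)[0]
print(f"decoded speech unit: {predicted_unit}")
```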
To replicate and animate the facial muscles in Ann's avatar, the team used software from Speech Graphics, a company that makes AI-driven facial animation. The researchers developed specialized machine-learning algorithms that let the company's software sync with the signals coming from Ann's brain as she tried to speak, translating them into movements on the avatar's face.
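To illustrate what “translating brain signals into facial movements” can look like in code, here is a hedged sketch: decoded articulatory features for one frame are mapped linearly to blendshape weights, the kind of controls facial-animation rigs typically expose. The dimensions and the random mapping matrix are placeholders, not Speech Graphics' software or the study's learned model.

```python
import numpy as np

N_ARTICULATORY = 13  # assumed count of decoded jaw/lip/tongue features
N_BLENDSHAPES = 32   # assumed count of avatar face controls

# Illustrative linear mapping; in practice this would be learned from
# paired decoder output and animation data, not drawn at random.
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(N_BLENDSHAPES, N_ARTICULATORY))

def animate_frame(articulatory_features: np.ndarray) -> np.ndarray:
    """Map one frame of decoded articulatory features to blendshape
    weights in [0, 1] that drive the avatar's facial rig."""
    weights = W @ articulatory_features
    return np.clip(weights, 0.0, 1.0)

# Example: one frame of (synthetic) decoded features drives the face.
frame = rng.normal(size=N_ARTICULATORY)
print(animate_frame(frame))
```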
The team is now developing a wireless version so that the user won't need to be physically plugged into a computer.
After years of therapy, Johnson recovered the ability to make small movements and express facial emotions. She progressed from being fed through a feeding tube to eating soft or minced foods on her own. And now, thanks to artificial intelligence, she can even “talk.” It is still unclear when hospitals and medical research organizations will begin using the technology more widely.