Artificial Intelligence (AI) is in the news today, and people find it both inspiring and a bit frightening. AI can write an article, create a piece of art or a musical score, and now it can even read your mind and translate your thoughts into text.
Researchers at The University of Texas at Austin have created a semantic decoder – a new AI system – that can translate a person’s brain activity into an uninterrupted stream of text. This new decoder could be used to give a voice to people who are unable to speak, according to a press release from the university, and would greatly help people who have suffered debilitating strokes.
How does it work?
While other language decoding systems exist, they require invasive surgical implants. The semantic decoder is completely noninvasive: brain activity is measured with an fMRI (functional magnetic resonance imaging) scanner.
Participants in the study went through extensive training, listening to hours of podcasts inside the scanner. The decoder was then able to translate their thoughts while they listened to a new story or imagined telling one.
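To give a rough sense of what that training accomplishes, here is a minimal Python sketch (illustrative only, not the study’s actual code). It fits a simple regularized regression that predicts each brain voxel’s response from numerical features of the words being heard – roughly the kind of “encoding model” that noninvasive decoders of this sort rely on. All data, sizes, and names below are made-up placeholders.

```python
# Conceptual sketch of the training step: hours of listening data are used to
# fit a model that predicts each voxel's fMRI response from features of the
# words being heard. Everything here is random placeholder data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_timepoints, n_features, n_voxels = 1000, 64, 50

word_features = rng.normal(size=(n_timepoints, n_features))   # stand-in for story word features
true_weights = rng.normal(size=(n_features, n_voxels))
voxel_responses = word_features @ true_weights + rng.normal(
    scale=0.5, size=(n_timepoints, n_voxels))                 # simulated fMRI responses

# Regularized linear regression: one predictive model per voxel.
encoding_model = Ridge(alpha=1.0).fit(word_features, voxel_responses)
print("fit score on training data:",
      round(encoding_model.score(word_features, voxel_responses), 3))
```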
“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Alex Huth, an assistant professor of neuroscience and computer science at UT Austin, said in the press release. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
Decoding is not a word-for-word transcript
The system is based on a model similar to the ones behind OpenAI’s ChatGPT and Google’s Bard. Rather than producing an exact transcript of brain activity, the decoder captures the gist of what is being thought, reported SciTech Daily.
In one experiment, a study participant listened to a speaker say, “I don’t have my driver’s license yet,” and the participant’s thoughts were decoded as the speaker not even having learned to drive yet.
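For readers who want a feel for how a “gist” decoder can work in principle, the sketch below (not the researchers’ code; all candidate sentences and numbers are illustrative) shows one common idea: a language model proposes candidate sentences, an encoding model predicts the brain response each would evoke, and the candidate whose prediction best matches the measured scan is kept.

```python
# Minimal conceptual sketch of gist decoding: score candidate sentences by how
# well their predicted brain responses match the measured fMRI signal, and
# keep the best match. All functions are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS = 50  # number of fMRI voxels in this toy example

def propose_candidates(context):
    """Stand-in for a language model that suggests plausible continuations."""
    return [
        "I do not have my driver's license yet",
        "she has not even started to learn to drive",
        "the weather was cold and rainy that morning",
    ]

def predict_response(sentence):
    """Stand-in for an encoding model: maps text to a predicted voxel pattern."""
    seed = abs(hash(sentence)) % (2**32)  # repeatable pseudo-random pattern per sentence
    return np.random.default_rng(seed).normal(size=N_VOXELS)

def decode_gist(measured_response, context):
    """Keep the candidate whose predicted response best matches the scan."""
    candidates = propose_candidates(context)
    scores = [np.corrcoef(predict_response(c), measured_response)[0, 1]
              for c in candidates]
    return candidates[int(np.argmax(scores))]

# Simulate a scan that happens to resemble the "learning to drive" gist.
measured = predict_response("she has not even started to learn to drive") \
           + rng.normal(scale=0.1, size=N_VOXELS)
print(decode_gist(measured, context="talking about driving"))
```

The point of the sketch is that the output is whichever candidate sentence fits best, not a word-for-word readout – which is why the real system returns the gist of a thought rather than a transcript.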
While the process is effective, it is not yet practical outside the lab because it relies on bulky fMRI machines. The technology could, however, be transferred to a more portable system such as functional near-infrared spectroscopy (fNIRS).
“fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” Huth told SciTech Daily. “So, our exact kind of approach should translate to fNIRS.”
Using AI Technology for Good
While there is fear that this type of AI technology could become “Big Brother” and be used to spy on people’s thoughts, the researchers insist that the system cannot be used on someone who is unwilling. “A person needs to spend up to 15 hours lying in an MRI scanner, being perfectly still, and paying good attention to stories that they’re listening to before this really works well on them,” said Huth.
While AI is still developing, this is a good example of AI technology being used for good. “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” Jerry Tang, a doctoral student in computer science and a co-lead of the study, said in the press release. “We want to make sure people only use these types of technologies when they want to and that it helps them.”