Scientists have invented a language decoder that can translate a person’s thoughts into text using an artificial intelligence (AI) transformer similar to ChatGPT, reports a new study.
The breakthrough marks the first time that continuous language has been non-invasively reconstructed from human brain activity, which is read through a functional magnetic resonance imaging (fMRI) machine. The decoder was able to interpret the gist of stories that human subjects watched or listened to—or even simply imagined—using fMRI brain patterns, an achievement that essentially allows it to read people’s minds with unprecedented effectiveness. While this technology is still in its early stages, scientists hope it might one day help people with neurological conditions that affect speech to clearly communicate with the outside world.
However, the team that made the decoder also warned that brain-reading platforms could eventually have nefarious applications, including as a means of surveillance for governments and employers. Though the researchers emphasized that their decoder requires the cooperation of human subjects to work, they argued that “brain–computer interfaces should respect mental privacy,” according to a study published on Monday in Nature Neuroscience.
“Currently, language-decoding is done using implanted devices that require neurosurgery, and our study is the first to decode continuous language, meaning more than full words or sentences, from non-invasive brain recordings, which we collect using functional MRI,” said Jerry Tang, a graduate student in computer science at the University of Texas at Austin who led the study, in a press briefing held last Thursday.
“The goal of language-decoding is to take recordings of a user's brain activity and predict the words that the user was hearing or saying or imagining,” he noted. “Eventually, we hope that this technology can help people who have lost the ability to speak due to injuries like strokes, or diseases like ALS.”
Tang and his colleagues were able to produce their decoder with the help of three human participants, each of whom spent 16 hours in an fMRI machine listening to stories. The researchers trained an AI model, referred to in the study as GPT-1, on Reddit comments and autobiographical stories, and used it to link the semantic features of the recorded stories with the neural activity captured in the fMRI data. This way, it could learn which words and phrases were associated with certain brain patterns.
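For readers curious about the mechanics, this kind of setup is often described as an "encoding model": semantic features extracted by a language model are used to predict each brain voxel's response, and candidate word sequences can later be scored by how well their predicted activity matches new scans. The sketch below is purely illustrative and not the authors' actual pipeline; the shapes, variable names, and choice of ridge regression are assumptions for demonstration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical, randomly generated stand-ins for real data:
# language-model embeddings of the story words, aligned to fMRI scan times,
# and the recorded voxel responses for the same timepoints.
n_timepoints, n_features, n_voxels = 1000, 768, 2000
story_embeddings = np.random.randn(n_timepoints, n_features)  # GPT-style semantic features
fmri_responses = np.random.randn(n_timepoints, n_voxels)      # stand-in for BOLD signals

# Fit a regularized linear "encoding model" that predicts each voxel's
# activity from the semantic features of the words the participant heard.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(story_embeddings, fmri_responses)

# At decoding time, word sequences proposed by a language model could be
# scored by how closely their predicted brain activity matches a new scan.
def score_candidate(candidate_embeddings, observed_responses):
    predicted = encoding_model.predict(candidate_embeddings)
    return -np.mean((predicted - observed_responses) ** 2)
```

The key idea is that the decoder never reads words directly out of the brain; it searches over plausible sentences and keeps the ones whose predicted brain activity best fits what was actually recorded.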