Now, a “brain decoder” powered by AI can read your thoughts with startling accuracy.

According to a study published in Nature Neuroscience, researchers at the University of Texas at Austin have developed a method that can translate people’s brain activity — such as the unspoken thoughts swirling through our minds — into a continuous stream of text.
Researchers had previously shown that they can decode unspoken language by implanting electrodes in the brain and using an algorithm to turn the recorded activity into text on a computer screen. But that approach requires surgery and is highly invasive, so it appealed only to a narrow group of patients, such as people with paralysis, for whom the benefits clearly outweighed the costs. Scientists therefore developed techniques that need no implant surgery, though those could decode only basic brain activity.
Now we have a non-invasive brain-computer interface (BCI) that lets someone else read the broad strokes of what we’re thinking, even if we haven’t said a single word.
How is that even possible?
It comes down to combining two technologies: fMRI scans, which measure blood flow to different regions of the brain, and large AI language models of the kind that power ChatGPT.
In the University of Texas study, three participants listened to 16 hours of narrative podcasts like The Moth while researchers used an fMRI machine to track changes in blood flow in their brains. With this data and an AI model, the scientists were able to associate phrases with what each person’s brain looks like while hearing them.
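For readers who want a feel for how that mapping works, here is a minimal Python sketch of an “encoding model”: a regularized linear map from features of the words a person is hearing to the fMRI response those words evoke. Everything in it, from the feature sizes to the choice of ridge regression, is an illustrative assumption, not the study’s actual pipeline.

```python
# A toy "encoding model": predict fMRI voxel responses from features of
# the words a participant is hearing. All names, shapes, and data here
# are illustrative stand-ins, not the study's real code or data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 1000  # fMRI volumes recorded while listening to podcasts
n_features = 64      # dimensionality of the semantic features per word
n_voxels = 200       # brain locations whose blood flow we model

# Stand-ins for real data: semantic features of the narrated words and
# the measured fMRI response at each timepoint.
word_features = rng.normal(size=(n_timepoints, n_features))
fmri_response = rng.normal(size=(n_timepoints, n_voxels))

# Fit one regularized linear map from word features to voxel activity.
encoding_model = Ridge(alpha=1.0).fit(word_features, fmri_response)

# A decoder can later ask: "if the participant were hearing THIS
# phrase, what would their brain look like?" and compare the prediction
# against the actual scan.
predicted = encoding_model.predict(word_features[:5])
print(predicted.shape)  # (5, 200): one predicted response per phrase
```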
Because many candidate word sequences would be gibberish, the scientists also used a language model, specifically GPT-1, to narrow the vast space of possibilities down to well-formed English and to predict which words are most likely to come next in a sequence.
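Putting the two pieces together, you can picture the decoding step as a beam search: the language model proposes plausible next words, and the encoding model scores each candidate phrase by how well its predicted brain response fits the actual scan. The sketch below is a hypothetical toy; propose_next_words, features_of, and the random stand-in data are assumptions of mine, not the researchers’ code.

```python
# Toy beam-search decoder: a language model proposes next words, and an
# encoding model scores candidates against the measured fMRI scan.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Tiny stand-in encoding model and a fake "measured" scan, so that the
# sketch runs on its own.
encoding_model = Ridge(alpha=1.0).fit(rng.normal(size=(100, 64)),
                                      rng.normal(size=(100, 200)))
measured_scan = rng.normal(size=(1, 200))

def propose_next_words(prefix):
    # Stand-in for the language model (GPT-1 in the study), which keeps
    # candidates to well-formed English rather than gibberish.
    return ["home", "street", "storm"]

def features_of(text):
    # Stand-in for extracting semantic features from a candidate phrase.
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).normal(size=(1, 64))

def fit_to_brain(candidate):
    # Higher is better: how closely the brain response predicted for
    # this phrase matches what the scanner actually recorded.
    predicted = encoding_model.predict(features_of(candidate))
    return -float(np.linalg.norm(predicted - measured_scan))

def decode_step(beams, beam_width=3):
    # Extend each candidate phrase with each proposed word, then keep
    # only the extensions that best explain the measured scan.
    extended = [b + " " + w for b in beams for w in propose_next_words(b)]
    return sorted(extended, key=fit_to_brain, reverse=True)[:beam_width]

print(decode_step(["i was walking toward the"]))
```

Repeating that step word after word yields a running transcript: whichever candidate best explains the recording becomes the decoded text.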
The end result is a decoder that captures the gist, even though it sometimes gets things wrong. For instance, participants were asked to imagine telling a story while inside the fMRI machine. Afterward, they told it aloud so the scientists could check how closely the decoded version matched the original.
The decoder went to work when a participant thought, “Look for a message from my wife saying that she had changed her mind and was coming back,” and it rendered a rough paraphrase that preserved the gist of the thought.
Here’s another illustration. When a participant thought, “Coming down a hill at me on a skateboard and he was going really fast and he stopped just in time,” the decoder rendered it as, “He couldn’t get to me fast enough, he drove straight up into my lane and tried to ram me.”
It is not a word-for-word translation, but the general meaning is largely preserved. That puts this work well beyond what prior brain-reading technology could do, and it raises serious ethical questions.
Brain-computer interfaces’ harrowing ethical ramifications
It may be hard to believe this is not a scene from a Neal Stephenson or William Gibson novel. Yet this technology is already changing people’s lives. Over the past dozen years, several paralyzed patients have received brain implants that let them move a computer cursor or control robotic arms with their thoughts.
The BCIs being developed by Elon Musk’s Neuralink and Mark Zuckerberg’s Meta could one day allow you to operate your phone or computer just with your thoughts. These BCIs could capture thoughts directly from your neurons and transform them into words in real time.
Non-invasive BCIs that can read thoughts are still years away from commercialization, not least because an fMRI machine is far too bulky to carry around.
Is that a good thing? Like many cutting-edge breakthroughs, this one raises thorny ethical problems.
Let’s begin by stating the obvious. The last remaining privacy frontier is our minds. They serve as the foundation for our sense of self and our most private ideas. What is ours to govern if not the priceless three pounds of goo within our skulls?
Imagine a situation where businesses have access to individuals’ brain data. By going straight to the source — the consumer’s brain — they could obtain far more accurate information about our preferences. Advertisers in the burgeoning discipline of “neuromarketing” are already trying to do this by studying how viewers’ brains respond to ads. If advertisers got hold of vast amounts of brain data, you could find yourself feeling strong urges to buy particular things without fully understanding why.
Or picture a situation where governments or law enforcement use BCIs for surveillance or interrogation. In a world where police can listen in on your thoughts without your permission, the US Constitution’s protection against self-incrimination could lose all meaning. It’s a scenario reminiscent of the science fiction film Minority Report, in which a specialized police unit arrests people for crimes they haven’t yet committed.
Some neuroethicists contend that before these technologies are implemented, we need new human rights legislation to safeguard us from their potential for abuse.
Nita Farahany, author of The Battle for Your Brain, told me, “This research demonstrates how quickly generative AI is making it possible for even our thoughts to be read. We need to protect humanity with a right to self-determination over our brains and mental experiences before neurotechnology is used on a large scale in society.”