The system was trained on people listening to stories. It decoded words from brain data recorded while people were watching a silent video. For some of the videos we used, we actually had these audio description tracks. So when we decoded words in those movies, we could then compare them to the audio description tracks and ask, was it statistically similar? And it was. It was actually decently correlated with what was in the audio description track. But surprisingly, it really does kind of describe events that are happening in the movie. I was quite surprised by this, and I think it's pretty cool.
For the first time, researchers have found a way to non-invasively translate a person’s thoughts into text. Using fMRI scans and an AI-based decoder trained on a precursor to ChatGPT, the system can reconstruct brain activity to interpret the gist of a story someone is listening to, watching, or even just imagining telling. Ian Sample speaks to one of the team behind the breakthrough, the neuroscientist Dr Alex Huth, to find out how it works, where they hope to use it, and whether our mental privacy could soon be at risk. Help support our independent journalism at
theguardian.com/sciencepod