In Gallant’s experiment, people were shown movies while the team measured their brain patterns. An algorithm then used those signals to reconstruct a fuzzy, composite image, drawing on a massive database of YouTube videos. In other words, they took brain activity and turned it into pictures, revealing what a person was seeing.
For Gallant and his lab, this was just another demonstration of their technology. While his device has made plenty of headlines, he never actually set out to build a brain decoder. “It was one of the coolest things we ever did,” he says, “but it’s not science.” Gallant’s research focuses on figuring out how the visual system works, creating models of how the brain processes visual information. The brain reader was a side project, a coincidental offshoot of his actual scientific research. “It just so happens that if you build a really good model of the brain, then that turns out to be the best possible decoder.”
Right now, in order for Gallant to ‘read’ thoughts, a subject has to slide into a functional magnetic resonance imaging (fMRI) machine – a huge, expensive device that measures where blood is flowing in the brain. While fMRI is one of the best ways to measure brain activity, it’s not perfect, nor is it portable: subjects can’t move while inside the scanner.
And while comparing the brain image and the movie image side by side makes their connection apparent, the image that Gallant’s algorithm builds from brain signals isn’t quite like peering through a window. The resolution of the scans simply isn’t high enough to produce a clear picture.