Reconstructing Vision With MRI

There was some pretty cool news for neuroscience this week: my friend Jan sent out this video, which both confused and intrigued me, so I went looking for the explanation (ABC News has a good one).

Basically, what the researchers did was this: they put subjects into an MRI machine and had them watch hours of YouTube videos. They used functional MRI, which detects changing blood-oxygenation levels and uses them as an indicator of increased neural activity in a brain region (since extra-active neurons need extra glucose and oxygen delivered to them through the bloodstream). They then built a computer model connecting each subject’s brain activity to the features of the videos they were watching, essentially mapping which part of the visual system was active in response to which kind of visual stimulus.
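For a concrete feel for what "connecting brain activity to video features" might look like, here is a minimal sketch in Python. Everything in it is a stand-in: the feature vectors, the data shapes, and the choice of ridge regression are my illustrative assumptions, not the authors' actual pipeline.

```python
# Toy sketch of an fMRI "encoding model": learn a linear map from
# video features to each voxel's BOLD response. The features, shapes,
# and use of ridge regression are illustrative assumptions, not the
# authors' actual method.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 1000   # fMRI samples collected while watching videos
n_features = 50       # e.g. motion/edge features extracted per frame
n_voxels = 200        # voxels in visual cortex

# Stand-in training data: feature descriptions of the watched videos
# and the simultaneously recorded voxel responses.
video_features = rng.standard_normal((n_timepoints, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
voxel_responses = (video_features @ true_weights
                   + 0.5 * rng.standard_normal((n_timepoints, n_voxels)))

# Fit one regularized linear model per voxel: "which visual features
# drive activity in this part of the visual system?"
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(video_features, voxel_responses)

# Given a new video's features, the model predicts the brain activity
# it should evoke -- this is the forward direction of the map.
new_clip_features = rng.standard_normal((1, n_features))
predicted_activity = encoding_model.predict(new_clip_features)
print(predicted_activity.shape)  # (1, n_voxels)
```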

To test this map, they went backwards: they measured the brain activity of a subject watching a particular video, then ran a library of other YouTube videos through their computer model to see which ones the model predicted would produce similar brain activity. They overlaid the best matches into an estimated reconstruction of the original video the subject watched. If their model worked, this reconstruction should look similar to the original, and as you can see in their demonstration, it did.
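The backwards step can be sketched the same way: run every clip in a library through the fitted model, score how well each predicted response matches the measured brain activity, and average the best-matching clips. The correlation-based scoring and the simple weighted average below are my assumptions for illustration; the actual study used a more sophisticated Bayesian framework.

```python
# Toy sketch of the decoding step: score library clips by how well the
# encoding model's predicted brain activity matches what was actually
# measured, then average the frames of the best matches. All names and
# the correlation-based scoring are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_features, n_voxels = 50, 200
n_library_clips = 5000

# Pretend these weights came from the fitted encoding model above.
weights = rng.standard_normal((n_features, n_voxels))

# A library of candidate clips: their features and their raw pixels
# (one 64x64 frame per clip, for simplicity).
library_features = rng.standard_normal((n_library_clips, n_features))
library_pixels = rng.random((n_library_clips, 64, 64))

# Brain activity observed while the subject watched the mystery video.
observed_activity = rng.standard_normal(n_voxels)

# Forward-predict the activity each library clip *would* evoke...
predicted = library_features @ weights  # (n_library_clips, n_voxels)

# ...and score each clip by correlation with the observed activity.
pred_z = ((predicted - predicted.mean(axis=1, keepdims=True))
          / predicted.std(axis=1, keepdims=True))
obs_z = (observed_activity - observed_activity.mean()) / observed_activity.std()
scores = pred_z @ obs_z / n_voxels

# The reconstruction is an average of the best-matching clips,
# weighted by how well they matched.
top_k = np.argsort(scores)[-100:]
weights_pos = scores[top_k] - scores[top_k].min() + 1e-9  # keep weights positive
reconstruction = np.average(library_pixels[top_k], axis=0, weights=weights_pos)
print(reconstruction.shape)  # (64, 64)
```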

From ABC News:

California scientists have found a way to see through another person’s eyes.

Researchers from UC Berkeley were able to reconstruct YouTube videos from viewers’ brain activity — a feat that might one day offer a glimpse into our dreams, memories and even fantasies.

“This is a major leap toward reconstructing internal imagery,” said Jack Gallant, professor of psychology and coauthor of a study published today in Current Biology. “We are opening a window into the movies in our minds.”

… “If you can decode movies people saw, you might be able to decode things in the brain that are movie-like but have no real-world analog, like dreams,” Gallant said.

The brain activity measured in this study is just a fraction of the activity that lets us see moving images. Other, more complex areas help us interpret the content of those images — distinguish faces from lifeless objects, for example…

More models, Gallant said, mean better resolution. It also means a ton more data to analyze.

“We need really big computers,” Gallant said…

If the technology could be used to broadcast imagery, it could one day allow people who are paralyzed to control their environment by imagining sequences of movements. Already, brain waves recorded through electrodes on the scalp can flip a switch, allowing people with Lou Gehrig’s disease and other paralyzing conditions to choose letters on a computer monitor and communicate.

The possibilities are pretty cool. I have two things to note:

1) This study used the researchers themselves as subjects. That sounds pretty weird on its face, but it does happen, and given how the study worked I don’t think there was any room for them to accidentally fudge the results through bias: the subjects didn’t have to make any choices or interpret anything.

2) It’s not described in the ABC article, but it is on the authors’ website: the whole point of this study was to show that you can map people’s responses to quickly moving images using fMRI, which is difficult since the blood-flow response is orders of magnitude slower than the underlying neural activity. They accomplished this through computational magic that I will not try to get into.
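Without getting into the lab's actual solution, a toy example can at least show the problem. The BOLD signal measured by fMRI is roughly the underlying neural activity smeared out by a hemodynamic response that peaks several seconds after the neurons fire; a standard way to model this is to convolve the neural time course with a canonical double-gamma hemodynamic response function. The parameters below are common defaults, used here only to illustrate the blurring:

```python
# Illustration of why fast visual stimuli are hard to decode with fMRI:
# the measured BOLD signal is (approximately) neural activity convolved
# with a slow hemodynamic response that peaks ~5 s after the event.
# The double-gamma HRF below uses common default parameters; this shows
# the timing problem, not the authors' method for undoing it.
import numpy as np
from scipy.stats import gamma

dt = 0.1                      # seconds per sample
t = np.arange(0, 30, dt)     # 30 s of hemodynamic response

# Canonical double-gamma HRF: an early peak minus a smaller, later undershoot.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

# Fast "neural" events: bursts 200 ms apart would be trivially
# separable with electrical recordings, but...
neural = np.zeros(t.size)
neural[[50, 52, 54]] = 1.0    # three bursts within 0.4 s of each other

# ...after hemodynamic blurring they merge into one slow BOLD bump.
bold = np.convolve(neural, hrf)[: t.size]
print("neural events span %.1f s; BOLD peaks at %.1f s" % (0.4, t[bold.argmax()]))
```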
