THURSDAY, Sept. 22 (HealthDay News) — Researchers are getting closer to scanning the human brain, decoding what they find there, and then replaying the “movies in our heads” for all to see.
Taking a page from the world of sci-fi, scientists from the University of California, Berkeley have used functional magnetic resonance imaging (fMRI) scans to take a high-tech peek into the blood flow and neural signaling patterns of an individual’s brain.
In turn, they have created a computer program that carefully “reads” and decodes these patterns, then reconstructs them back into images that approximate what the individual had just seen.
In this case, the stimulus was a collection of Hollywood movie trailers. The UC Berkeley team says its ability to discern and recreate images based on computational analysis of brain activity is “a major leap” toward mind-reading.
“We are opening a window into the movies in our minds,” UC Berkeley neuroscientist and study co-author Jack Gallant said in a university news release.
Gallant and his colleagues published their report in the Sept. 22 issue of Current Biology.
This is not the first time Gallant’s team has attempted to gain access into the workings of the brain’s visual cortex.
In prior work, the group put together a computer algorithm to track and then predict brain activity sparked by the viewing of a series of black and white photographs. That effort enabled them to identify whatever picture each test subject was shown simply by “reading” their visual cortex map.
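In broad strokes, that identification step works like a matching game: a model predicts the voxel activity pattern each candidate photo should evoke, and the photo whose prediction best matches the measured scan is declared the one the subject saw. Here is a minimal sketch of the idea in Python; the linear model and all of the sizes and names are illustrative assumptions, not the team’s actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels, n_features, n_images = 500, 64, 120

# Hypothetical stand-ins: feature vectors for each candidate photo and a
# previously fitted linear encoding model (one weight vector per voxel).
image_features = rng.normal(size=(n_images, n_features))
encoding_weights = rng.normal(size=(n_features, n_voxels))

# Predicted voxel pattern for every candidate photo.
predicted = image_features @ encoding_weights          # (n_images, n_voxels)

# Simulated fMRI response to the (unknown) photo the subject viewed.
true_index = 42
measured = predicted[true_index] + rng.normal(scale=5.0, size=n_voxels)

# Identify the photo whose predicted pattern correlates best with the scan.
def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([corr(p, measured) for p in predicted])
print("identified photo:", int(scores.argmax()))       # 42, ideally
```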
This time around, Gallant’s group took their work to a new level, using themselves as guinea pigs in the process.
While spending hours lying motionless inside an fMRI machine, three research team members watched a set of movie trailers as the machine recorded blood flow through the visual cortex, a proxy for the underlying neural activity.
The readings were then fed into a computer that mapped out the brain as a collection of small three-dimensional cubes, or “voxels,” and recorded how each voxel responded to the shapes and motions on screen. This enabled the team to construct a model of how various parts of the brain process particular visual information.
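One common way to build such a per-voxel model (the team’s actual pipeline may differ) is to regress each voxel’s measured time course onto features extracted from the movie, such as local motion energy. A minimal Python sketch, in which the feature matrix, the simulated recordings, and the ridge penalty are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

n_timepoints, n_features, n_voxels = 1000, 200, 300

# Hypothetical motion-energy features extracted from the movie frames.
X = rng.normal(size=(n_timepoints, n_features))

# Simulated voxel recordings: an unknown linear mix of features plus noise.
true_W = rng.normal(size=(n_features, n_voxels))
Y = X @ true_W + rng.normal(scale=2.0, size=(n_timepoints, n_voxels))

# Fit one ridge-regression weight vector per voxel (closed form, shared X).
alpha = 10.0
W_hat = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

# W_hat[:, v] now predicts voxel v's response to any new movie's features.
print("per-voxel weight matrix:", W_hat.shape)
```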
The same team members were then exposed to a second series of film clips while inside the fMRI machine. After recording brain activity again, the researchers fed the computer model 18 million seconds of random videos taken from YouTube, giving it a vast library of candidate footage.
In the end, the computer successfully sifted through all of that digested information, pulling out the 100 YouTube clips whose predicted brain responses most closely matched the activity actually recorded, and blending them together.
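Conceptually, that last step scores every clip in the library by how well its predicted brain response correlates with the recorded one, then averages the best matches into a single composite. A hedged sketch of that averaging idea, with the clip frames and predicted responses standing in as random placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

n_clips, n_voxels = 10_000, 300
frame_shape = (16, 16)  # tiny stand-in for real video frames

# Hypothetical library: one representative frame per clip, plus the brain
# response a fitted encoding model predicts for each clip.
library_frames = rng.uniform(size=(n_clips, *frame_shape))
predicted_responses = rng.normal(size=(n_clips, n_voxels))

# The measured response to the footage we want to reconstruct.
measured = rng.normal(size=n_voxels)

# Score each clip by correlation between predicted and measured responses.
def zscore(a):
    return (a - a.mean(axis=-1, keepdims=True)) / a.std(axis=-1, keepdims=True)

scores = (zscore(predicted_responses) @ zscore(measured)) / n_voxels

# Average the 100 best-matching clips into a single composite frame.
top = np.argsort(scores)[-100:]
reconstruction = library_frames[top].mean(axis=0)
print("reconstructed frame shape:", reconstruction.shape)
```

Averaging many partly matching clips is also why such a reconstruction comes out blurred rather than sharp.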
The result: a blurry but accurate and flowing reconstruction of the movie trailers the team volunteers had viewed.
“Our natural visual experience is like watching a movie,” UC Berkeley postdoctoral researcher and study lead author Shinji Nishimoto explained. “[And] in order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.”
So far, so good, the study indicates. The team envisions a time when such technology could help people who cannot communicate verbally (such as coma and stroke patients) and those with physical impairments (such as paralysis patients).
But the researchers still face daunting hurdles.
“We need to know how the brain works in naturalistic conditions,” Nishimoto acknowledged, noting that he and his colleagues will next aim to unravel the mysteries of how the brain processes the random visuals of daily life.
More information
For more on the brain, visit Harvard’s Whole Brain Atlas.