Business Standard

New AI system can decode your mind

The advance can aid efforts to improve AI and lead to new insights into brain function

Press Trust of India  |  Washington 

Photo: Shutterstock

Scientists have developed a new system that can read the human mind, interpreting what a person is seeing by analysing brain scans.

The advance could aid efforts to improve artificial intelligence (AI) and lead to new insights into brain function.


Critical to the research is a type of algorithm called a convolutional neural network, which has been instrumental in enabling computers and smartphones to recognise faces and objects.
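The building block of such a network is the convolution operation, which slides a small filter over an image to detect local patterns such as edges. The following is a toy illustration of that operation only, not the study's network; the image, filter, and function names are made up for the example:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation), the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter responds strongly where brightness changes left-to-right
image = np.zeros((5, 5))
image[:, 3:] = 1.0                      # right half of the image is bright
kernel = np.array([[-1.0, 1.0]] * 2)    # simple vertical-edge detector
response = conv2d(image, kernel)
print(response)                         # peaks in the column spanning the edge
```

A deep network stacks many such filters, learned from data rather than hand-designed, which is what lets it recognise faces and objects.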

"That type of network has made an enormous impact in the field of computer vision in recent years," said Zhongming Liu, an assistant professor at Purdue University in the US.

"Our technique uses the neural network to understand what you are seeing," Liu said.

Convolutional neural networks, a form of "deep-learning" algorithm, have been used to study how the brain processes static images and other visual stimuli.

"This is the first time such an approach has been used to see how the brain processes movies of natural scenes - a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings," said Haiguang Wen, a doctoral student at Purdue University.

The researchers acquired 11.5 hours of functional magnetic resonance imaging (fMRI) data from each of three female subjects watching 972 video clips, including those showing people or animals in action and nature scenes.

The data was used to train the system to predict the activity in the brain's visual cortex while the subjects were watching the videos.
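This training step amounts to fitting an encoding model: features extracted from the video frames are regressed onto the measured fMRI responses in visual cortex. The sketch below uses synthetic data, illustrative dimensions, and plain ridge regression; none of the variable names or sizes come from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the study's actual sizes)
n_timepoints = 200   # fMRI volumes
n_features = 50      # features extracted from each video frame
n_voxels = 300       # voxels in the visual cortex

# Synthetic stand-ins for frame features and fMRI responses
X = rng.standard_normal((n_timepoints, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + 0.1 * rng.standard_normal((n_timepoints, n_voxels))

# Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Predict brain activity for new, unseen frames
X_test = rng.standard_normal((20, n_features))
Y_pred = X_test @ W
print(Y_pred.shape)  # (20, 300): predicted activity per voxel per frame
```

Once fitted, the weight matrix `W` maps any video's features to a predicted pattern of brain activity, and running the mapping in reverse supports decoding.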

The model was then used to decode fMRI data from the subjects and reconstruct the videos, even ones the model had never encountered before.

The model was able to accurately decode the fMRI data into specific image categories. Actual video images were then presented side-by-side with the computer's interpretation of what the person's brain saw, based on the fMRI data.
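One simple way to decode a brain pattern into an image category, shown here purely as an illustration (the study's actual decoder is more elaborate), is to compare the observed voxel pattern against a stored template for each category and pick the best match:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical category templates: a mean fMRI pattern per image category
n_voxels = 100
categories = ["face", "animal", "nature"]
templates = {c: rng.standard_normal(n_voxels) for c in categories}

def decode(pattern, templates):
    """Return the category whose template best correlates with the pattern."""
    return max(templates, key=lambda c: np.corrcoef(pattern, templates[c])[0, 1])

# A noisy observation generated from the "animal" template
observed = templates["animal"] + 0.3 * rng.standard_normal(n_voxels)
print(decode(observed, templates))  # animal
```

The same matching can run volume by volume, which is what makes near-real-time decoding of an ongoing movie feasible.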

"I think what is a unique aspect of this work is that we are doing the decoding nearly in real time, as the subjects are watching the video. We scan the brain every two seconds, and the model rebuilds the visual experience as it occurs," said Wen, lead author of the study published in the journal Cerebral Cortex.

The researchers were able to figure out how certain locations in the brain were associated with specific information a person was seeing.

"Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain's visual cortex," Wen said.

"By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene," he said.

(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

First Published: Tue, October 24 2017. 19:15 IST