Computer Uses Human Brain Signals to Model Visual Perception - Unite.AI

Brain Machine Interface




In a first-of-its-kind study, researchers at the University of Helsinki have demonstrated a technique in which a computer monitors human brain signals in order to model visual perception. In other words, the computer attempts to recreate what a person is picturing in their mind. The newly developed technique enables the computer to produce entirely new information, including fictional images that had never been shown before. 

The study was published in September in Scientific Reports, an open-access online journal covering multiple disciplines.

The researchers based the technique on a novel brain-computer interface. Traditionally, such interfaces support only one-way communication from the brain to the computer, enabling, for example, letters to be spelled out or a cursor to be moved. 

The work is the first to demonstrate that both the computer’s presentation of information and the brain signals it evokes can be modelled simultaneously using artificial intelligence (AI) methods. Human brain responses and a generative neural network interacted to produce images representing the visual characteristics the participants were focusing on.

Neuroadaptive Generative Modeling

The method is called neuroadaptive generative modeling, and its effectiveness was tested with 31 participants. The participants were shown hundreds of AI-generated images of a diverse range of people while their EEG was recorded.

The participants were told to focus on certain features in the images, such as particular faces or expressions. They were then rapidly presented with a series of face images while their EEG signals were fed to a neural network. The network inferred whether the brain had registered a given image as matching what the participant was focusing on.

Using this data, the neural network estimated what kind of faces the participants were thinking of, and the resulting computer-generated images were evaluated by the participants. The generated images closely matched the features the participants had focused on, with the experiment achieving an accuracy rate of 83%. 
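The loop described above pairs an EEG relevance classifier with a generative model: images the brain flags as matching the target steer the generator toward the attended features. The following is a minimal, illustrative sketch of that idea, with a simulated threshold classifier and random vectors standing in for the study's actual EEG network and GAN latent codes; all function names, dimensions, and thresholds here are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify_relevance(eeg_epoch):
    # Hypothetical stand-in for the study's EEG classifier: a neural
    # network inferred from brain responses whether a displayed image
    # matched the participant's target. Here we simulate that decision
    # with a simple threshold on a synthetic evoked-response feature.
    return eeg_epoch.mean() > 0.5

def neuroadaptive_update(latents, eeg_epochs):
    """Average the latent codes of images flagged as relevant.

    `latents` are the generator's latent vectors for the displayed
    images; the mean of the relevant ones can be decoded by the
    generative model into a new image emphasizing the attended feature.
    """
    relevant = [z for z, eeg in zip(latents, eeg_epochs)
                if classify_relevance(eeg)]
    if not relevant:
        return None
    return np.mean(relevant, axis=0)

# Simulated session: 8 displayed images with 512-dim latent codes.
latents = rng.normal(size=(8, 512))
# Simulated EEG epochs: images 2 and 5 evoke a strong response.
eeg = rng.normal(loc=0.0, scale=0.1, size=(8, 64))
eeg[2] += 1.0
eeg[5] += 1.0

z_new = neuroadaptive_update(latents, eeg)
# In the real pipeline, z_new would be fed to the GAN's generator to
# render a brand-new face image for the participant to evaluate.
print(z_new.shape)  # (512,)
```

The averaging step is what lets the system produce images that "had never appeared before": the new latent vector lies between the displayed examples rather than copying any one of them.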

Tuukka Ruotsalo is an Academy of Finland Research Fellow at the University of Helsinki, Finland, as well as an Associate Professor at the University of Copenhagen, Denmark.

“The technique combines natural human responses with the computer's ability to create new information. In the experiment, the participants were only asked to look at the computer-generated images. The computer, in turn, modelled the images displayed and the human reaction toward the images by using human brain responses. From this, the computer can create an entirely new image that matches the user's intention,” says Ruotsalo.

Other Potential Benefits

Besides generating images of the human face, this new study demonstrated how computers could augment human creativity.

“If you want to draw or illustrate something but are unable to do so, the computer may help you to achieve your goal. It could just observe the focus of attention and predict what you would like to create,” Ruotsalo says. Beyond creative applications, the researchers believe the technique may help in understanding perception and the underlying processes of the mind.

“The technique does not recognise thoughts but rather responds to the associations we have with mental categories. Thus, while we are not able to find out the identity of a specific ‘old person' a participant was thinking of, we may gain an understanding of what they associate with old age. We, therefore, believe it may provide a new way of gaining insight into social, cognitive and emotional processes,” says Senior Researcher Michiel Spapé.

Spapé also believes that these results could be used within psychology.

“One person's idea of an elderly person may be very different from another's. We are currently uncovering whether our technique might expose unconscious associations, for example by looking if the computer always renders old people as, say, smiling men.”


Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.