Mind-reading AI creates images from brain activity

In an article published in Scientific Reports earlier this year, researchers at Radboud University in the Netherlands, led by doctoral candidate Thirza Dado, combined non-invasive brain imaging with machine learning in an effort to read people’s minds, or at least to recreate the image a person is looking at. It’s a fascinating experiment, even if it’s easy to exaggerate its success. Still, mind-reading AI may not be as far off as we think.

fMRI and AI Imaging

Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique that detects brain activity by measuring changes in blood flow to different areas of the brain. It has been used for several decades to identify which parts of the brain are responsible for which functions.

In this study, Dado’s team went further: they used an AI model (specifically a generative adversarial network, or GAN) to interpret the fMRI readings and convert them back into an image of the face being viewed. The results are quite impressive.
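To make the idea concrete, here is a minimal sketch of that two-stage pipeline in Python. Everything in it is illustrative: the voxel count, the latent dimensionality, the weight matrix, and the stand-in generator are placeholders for this article, not the study’s actual model.

```python
import numpy as np

# Minimal sketch of the two-stage decoding idea (not the authors' code).
# Stage 1: a learned linear mapping turns an fMRI response vector into a
#          GAN latent code.
# Stage 2: the GAN's generator turns that latent code into a face image.

N_VOXELS = 4096    # assumed number of fMRI voxels in the decoded region
LATENT_DIM = 512   # assumed dimensionality of the GAN latent space

rng = np.random.default_rng(0)
W = rng.normal(size=(LATENT_DIM, N_VOXELS))   # stands in for a fitted decoder

def generator(latent: np.ndarray) -> np.ndarray:
    """Placeholder for a pretrained GAN generator: latent code -> RGB image."""
    # A real generator would be a deep network; here we just return noise
    # of the right shape so the pipeline runs end to end.
    return rng.random((128, 128, 3))

def reconstruct_face(fmri_response: np.ndarray) -> np.ndarray:
    """Map one trial's fMRI response to a reconstructed face image."""
    latent = W @ fmri_response    # stage 1: brain activity -> latent code
    return generator(latent)      # stage 2: latent code -> image

image = reconstruct_face(rng.normal(size=N_VOXELS))
print(image.shape)  # (128, 128, 3)
```

Keeping the brain-side model this simple, a mapping into a pretrained generator’s latent space, is a common design choice in this kind of work: the GAN does the heavy lifting of producing a photorealistic face.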

A trained AI

“Stimulus-reconstructions. The three blocks show twelve examples of arbitrarily chosen but representative test sets. The first column displays the facial stimuli while the second and third columns display the corresponding reconstructions from the brain activations of subjects 1 and 2, respectively.”

Thirza Dado/Radboud University/Scientific Reports

In the study, Dado’s team scanned participants with fMRI while showing them computer-generated faces: 36 faces, each repeated 14 times, made up the test set, and another 1,050 generated faces made up the training set (collected over nine sessions).
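Repeating each test face 14 times is the sort of design that lets researchers average out scan-to-scan noise before decoding. Averaging across repetitions is a typical step rather than one spelled out above, and the shapes below are hypothetical:

```python
import numpy as np

# Hypothetical shapes matching the design described above:
# 36 test faces x 14 repetitions x N voxels.
N_VOXELS = 4096
rng = np.random.default_rng(1)
test_responses = rng.normal(size=(36, 14, N_VOXELS))

# Averaging the 14 repetitions of each test face is a common way to
# boost signal-to-noise before decoding (an assumption here, not a
# detail stated in the article).
mean_test_responses = test_responses.mean(axis=1)   # shape (36, N_VOXELS)
print(mean_test_responses.shape)
```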

Using the fMRI data from the 1,050 unique training faces, they trained the AI model to convert brain imaging readings back into realistic images. (It works a bit like a more primitive version of DALL-E 2 or Stable Diffusion.)
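Conceptually, training that conversion can be as simple as fitting a regression from the recorded brain responses to the latent codes that generated the training faces, the kind of fMRI-to-latent mapping sketched earlier. The ridge regression and random data below are stand-ins for illustration, not the study’s actual fitting procedure:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Sketch of fitting an fMRI-to-latent decoder on the training set:
# 1,050 faces, each with a known GAN latent code and a recorded
# fMRI response. All numbers and data here are placeholders.
N_TRAIN, N_VOXELS, LATENT_DIM = 1050, 4096, 512
rng = np.random.default_rng(2)

train_fmri = rng.normal(size=(N_TRAIN, N_VOXELS))       # brain responses
train_latents = rng.normal(size=(N_TRAIN, LATENT_DIM))  # latents that generated the faces

decoder = Ridge(alpha=1.0)
decoder.fit(train_fmri, train_latents)

# At test time, predicted latents would be passed to the GAN generator
# to render the reconstructed faces.
predicted_latents = decoder.predict(train_fmri[:1])
print(predicted_latents.shape)  # (1, 512)
```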

The results of the study are therefore based on the AI model’s interpretation of the fMRI data for the 36 faces in the test set. You can see a sample above: the images in the first column are the target faces, and the images in the second and third columns are the reconstructions the AI generated from the two subjects’ brain activity.

Is it mind reading?

While it’s easy to pick out a few examples where the image the AI (re)created closely matches the target, it’s hard to call this mind reading. The study measured the AI’s accuracy by whether the reconstruction matched the target’s gender, age, and pose, and whether it correctly reproduced glasses and a smile, not by whether the generated face was actually recognizable as the target.
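In other words, the evaluation checks a handful of labelled attributes rather than identity. A hypothetical illustration of that kind of attribute matching, with made-up labels:

```python
# For each test face, compare a few labelled attributes of the target and
# of the reconstruction, and report the fraction that agree. The labels
# below are invented for the example.
ATTRIBUTES = ["gender", "age_group", "pose", "glasses", "smiling"]

target = {"gender": "female", "age_group": "young", "pose": "frontal",
          "glasses": False, "smiling": True}
reconstruction = {"gender": "female", "age_group": "young", "pose": "frontal",
                  "glasses": False, "smiling": False}

matches = sum(target[a] == reconstruction[a] for a in ATTRIBUTES)
accuracy = matches / len(ATTRIBUTES)
print(f"attribute match: {accuracy:.0%}")  # 80%
```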

It’s also important to note that the AI was trained on the test subjects’ own fMRI data. If you or I were to jump into an fMRI machine, the results would probably be hopelessly scattered. We are still a long way from being able to accurately read anyone’s mind, with or without a car-sized scientific instrument. Still, it’s fascinating to see AI tools and machine learning play a role in areas beyond just winning art contests.
