AI Turns Thoughts Into Video

A groundbreaking research paper published in the journal Nature on May 3 describes a new AI tool that can translate brain signals into video.

In the pursuit of understanding how the brain controls behavior, scientists have long studied both the activity of brain cells and the actions of animals, hoping to gain deeper insight into how the brain works and to predict behavior. However, the methods previously used have limitations and haven't yielded consistent results. In short, researchers needed a better way to analyze this information.

So, a team of scientists from the Swiss Federal Institute of Technology Lausanne developed a novel machine-learning algorithm called Cebra (pronounced “zebra”).

The new artificial intelligence tool was tested on rodents in order to reconstruct and predict what they saw, based on mapping their neural activity to specific frames in videos.

The research involved both direct and indirect measurements of the rodents' brain activity: electrode probes inserted into the visual cortex of the brain, and optical probes in genetically modified mice whose neurons glowed green each time they were activated.

Cebra was able to convert this data into a movie of its own, with 95% accuracy. This is leaps and bounds ahead of where the research stood a decade ago, when it was possible to decode only simple shapes; it is now possible to decode movies, frame by frame.

Not only was Cebra able to recreate what the mice were seeing; the researchers could also take a new mouse they had never studied, have it watch the same video, and Cebra would predict which frame the mouse was actually watching.
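To give a feel for what "mapping neural activity to specific frames" means, here is a minimal toy sketch in Python. It is not Cebra's actual implementation; it simply assumes that neural recordings have already been projected into some learned embedding space (a hypothetical `frame_embeddings` array, one vector per video frame) and decodes a new recording by finding the nearest frame embedding.

```python
import numpy as np

# Illustrative sketch only -- NOT the Cebra implementation. It shows the
# general idea of decoding which video frame an animal is watching by
# comparing an embedded neural recording against per-frame embeddings.

rng = np.random.default_rng(0)

n_frames = 100   # frames in the (hypothetical) video
embed_dim = 8    # dimensionality of the assumed embedding space

# Stand-in for learned embeddings of neural activity, one per frame.
frame_embeddings = rng.normal(size=(n_frames, embed_dim))

def decode_frame(activity_embedding, frame_embeddings):
    """Return the index of the video frame whose embedding is closest."""
    dists = np.linalg.norm(frame_embeddings - activity_embedding, axis=1)
    return int(np.argmin(dists))

# Simulate a noisy neural reading taken while frame 42 was on screen.
true_frame = 42
noisy = frame_embeddings[true_frame] + 0.1 * rng.normal(size=embed_dim)

predicted = decode_frame(noisy, frame_embeddings)
print(predicted)  # with small noise, this matches true_frame
```

In the real system, the embeddings come from a contrastive machine-learning model trained on the recordings themselves; the toy above only illustrates the final nearest-frame lookup step.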

So, does this mean it's possible to reconstruct what someone sees from brain signals alone? Not yet, but the introduction of an algorithm for building an artificial neural network that captures brain dynamics so accurately brings us a big step closer.