A digital twin of the visual cortex of a mouse brain will allow neurological experiments to be conducted, enabling fast and efficient virtual tests to study brain activity * Action movies shown to mice helped generate training data for the artificial intelligence

Stanford University researchers have used artificial intelligence to create a “digital twin” of a mouse’s visual cortex that can predict brain activity in response to novel visual stimuli. These digital models could revolutionize neuroscience by enabling limitless and efficient virtual experiments and revealing how the brain organizes information.
Creating a digital twin for the mouse brain
Just as pilots use flight simulators to safely practice complex maneuvers, scientists may soon be able to conduct experiments on a highly realistic simulation of the mouse brain. In a new study, researchers at Stanford Medicine and their collaborators developed an artificial intelligence model that creates a "digital twin" of a mouse's visual cortex, the area of the brain responsible for processing visual information.
The training process and predictive ability
The digital twin was trained on large datasets of neuronal activity recorded from real mice as they watched movie clips. Once trained, the model could accurately predict how tens of thousands of neurons would respond to new images and videos.
Digital twins can facilitate the investigation of brain activity and make the experimental process more efficient.
"If you build a model of the brain that is very accurate, it means you can do a lot more experiments," said Andreas Tolias, a professor of ophthalmology at Stanford Medicine and senior author of the paper, published April 10 in the journal Nature. "The most promising experiments can then be tested in a real brain."
Ability to generalize and work with new data
Unlike previous AI models of the visual cortex, which could only simulate the brain's response to the types of stimuli they were exposed to during training, the new model can predict the brain's response to a wide range of new visual inputs. It can even infer anatomical properties of each neuron.
The model is an example of a "foundation model", a relatively new type of artificial intelligence model that can learn from large datasets and then apply that knowledge to new tasks and new types of data, or, as the researchers put it, "generalize outside the distribution the model learned during training."
(A well-known example of a foundation model is ChatGPT, which learns from vast amounts of text in order to understand and generate new text.)
“In many ways, the seed of intelligence is the ability to generalize correctly,” said Tolias. “The ultimate goal—the Holy Grail—is to generalize to scenarios beyond the distribution learned during training.”
Movies for mice
To train the new AI model, the researchers first recorded the brain activity of real mice as they watched movies—movies designed for humans. The idea was that these movies would best simulate what the mice might see in natural environments.
“It’s very difficult to get a realistic movie for mice, because nobody makes Hollywood movies for mice,” Tolias said. But action movies were good enough for the study.
Mice have low-resolution vision, similar to human peripheral vision, which means they mainly perceive motion rather than detail or color. "The visual system of mice responds strongly to motion stimuli, so we showed them movies with a lot of action," Tolias explained.
Over several short viewing sessions, the researchers recorded more than 900 minutes of brain activity from eight mice watching action-packed movie clips, such as the movie Mad Max. During the screenings, cameras tracked the mice's eye movements and behavior.
The researchers used the aggregated data to train a core model, which could then be adapted into a digital twin of each individual mouse with additional training.
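The two-stage idea described here — pretrain one shared core on pooled recordings from many mice, then adapt a small per-animal component — can be illustrated with a toy sketch. Everything below is an assumption for illustration: the data are synthetic, and a simple linear model stands in for the actual deep network the researchers trained.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_latent = 20, 5

# Ground-truth shared structure: a latent subspace common to all mice
# (purely synthetic; real inputs would be movie frames and recorded activity).
core_true = rng.normal(size=(n_features, n_latent))

def make_mouse(n_neurons):
    """Return a sampler of (stimulus features X, neural responses Y)
    for one hypothetical mouse with its own neuron-specific wiring."""
    readout = rng.normal(size=(n_latent, n_neurons))
    def sample(n_samples):
        X = rng.normal(size=(n_samples, n_features))
        Y = X @ core_true @ readout + 0.01 * rng.normal(size=(n_samples, n_neurons))
        return X, Y
    return sample

# Stage 1 -- "pretrain" a shared core on pooled recordings from several mice:
# fit each mouse's full stimulus-to-response map, stack the maps, and keep
# the dominant left singular vectors as the shared core (a crude linear
# stand-in for jointly training one network on all animals).
maps = []
for _ in range(3):
    X, Y = make_mouse(n_neurons=30)(500)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    maps.append(W)
U, _, _ = np.linalg.svd(np.hstack(maps), full_matrices=False)
core_learned = U[:, :n_latent]

# Stage 2 -- adapt to a new mouse by fitting only a small per-animal readout
# on top of the frozen core, which needs far less data than starting from scratch.
new_mouse = make_mouse(n_neurons=25)
X_fit, Y_fit = new_mouse(200)
readout_new, *_ = np.linalg.lstsq(X_fit @ core_learned, Y_fit, rcond=None)

# The resulting "digital twin" predicts this mouse's responses to stimuli
# it was never fitted on.
X_test, Y_test = new_mouse(200)
Y_pred = X_test @ core_learned @ readout_new
r2 = 1 - np.sum((Y_test - Y_pred) ** 2) / np.sum((Y_test - Y_test.mean(0)) ** 2)
print(f"held-out R^2 for the new mouse: {r2:.3f}")
```

The design point the sketch captures is that the expensive, data-hungry part (the core) is learned once from all animals, while adapting to a new individual only requires fitting a small readout.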
Accurate predictions
The digital twins were able to accurately simulate the neural activity of their biological counterparts in response to a variety of novel visual stimuli, including videos and still images. The massive amount of training data was key to their success, Tolias noted: "They were impressively accurate because they were trained on such huge datasets."
Although the models were trained exclusively on neuronal activity, they were able to generalize to other types of data. The digital twin of a particular mouse was able to predict the anatomical locations and cell types of thousands of neurons in the visual cortex, as well as the connections between them.
The researchers verified these predictions using high-resolution electron microscopy images of the visual cortex of the same mouse, part of a larger project to map the structure and function of the mouse visual cortex at a level never seen before. The results of this project, known as MICrONS, were published simultaneously in Nature.
Opening the black box
Because a digital twin can function well beyond the lifespan of a mouse, researchers will be able to perform an almost unlimited number of experiments on what is effectively the same animal. Experiments that once took years can be done in hours, and millions of experiments can run in parallel, accelerating research into how the brain processes information and into the principles of intelligence.
“We are trying to open the black box, so to speak, to understand the brain at the level of individual neurons or populations of neurons and how they integrate to encode information,” said Tolias.
In fact, the new models are already providing new insights. In a related study, published simultaneously in Nature, researchers used digital twinning to discover how neurons in the visual cortex select the other neurons with which they make connections.
Scientists already knew that similar neurons tend to form connections, much as people form friendships. The digital twin revealed which kind of similarity matters most: neurons prefer to connect with neurons that respond to the same stimulus, such as the color blue, over neurons that respond to the same area of visual space.
“It’s like choosing friends based on what they like rather than where they are,” Tolias said. “We’ve learned a more precise rule about how the brain is organized.”
Future research directions
The researchers plan to expand their models to additional brain regions and different animals, including primates with more advanced cognitive abilities.
“Ultimately, I believe it will be possible to build digital twins for at least parts of the human brain,” Tolias said. “This is just the tip of the iceberg.”
The lead author of the study is Eric Wang, PhD, a medical student at Baylor College of Medicine.