Johns Hopkins Engineers Use AI for Deeper Look Into Brains of Mice - Unite.AI

A group of biomedical engineers at Johns Hopkins has developed an artificial intelligence (AI) training strategy to gain a deeper understanding of the brains of mice. The new strategy captures images of mouse brain cells as they are active. 

According to the team, the AI system can be used alongside specialized ultra-small microscopes to detect exactly where and when cells are activated during movement, learning, and memory. By collecting insightful data with this new strategy, scientists could eventually understand how the brain functions and is impacted by disease. 

The new research was published in the journal Nature Communications.

Xingde Li, Ph.D., is a professor of biomedical engineering at Johns Hopkins University School of Medicine. 

“When a mouse's head is restrained for imaging, its brain activity may not truly represent its neurological function,” says Li. “To map brain circuits that control daily functions in mammals, we need to see precisely what is happening among individual brain cells and their connections, while the animal is freely moving around, eating and socializing.”

Gathering Data With Ultra-Small Microscopes

The team set out to gather this detailed data by creating ultra-small microscopes that can be mounted on top of a mouse's head. However, because the microscopes are only a couple of millimeters in diameter, they limit how much imaging technology can be carried. The mouse's breathing and heart rate can also affect the accuracy of the data captured by the microscope, so the researchers estimated they would need to exceed 20 frames per second to eliminate such disturbances.

“There are two ways to increase frame rate,” Li says. “You can increase scanning speed and you can decrease the number of points scanned.” 

In earlier work, the engineering team had already pushed the scanner to its physical speed limit of six frames per second. That left the second strategy: raising the frame rate by scanning fewer points per frame, which caused the microscope to capture lower-resolution data.
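The tradeoff described above can be sketched with a few lines of arithmetic. This is a minimal illustration, not the paper's actual specifications: the point rate and frame sizes below are assumed numbers chosen only to show why scanning fewer points raises the frame rate.

```python
# Illustrative sketch of the frame-rate tradeoff for a point-scanning
# microscope. All numbers here are assumptions for illustration.

def frame_rate(points_per_second: float, points_per_frame: int) -> float:
    """Frames per second = how fast points are scanned / points per frame."""
    return points_per_second / points_per_frame

# Assume a full frame is a 512x512 raster and the scanner's physical
# speed limit yields 6 fps at full sampling:
full_frame = 512 * 512
scan_speed = 6 * full_frame  # points per second at the hardware limit

print(frame_rate(scan_speed, full_frame))    # 6.0 fps: below the ~20 fps target

# Scanning 5x fewer points per frame raises the frame rate past the
# target, at the cost of resolution (the gaps the AI must restore):
sparse_frame = full_frame // 5
print(frame_rate(scan_speed, sparse_frame))  # ~30 fps
```

With the scanner already at its speed limit, the only free variable is the number of points per frame, which is exactly the quantity the AI restoration is meant to compensate for.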


Training an AI Program

According to Li’s hypothesis, an AI program could be trained to recognize and restore the missing points, which would result in higher resolution. However, one of the major challenges of such an approach is that there is a lack of similar images of mouse brains to train the AI against. 

The team overcame this by developing a two-stage training strategy. In the first stage, they trained the AI to identify the building blocks of the brain from images of fixed samples of mouse brain tissue. In the second, they trained it to recognize those building blocks in a head-restrained living mouse under the ultra-small microscope. This technique enabled the AI to recognize brain cells despite natural structural variation and the motion caused by the mouse's breathing and heartbeat.
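The core idea of training a network to "restore the missing points" relies on generating pairs of sparse and fully sampled images. The sketch below is a hypothetical illustration of that data-preparation step using NumPy; the function name, the random-pixel undersampling scheme, and the stand-in image are all assumptions, and the actual pipeline in the paper is far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(full_image: np.ndarray, keep_fraction: float):
    """Simulate sparse scanning by keeping a random subset of pixels
    and zeroing the rest. The (sparse, full) pairs are what a
    restoration network would be trained on."""
    mask = rng.random(full_image.shape) < keep_fraction
    sparse = np.where(mask, full_image, 0.0)
    return sparse, full_image

# Stand-in for a fully sampled fixed-tissue image (stage one of the
# two-stage strategy would use real images like this as ground truth):
img = rng.random((64, 64))
sparse, target = make_training_pair(img, keep_fraction=0.2)

# Roughly 20% of the pixels survive; the network's job is to fill
# the zeroed-out gaps back in:
print(int((sparse > 0).sum()), "of", sparse.size, "pixels retained")
```

Training on fixed-tissue pairs first, then fine-tuning on images from a head-restrained living mouse, is what lets the network cope with the structural variation and motion it will see in freely moving animals.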

“The hope was that whenever we collect data from a moving mouse, it will still be similar enough for the AI network to recognize,” says Li.

The researchers tested the AI program to determine if it could accurately enhance mouse brain images by incrementally increasing the frame rate. They found that the AI could restore the image quality up to 26 frames per second. 

To gauge how the AI tool would perform with a mini microscope attached to a freely moving mouse, the researchers examined the activity spikes of individual brain cells triggered as the animal moved around its environment.

“We could never have seen this information at such high resolution and frame rate before,” says Li. “This development could make it possible to gather more information on how the brain is dynamically connected to action on a cellular level.”

According to the team, with further training the AI program could accurately interpret images at up to 104 frames per second.


Alex McFarland is a tech writer who covers the latest developments in artificial intelligence. He has worked with AI startups and publications across the globe.