
AI News

27 Feb 2026

8 min read

How to track neurons in moving animals 600x faster

Track neurons in moving animals with AI that aligns and labels cells in seconds, saving weeks of work.

MIT scientists built three AI tools that let labs track neurons in moving animals with single-pixel precision. BrainAlignNet aligns cells 600x faster at 99.6% accuracy, AutoCellLabeler names cells at 98%, and CellDiscoveryNet learns types on its own. Months of manual work now take seconds across worms and jellyfish.

Watching a brain work while an animal moves is hard. Cells shift. Bodies bend. Old methods forced experts to label neurons by hand for hours per video. A new MIT study shows how three deep-learning tools align frames, name cells, and even discover cell types without help. The result is fast, accurate, and ready for many labs.

How AI helps track neurons in moving animals

BrainAlignNet: fast, pixel-precise alignment

BrainAlignNet solves the “where did that cell go?” problem. It registers pairs of images even when the head of a roundworm bends or a jellyfish deforms. It runs about 600 times faster than older pipelines. Tests show 99.6% accuracy at the single-pixel level. The team used it in C. elegans and also in the jellyfish Clytia hemisphaerica, which twists and stretches in many ways.
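The article doesn't publish BrainAlignNet's code, but the output of non-rigid registration is easy to picture: a dense displacement field that warps one frame onto another so each cell lands back at a stable position. A toy sketch with NumPy/SciPy, using a hard-coded shift in place of a learned field:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Toy "fixed" frame: a bright blob (a stand-in for a fluorescent neuron).
fixed = np.zeros((32, 32))
fixed[10:14, 10:14] = 1.0

# "Moving" frame: the same blob shifted, as if the animal's head bent.
moving = np.zeros((32, 32))
moving[13:17, 15:19] = 1.0

# A registration network would predict a displacement per pixel;
# here we hard-code the known shift for illustration.
dy, dx = 3.0, 5.0
rows, cols = np.mgrid[0:32, 0:32].astype(float)
coords = np.array([rows + dy, cols + dx])

# Warp the moving frame back onto the fixed frame's grid.
warped = map_coordinates(moving, coords, order=1)

# After warping, the blobs overlap exactly.
overlap = np.abs(warped - fixed).max()
print(overlap)  # 0.0
```

The real network predicts a different displacement at every pixel, which is what lets it handle bending and stretching rather than a single global shift.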

AutoCellLabeler: naming neurons on the fly

AutoCellLabeler answers “which neuron is this?” It learns from human-labeled examples and from multi-color genetic tags like NeuroPAL. It identified over 100 neuron types in the worm head at 98% accuracy. It still worked well with fewer colors, cutting the need for perfect staining. What once took trained staff up to five hours per animal now runs in seconds.
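The article doesn't describe AutoCellLabeler's architecture, only that it learns identities from human-labeled, multicolor examples and applies them to new cells. That supervised idea looks like this in miniature, using a nearest-centroid stand-in on synthetic "color signature" features (the neuron names and numbers are illustrative only):

```python
import numpy as np

# Synthetic color-signature features for two neuron classes,
# with a handful of human-labeled examples per class.
labeled = {
    "AVA": np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]]),
    "RIM": np.array([[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]]),
}

# "Training": one centroid per labeled class.
centroids = {name: feats.mean(axis=0) for name, feats in labeled.items()}

def classify(feature):
    """Assign an unlabeled cell to the nearest class centroid."""
    return min(centroids, key=lambda n: np.linalg.norm(centroids[n] - feature))

print(classify(np.array([0.82, 0.18])))  # AVA
print(classify(np.array([0.12, 0.88])))  # RIM
```

The actual tool is a deep network over full image volumes, but the contract is the same: a few expert labels in, automatic identities out.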

CellDiscoveryNet: unsupervised cell discovery

CellDiscoveryNet goes further. It compares data across many animals and clusters cells into types without any human labels. In tests, it matched expert performance at identifying more than 100 neuron types. This helps when labels are missing, dyes vary, or you want to map a nervous system at scale.
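The article doesn't spell out CellDiscoveryNet's training objective; the core idea, grouping cells by shared features across animals without any labels, can be illustrated with a generic clustering algorithm (a minimal k-means over simulated features, purely as an analogy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated feature vectors for cells pooled from several animals:
# two hypothetical "cell types" with distinct signatures.
type_a = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(30, 2))
type_b = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(30, 2))
cells = np.vstack([type_a, type_b])

# Minimal k-means: no labels are used, only feature similarity.
# Initialize one center in each region for a deterministic result.
centers = np.array([cells[0], cells[30]])
for _ in range(10):
    # Assign each cell to its nearest center.
    dists = np.linalg.norm(cells[:, None] - centers[None], axis=2)
    labels = dists.argmin(axis=1)
    # Recompute centers from the assignments.
    centers = np.array([cells[labels == k].mean(axis=0) for k in range(2)])

# Cells of the same simulated type end up in the same cluster.
print(labels[:30], labels[30:])
```

The real network operates on whole-brain imaging data across animals rather than 2-D points, but the unsupervised principle is the same: structure in the data, not human labels, defines the types.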

Why this matters for brain-and-behavior science

Neuroscientists want to link activity patterns to actions. That needs consistent cell tracking across time and across animals. These tools make that practical.

– Speed: Process long videos in minutes, not months.
– Accuracy: Single-pixel alignment and near-human (or better) labeling.
– Scale: Apply the same pipeline across many animals and experiments.
– Flexibility: Works in wiggly worms and deforming jellyfish; adapts to fewer color channels.
– Cost: Replaces expensive, repetitive manual annotation with automation.

From data deluge to insight

Modern microscopes flood labs with images. People often must trade speed for accuracy, or vice versa. With BrainAlignNet, AutoCellLabeler, and CellDiscoveryNet, teams can keep both. They can align, identify, and compare neurons across whole datasets while preserving the details that matter for decoding circuits.

From worms to jellyfish—and beyond

The tools were built for transparent animals like C. elegans, where every neuron can glow. But the core ideas—non-rigid registration and learned cell identity—fit many imaging tasks. Researchers already used BrainAlignNet on jellyfish, where any part moves relative to any other. The same approach can help map cells in other organisms or in human tissue sections.

What it takes to put this to work

Data and labels

– Multi-spectral fluorescence improves labeling, but the system can handle fewer colors.
– A small set of expert labels can bootstrap AutoCellLabeler.
– For cross-animal mapping without labels, use CellDiscoveryNet.

Workflow basics

– Use BrainAlignNet to align frames and link cells over time.
– Run AutoCellLabeler to assign neuron identities within each animal.
– Apply CellDiscoveryNet to cluster and match cell types across animals.
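Wired together, the three steps form a simple pipeline. The skeleton below shows only the order of operations; the function names and return values are hypothetical placeholders, not the released tools' APIs:

```python
# Hypothetical pipeline skeleton; the function names stand in for
# whatever interfaces the released tools actually expose.

def align_frames(frames):
    """Step 1 (BrainAlignNet's role): register frames so each
    cell keeps a stable position over time."""
    return [{"frame": i, "aligned": True} for i, _ in enumerate(frames)]

def label_cells(aligned):
    """Step 2 (AutoCellLabeler's role): assign an identity and a
    confidence score to each tracked cell within one animal."""
    return [{"cell": "AVA", "confidence": 0.97} for _ in aligned]

def match_across_animals(per_animal_labels):
    """Step 3 (CellDiscoveryNet's role): cluster and match cell
    types across animals."""
    return {"animals_matched": len(per_animal_labels)}

# One run over two toy "recordings".
recordings = [["f0", "f1"], ["f0", "f1", "f2"]]
per_animal = [label_cells(align_frames(r)) for r in recordings]
print(match_across_animals(per_animal))  # {'animals_matched': 2}
```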

Quality checks

– Spot-check a subset against expert labels.
– Track uncertainty; review low-confidence calls.
– Iterate with improved training data if needed.
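A minimal version of the first two checks, assuming the labeler emits a name and a confidence score per cell (the neuron names, scores, and threshold below are made up for illustration):

```python
# Hypothetical automated labels with confidences, plus expert labels
# for a spot-check subset.
auto = [("AVA", 0.99), ("RIM", 0.62), ("AIB", 0.95), ("AVB", 0.41)]
expert = ["AVA", "RIM", "AIY", "AVB"]

# Spot-check: agreement rate against the expert subset.
agree = sum(a == e for (a, _), e in zip(auto, expert))
accuracy = agree / len(expert)

# Uncertainty triage: send low-confidence calls to human review.
THRESHOLD = 0.8
review_queue = [name for name, conf in auto if conf < THRESHOLD]

print(accuracy)      # 0.75
print(review_queue)  # ['RIM', 'AVB']
```

If spot-check accuracy drops or the review queue grows, that is the signal to retrain with improved labels, closing the iteration loop in the third check.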

What’s next

Teams are expanding cell labeling in jellyfish and building microscopes to image free-swimming animals. As more neurons are tagged and recorded in natural motion, these AI tools will help extract clean signals and stable identities, even when bodies bend, twist, and stretch.

This study shows that you can now reliably track neurons in moving animals at scale. With pixel-level alignment, high labeling accuracy, and even unsupervised discovery, labs can bridge brain activity to behavior faster and more affordably than before.

(Source: https://neurosciencenews.com/ai-neuron-tracking-movement-30183/)


FAQ

Q: How do these AI tools help researchers track neurons in moving animals?
A: The tools automate alignment, annotation, and cross-animal cell discovery so researchers can follow individual fluorescent neurons as bodies bend and deform. BrainAlignNet aligns frames, AutoCellLabeler assigns cell identities from human-labeled training data, and CellDiscoveryNet clusters cell types without labels, enabling much faster analysis than manual methods.

Q: What is BrainAlignNet and how accurate is it?
A: BrainAlignNet is a registration network that links neurons across image frames with single-pixel precision using a semi-supervised training approach. It runs about 600 times faster than older pipelines, achieved 99.6% accuracy in tests on C. elegans, and was also applied to the jellyfish Clytia hemisphaerica.

Q: How does AutoCellLabeler identify specific neuron types?
A: AutoCellLabeler learns from human-annotated examples and multispectral genetic labels like NeuroPAL to assign identities to neurons in each image. It labeled over 100 cell types in the worm head with about 98% accuracy and still performed well with fewer color channels.

Q: What can CellDiscoveryNet do without any human training data?
A: CellDiscoveryNet performs unsupervised clustering across many animals to discover and match cell types without labeled examples. In the study it identified more than 100 cell types and matched the performance of trained human labelers.

Q: How much time do these AI tools save compared to manual annotation?
A: The study reports that tasks which once required trained staff up to five hours per animal now run in seconds or minutes with the automated pipeline. More broadly, BrainAlignNet runs about 600 times faster than the lab's prior methods, and months of manual labor are replaced with near-instant analysis.

Q: Are these tools limited to worms and jellyfish, or can they be adapted to other tissues?
A: While developed for transparent animals like C. elegans and used on Clytia hemisphaerica jellyfish, the authors say the core neural-network approaches should be straightforward to adapt to other large series of microscopic images, including human tissue samples. This adaptability makes them useful for many imaging contexts that need alignment and annotation of dense cell types.

Q: What data and labels are needed to run each tool effectively?
A: Multi-spectral fluorescence data improves labeling, and a small set of expert annotations can bootstrap AutoCellLabeler, while CellDiscoveryNet can operate without labels for cross-animal mapping. In practice the recommended workflow is to use BrainAlignNet for frame alignment, then AutoCellLabeler to assign identities within animals, and CellDiscoveryNet to cluster and match types across animals.

Q: How should labs check the accuracy of automated neuron tracking and labeling?
A: The article recommends spot-checking a subset of automated labels against expert annotations, tracking uncertainty and reviewing low-confidence calls, and iterating with improved training data when needed. These quality checks help maintain single-pixel alignment and near-human labeling performance.
