Living Rat Neurons Trained to Perform AI Computations in Real-Time
#AI


Chips Reporter

Japanese researchers have demonstrated that cultured rat cortical neurons can autonomously generate complex temporal signals through a closed-loop reservoir computing system, potentially opening new pathways for brain-machine interfaces.

A team of researchers from Tohoku University and Future University Hakodate in Japan has achieved a significant breakthrough in bio-computing by training living rat cortical neurons to perform real-time AI computations. The study, published March 12 in the journal Proceedings of the National Academy of Sciences, demonstrates that cultured neurons can autonomously generate complex temporal signals when integrated with machine learning frameworks.

[Image: Rat brain]

The experimental setup involved integrating living neurons with high-density microelectrode arrays and microfluidic devices to create a closed-loop reservoir computing system. This system learned to produce periodic and chaotic waveforms without any external input, marking a notable advancement in the field of biological computing.

Technical Architecture and Implementation

The researchers built an experimental framework that recorded spike trains from neurons across a 26,400-electrode array with a 17.5-micrometer pitch. The spike trains were filtered into continuous activity signals and decoded through a linear readout layer. The decoded output was then fed back to the neurons as electrical stimulation, completing a feedback loop that cycled roughly every 333 milliseconds.
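In outline, each loop cycle turns new spikes into a smooth activity vector, applies a linear readout, and returns the result as the next stimulation value. A minimal sketch of one such cycle (the array size, leaky filter, and all variable names here are illustrative assumptions, not the study's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 128            # number of recorded units (illustrative; one per well)
DT = 0.333         # loop period in seconds (~333 ms, as reported)
TAU = 1.0          # filter time constant (assumed)

w = np.zeros(N)    # linear readout weights (set by learning, see below)
r = np.zeros(N)    # filtered activity vector

def loop_step(spike_counts, r, w):
    """One cycle of the closed loop: filter spikes, decode, and return
    the scalar output that would be fed back as stimulation."""
    # leaky integration turns discrete spike counts into a smooth signal
    r = r + (DT / TAU) * (spike_counts - r)
    z = w @ r          # linear readout
    return z, r

# simulate a few cycles with random stand-in spike counts
for _ in range(5):
    spikes = rng.poisson(2.0, size=N)
    z, r = loop_step(spikes, r, w)
```

With untrained (zero) weights the output stays at zero; the learning rule described next is what shapes `w` so the loop produces the target waveform.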

The system utilized an algorithm called FORCE (First-Order Reduced and Controlled Error) learning, which continuously adjusted the decoder to minimize the error between the network's output and a target waveform. This real-time optimization allowed the living neural network to adapt its behavior dynamically.
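FORCE learning is conventionally implemented as recursive least squares on the readout weights: each cycle, the decoder error is measured and the weights are adjusted so the same activity pattern would yield a smaller error. A sketch under that standard formulation (the study's hyperparameters are not given here, and the random activity vectors merely stand in for filtered neural signals):

```python
import numpy as np

N = 128
ALPHA = 1.0                 # regularization scale (assumed)
P = np.eye(N) / ALPHA       # running estimate of the inverse correlation
w = np.zeros(N)             # readout weights

def force_update(r, target, w, P):
    """One recursive-least-squares (FORCE-style) decoder update."""
    z = w @ r                 # current readout
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)   # gain vector
    e = z - target            # error before the update
    w = w - e * k             # move the readout toward the target
    P = P - np.outer(k, Pr)   # update the inverse correlation estimate
    return w, P, e

rng = np.random.default_rng(1)
t = np.arange(200) * 0.333
target = np.sin(2 * np.pi * t / 10.0)   # 10-second sine, as in the study
for yi in target:
    r = rng.standard_normal(N)          # stand-in for filtered activity
    w, P, e = force_update(r, yi, w, P)
```

Because the update runs every loop cycle, the decoder and the living network co-adapt in real time rather than being trained offline.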

Microfluidic Pattern Control

A critical enabling technology was the use of PDMS microfluidic films to constrain how the neurons connected. Without physical constraints, cultured neurons typically form dense, highly synchronized networks that fire in lockstep. These homogeneous networks failed to learn any of the target signals, highlighting the importance of controlled connectivity patterns.

Instead, the researchers confined neuronal cell bodies to 128 square wells, each roughly 100x100 micrometers, with each well holding an average of 14.6 neurons. The wells were linked by microchannels in two configurations: a lattice design with uniform nearest-neighbor connections, and a hierarchical design with sparser, multi-scale connections.

Both patterned configurations dramatically reduced pairwise neural correlations compared to unpatterned cultures (0.11 for the lattice and 0.12 for the hierarchical design, versus 0.45), increasing the dimensionality of the network's dynamics. Lattice networks consistently outperformed hierarchical ones across all target waveforms, likely because their denser intermodular connections produced higher firing rates, giving the linear decoder more signal to work with.
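Mean pairwise correlation of this kind is simply the average of the off-diagonal entries of the units' correlation matrix. The toy data below illustrates how a strong shared drive (the synchronized, unpatterned case) pushes the value up while mostly independent activity keeps it low; the mixing weights and unit counts are assumptions for illustration only:

```python
import numpy as np

def mean_pairwise_corr(x):
    """Mean off-diagonal entry of the correlation matrix.
    x: (units, timepoints) array of filtered activity."""
    c = np.corrcoef(x)
    n = c.shape[0]
    return c[~np.eye(n, dtype=bool)].mean()

rng = np.random.default_rng(2)
T = 1000
common = rng.standard_normal(T)          # shared network-wide drive

# synchronized culture: every unit dominated by the common drive
sync = 0.8 * common + 0.2 * rng.standard_normal((20, T))
# patterned culture: units mostly driven by independent noise
desync = 0.2 * common + 0.8 * rng.standard_normal((20, T))

corr_sync = mean_pairwise_corr(sync)
corr_desync = mean_pairwise_corr(desync)
```

Lower mean correlation means the units carry less redundant information, which is what gives the linear readout a richer set of signals to combine.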

Computational Capabilities Demonstrated

The system successfully learned to generate sine waves with periods of 4, 10, and 30 seconds, as well as triangle and square waves. Remarkably, the same culture preparation could be retrained to oscillate at different frequencies, demonstrating the system's adaptability.
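The periodic targets themselves are simple closed-form signals. A sketch of how such targets might be generated at the loop's roughly 333 ms sampling rate (the function name and waveform conventions are illustrative, not taken from the study):

```python
import numpy as np

def target_waveform(kind, period, t):
    """Periodic training targets: sine, triangle, or square waves
    with a given period in seconds."""
    phase = (t / period) % 1.0
    if kind == "sine":
        return np.sin(2 * np.pi * phase)
    if kind == "triangle":
        return 4 * np.abs(phase - 0.5) - 1      # ramps between -1 and 1
    if kind == "square":
        return np.where(phase < 0.5, 1.0, -1.0)
    raise ValueError(kind)

t = np.arange(0, 30, 0.333)   # one sample per loop cycle
sine10 = target_waveform("sine", 10.0, t)     # 10 s period, as reported
```

Retraining the same culture on a different frequency amounts to swapping in a new target series and letting the decoder re-converge.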

Beyond simple periodic signals, the researchers demonstrated that the system could approximate a Lorenz attractor, a three-dimensional chaotic trajectory, with correlations above 0.8 between the predicted and target signals in each of the three dimensions during the learning phase.
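A Lorenz-attractor target can be produced by integrating the standard Lorenz equations. The snippet below generates such a trajectory and shows the per-dimension correlation metric in question; the Euler step, parameters, and the noisy stand-in "prediction" are illustrative assumptions, not the study's setup:

```python
import numpy as np

def lorenz_trajectory(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    """Euler-integrated Lorenz system: the chaotic 3-D target signal."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = (x, y, z)
    return xyz

traj = lorenz_trajectory(5000)

# per-dimension correlation between a (mock) prediction and the target,
# the kind of metric the study reports exceeding 0.8 during learning
noisy = traj + 0.1 * np.random.default_rng(3).standard_normal(traj.shape)
corrs = [np.corrcoef(traj[:, d], noisy[:, d])[0, 1] for d in range(3)]
```

Matching a chaotic trajectory is a harder test than a periodic one, since small output errors are amplified by the feedback loop rather than averaged away.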

"This work shows that living neuronal networks are not only biologically meaningful systems but may also serve as novel computational resources," said Hideaki Yamamoto, a professor at Tohoku University's Research Institute of Electrical Communication, in a press release published on the institution's website.

Limitations and Future Directions

Despite the promising results, the system faced several limitations. Performance degraded once training was halted and the system ran autonomously, with mean squared error increasing in 99% of trials. The feedback loop's roughly 333-millisecond latency also limited the system's ability to track fast-changing or sharp-edged waveforms.

The researchers noted that reducing this delay through specialized hardware or alternative filtering could expand the range of learnable targets. Future applications could potentially extend to brain-machine interfaces and neuroprosthetic devices, where living neural networks might provide more natural and adaptive computational capabilities.

This research represents a significant step toward understanding how biological neural networks can be harnessed for computational tasks, potentially bridging the gap between artificial and biological intelligence systems. The ability to pattern neuronal connectivity and train living networks to perform specific computational tasks opens new avenues for both fundamental neuroscience research and practical applications in bio-computing and neurotechnology.
