Baylor College of Medicine researchers discovered that the hippocampus processes language and predicts upcoming words even in fully anesthetized patients, challenging long-held assumptions about consciousness and opening new opportunities for brain-computer interfaces, speech prosthetics, and artificial intelligence development.
Baylor College of Medicine researchers have found that the human brain performs sophisticated language processing and predictive coding while in an unconscious state induced by general anesthesia. The findings, published in Nature, upend long-standing assumptions about the relationship between consciousness and cognitive function, with potential applications for brain-computer interfaces (BCIs), speech prosthetics, and artificial intelligence development.

The study was led by Dr. Sameer Sheth, professor and Cullen Foundation Endowed Chair of Neurosurgery at Baylor College of Medicine, a McNair Scholar, and director of the Gordon and Mary Cain Pediatric Neurology Research Foundation Laboratories within the Duncan Neurological Research Institute at Texas Children’s Hospital. Sheth and his team worked with patients undergoing epilepsy surgery, a unique cohort that allowed direct access to the hippocampus, a brain region traditionally associated with memory formation and retrieval rather than language processing.
To record neural activity, the team used Neuropixels probes, a high-density neural recording technology that had not previously been applied to the hippocampus. Neuropixels probes can capture signals from hundreds of individual neurons simultaneously, providing granular data on how specific cells respond to stimuli. Previous research on language processing under anesthesia relied on coarser measures such as EEG, which cannot track the behavior of individual neurons; this is the first study to observe single-neuron language processing in unconscious patients.
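To make that concrete, the sketch below illustrates the standard first step of this kind of single-neuron analysis: binning spike-sorted probe output into a neurons-by-time firing-rate matrix that downstream analyses can use. This is not the study's actual pipeline; the data and names are illustrative, and spike sorting itself is assumed to have already happened.

```python
import numpy as np

# Illustrative sketch only: bin spike-sorted output from a high-density probe
# into a (neurons x time bins) firing-rate matrix. Assumes each spike has
# already been assigned to a neuron; all names and data are hypothetical.

def firing_rate_matrix(spike_times, duration_s, bin_s=0.01):
    """spike_times: list of arrays, one array of spike times (s) per neuron."""
    n_bins = int(duration_s / bin_s)
    edges = np.linspace(0.0, duration_s, n_bins + 1)
    rates = np.empty((len(spike_times), n_bins))
    for i, times in enumerate(spike_times):
        counts, _ = np.histogram(times, bins=edges)
        rates[i] = counts / bin_s  # convert spike counts to spikes/s
    return rates

# Toy usage: 3 "neurons" with random spike times over a 10 s recording
rng = np.random.default_rng(0)
toy_spikes = [np.sort(rng.uniform(0, 10, size=rng.integers(50, 200)))
              for _ in range(3)]
rates = firing_rate_matrix(toy_spikes, duration_s=10.0)
print(rates.shape)  # (3, 1000)
```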
The research proceeded in two phases. First, patients were exposed to repetitive tones interrupted by occasional deviant sounds. The team found that hippocampal neurons could reliably distinguish the deviant tones, and that this ability improved over the course of the experiment, indicating a form of neural plasticity, or learning, occurring entirely without conscious awareness. This result alone challenged the assumption that learning requires wakefulness, as anesthesia was previously thought to suppress such adaptive neural changes.
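A minimal sketch of this kind of oddball analysis, on simulated data (the paper's actual statistics may differ), asks two questions: do responses to deviant tones differ from responses to standards, and does that separation grow across the session?

```python
import numpy as np
from scipy import stats

# Hypothetical oddball analysis on simulated spike counts: the deviant
# response is built to grow over the session, mimicking plasticity.

rng = np.random.default_rng(1)
n_trials = 400
is_deviant = rng.random(n_trials) < 0.1              # ~10% oddball tones
drift = np.linspace(0, 4, n_trials)                  # deviant response grows
rates = rng.poisson(lam=10 + is_deviant * (2 + drift))  # evoked spike counts

# Overall discrimination: deviant vs. standard firing
t, p = stats.ttest_ind(rates[is_deviant], rates[~is_deviant])
print(f"deviant vs standard: t={t:.2f}, p={p:.1e}")

# Plasticity: separation in the first vs. second half of the session
half = n_trials // 2
for name, sl in [("early", slice(0, half)), ("late", slice(half, None))]:
    d = rates[sl][is_deviant[sl]].mean() - rates[sl][~is_deviant[sl]].mean()
    print(f"{name} half: deviant - standard = {d:.2f} spikes")
```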
The second phase tested more complex language processing. Researchers played short stories to anesthetized patients while recording hippocampal neural activity. They found that the hippocampus could differentiate parts of speech, including nouns, verbs, and adjectives, based on distinct patterns of neuron firing. Even more surprisingly, neural signals predicted upcoming words in sentences, a behavior known as predictive coding.
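As a rough illustration of what differentiating parts of speech from firing patterns means operationally, the sketch below trains a simple cross-validated classifier on simulated population activity. The study's actual decoding methods are not detailed here; the data, labels, and model choice are all assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical decoding sketch: can part of speech be read out from
# population firing patterns? Data are simulated (words x neurons).

rng = np.random.default_rng(2)
n_words, n_neurons = 300, 50
pos = rng.integers(0, 3, n_words)            # 0=noun, 1=verb, 2=adjective
tuning = rng.normal(size=(3, n_neurons))     # class-specific population patterns
X = tuning[pos] + rng.normal(scale=2.0, size=(n_words, n_neurons))

scores = cross_val_score(LogisticRegression(max_iter=1000), X, pos, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```

Above-chance accuracy in such a test is what it means for a brain region to "differentiate" word categories: the population firing pattern carries enough information for a simple readout to recover the label.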
“The brain appears to anticipate what comes next in a story, even without conscious awareness,” Sheth said. “This kind of predictive coding is something we associate with being awake and attentive, yet it’s happening here in an unconscious state.”
Dr. Benjamin Hayden, professor of neurosurgery at Baylor and a co-author of the study, noted that these results suggest cognitive functions like language comprehension and prediction do not require consciousness. Instead, consciousness likely depends on coordinated activity across multiple brain regions, rather than the function of any single structure like the hippocampus. This reframes how researchers understand disorders of consciousness, including coma, vegetative states, and the effects of general anesthesia, which may suppress regional coordination rather than individual region function.
The findings also draw a direct parallel to artificial intelligence systems, specifically large language models (LLMs) that power tools like ChatGPT. LLMs generate text by predicting the next most likely token in a sequence, a process strikingly similar to the predictive coding observed in the anesthetized hippocampus. The study suggests that biological and artificial systems may rely on similar information-processing strategies, which could help researchers improve AI efficiency or design more biologically plausible models. It also provides a new avenue for testing AI theories against biological reality, as the hippocampus offers a clear, measurable system for studying predictive processing.
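The parallel is easy to state concretely. The toy sketch below shows the core LLM operation in miniature: given the current token, look up a probability distribution over possible next tokens and emit the most likely one. A real model computes these probabilities with a trained neural network over long contexts; here they are hand-set purely for illustration.

```python
import numpy as np

# Toy next-token prediction: P[i, j] is the (hand-set) probability that
# token j follows token i. Real LLMs learn these distributions from data.

vocab = ["the", "brain", "predicts", "words", "."]
P = np.array([
    [0.0, 0.8, 0.0, 0.2, 0.0],   # after "the"
    [0.0, 0.0, 0.9, 0.0, 0.1],   # after "brain"
    [0.3, 0.0, 0.0, 0.7, 0.0],   # after "predicts"
    [0.0, 0.0, 0.0, 0.0, 1.0],   # after "words"
    [1.0, 0.0, 0.0, 0.0, 0.0],   # after "."
])

token = vocab.index("the")
for _ in range(4):
    token = int(np.argmax(P[token]))  # greedy: take the most likely next token
    print(vocab[token], end=" ")      # -> brain predicts words .
```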
For the neurotechnology sector, the results open new possibilities for BCIs and speech prosthetics. Current speech BCIs require patients to be awake and able to focus on generating cortical signals, limiting their use to people with intact cortical function. If the hippocampus can generate usable language signals even when a patient is unconscious or has cortical damage, these systems could be deployed for people who have lost the ability to speak due to stroke, traumatic brain injury, or neurodegenerative disease.
“Can we use these signals to deploy and run a speech prosthetic for some of the parts of the brain that are damaged by stroke or injury? These are questions that we can now consider in relation to this part of the brain,” said Dr. Vigi Katlowitz, the study’s first author and a neurosurgery resident at Baylor.
The research was funded in part by the National Institutes of Health (grant U01 NS121472), the McNair Foundation, and the Gordon and Mary Cain Pediatric Neurology Research Foundation. Additional support came from the Optical Imaging & Vital Microscopy Core at Baylor College of Medicine. The study involved 20 co-authors across multiple institutions, including Texas Children’s Hospital, Massachusetts General Hospital, and the University of Minnesota.
The authors note several limitations to the work. The findings apply only to the specific type of general anesthesia used in the study, and may not translate to other unconscious states such as sleep, coma, or different anesthetic regimens. The study also focused exclusively on the hippocampus, so it is unclear how widespread similar unconscious processing is across other brain regions. Follow-up research will need to test additional anesthetic types, brain regions, and patient cohorts to validate the results.
Sheth said the work pushes the field to rethink the definition of consciousness. “The brain is doing much more behind the scenes than we fully understand,” he said. For neurotech startups and AI researchers alike, the study provides a new foundation for building tools that interact with the brain’s unconscious processing capabilities, rather than relying solely on conscious cortical signals.
