The Biological LLM: How the Brain's Language Network Reshapes Our Understanding of AI
Even in a world saturated with large language models and AI chatbots, many of us struggle to fully accept that fluent writing can emerge from an unthinking machine. That's because, for many, finding the right words feels inseparable from thought itself—not merely the product of some separate computational process.
But what if our neurobiological reality includes a system that behaves remarkably like an LLM? Long before the rise of ChatGPT, cognitive neuroscientist Ev Fedorenko began investigating how language operates in the adult human brain. The specialized system she has described—which she terms "the language network"—maps the correspondences between words and their meanings. Her research suggests that, in fundamental ways, we do carry a biological version of an LLM—a mindless language processor—within our own skulls.
"You can think of the language network as a set of pointers," Fedorenko explains. "It's like a map, and it tells you where in the brain you can find different kinds of meaning. It's basically a glorified parser that helps us put the pieces together—and then all the thinking and interesting stuff happens outside of [its] boundaries."
For the past 15 years, Fedorenko has been gathering biological evidence of this language network in her lab at the Massachusetts Institute of Technology. Unlike a large language model, the human language network doesn't simply string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (speech, writing, sign language) and representations of meaning encoded elsewhere in the brain (including episodic memory and social cognition, capabilities that current LLMs lack).
Nor is the human language network particularly large: If all its tissue were clumped together, it would be about the size of a strawberry. Yet damage to it has profound consequences. An injured language network can produce forms of aphasia in which sophisticated cognition remains intact but trapped within a brain unable to express its thoughts or make sense of incoming words.
From Linguistics to Neuroscience: The Polyglot Researcher
Fedorenko's fascination with language began in childhood. Growing up in the Soviet Union in the 1980s, she learned five languages (English, French, German, Spanish, and Polish) at her mother's insistence, on top of her native Russian. Despite significant hardships during the collapse of communism, when Fedorenko "lived through a few years of being hungry," she excelled academically and earned a full scholarship to Harvard University.
At Harvard, she initially planned to study linguistics but later added psychology as a second major. "The [linguistics] classes were interesting, but they felt kind of like puzzle-solving, not really figuring out how things work in reality," she recalls.
Three years into her graduate studies at MIT, Fedorenko pivoted again, this time into neuroscience. She began collaborating with Nancy Kanwisher, who had first identified the fusiform face area, a brain region specialized for facial recognition. Fedorenko wanted to find the equivalent for language.
"At that point, it was possible to read pretty much everything that was published [on the subject], and I thought the foundations were pretty weak," Fedorenka said. "As you can imagine, that [assessment] was not so popular with some people. But after a while they saw I was not going away."
Following a steady stream of findings, in 2024 Fedorenko published a comprehensive review in Nature Reviews Neuroscience defining the human language network as a "natural kind"—an integrated set of regions, exclusively specialized for language, that resides in "every typical adult human brain."
The Language Network as a Natural Kind
"There's a core set of areas in adult brains that acts as an interconnected system for computing linguistic structure," Fedorenko explains. "They store the mappings between words and meanings, and rules for how to put words together. When you learn a language, that's what you learn: You learn these mappings and the rules. And that allows us to use this 'code' in incredibly flexible ways."
But what does she mean by calling it a "natural kind"? That term refers to something physical you can point to, much like the digestive system or the circulatory system.
"These systems that people have discovered [in the brain], including the language network and some parts of the visual system, are like organs," Fedorenko clarifies. "For example, the fusiform face area is a natural kind: It's meaningfully definable as a unit. In the language network, there are basically three areas in the frontal cortex in most people. All three of them are on the side of the left frontal lobe. There's also a couple of areas that fall along the side of the middle temporal gyrus, this big hunk of meat that goes along the whole temporal lobe. Those are the core areas."
The unity of this network becomes apparent through brain imaging. When researchers place people in fMRI scanners and observe responses to language versus control conditions, these regions consistently activate together. Fedorenko's team has now scanned approximately 1,400 people, building probabilistic maps that estimate where these regions will typically be located.
"The topography is a little bit variable across people, but the general patterns are very consistent," she notes. "Somewhere within those broad frontal and temporal areas, everybody will have some tissue that is reliably doing linguistic computations."
Beyond Broca's Area: Redefining Language Processing
Fedorenko's research has significant implications for how we understand classical language areas in the brain, such as Broca's area.
"Broca's area is actually incredibly controversial," she states. "I would not call it a language region; it's an articulatory motor-planning region. Right now, it's being engaged to plan the movements of my mouth muscles in a way that allows me to say what I'm saying. But I could say a bunch of nonsense words, and it would be just as engaged. So it's an area that takes some sound-level representation of speech and figures out the set of motor movements you would need [to produce it]. It's a downstream region that the language network sends information to."
This distinction highlights a crucial aspect of Fedorenko's work: the separation between language processing and higher-level cognition. If the language network isn't producing speech and isn't directly involved in thinking, what exactly is its function?
"The language network is basically an interface between lower-level perceptual and motor components and the higher-level, more abstract representations of meaning and reasoning," Fedorenko explains.
She breaks down language into two fundamental processes:
Language production: "You have this fuzzy thought, and then you have a vocabulary—not just of words, but larger constructions, and rules for how to connect them. You search through it to find a way to express the meaning you're trying to convey using a structured sequence of words. Once you have that utterance, then you go to the motor system to say it out loud, write it or sign it."
Language comprehension: "It starts with sound waves hitting your ear or light hitting your retina. You do some basic perceptual crunching of that input to extract a word sequence or utterance. Then the language network parses that, finding familiar chunks in the utterance and using them as pointers to stored representations of meaning."
In both cases, the language network serves as a repository of form-to-meaning mappings—a fluid store that we continuously update throughout our lives.
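To make that division of labor concrete, here is a deliberately tiny Python sketch of a form-to-meaning store: a lexicon maps word forms to pointers, and the meanings those pointers reference live in a separate structure, outside the "network." Everything here (the lexicon, the meaning store, the function names) is an illustrative assumption, not a model from Fedorenko's lab.

```python
# Toy version of the "glorified parser" picture: the lexicon stores
# form-to-meaning mappings and returns pointers; resolving what a
# pointer means happens in a separate store, outside the "network."

MEANING_STORE = {  # stands in for non-linguistic systems (memory, etc.)
    "DOG": "four-legged domestic animal",
    "RUN": "rapid self-propelled motion",
}

LEXICON = {"dog": "DOG", "dogs": "DOG", "runs": "RUN", "ran": "RUN"}

def comprehend(utterance: str) -> list[str]:
    """Comprehension: map a word sequence to pointers into meaning stores."""
    return [LEXICON[w] for w in utterance.lower().split() if w in LEXICON]

def produce(concepts: list[str]) -> str:
    """Production: search the lexicon for forms that express each concept."""
    inverse = {}
    for form, concept in LEXICON.items():
        inverse.setdefault(concept, form)  # keep the first form found
    return " ".join(inverse[c] for c in concepts)

pointers = comprehend("The dog runs")        # ['DOG', 'RUN']
print([MEANING_STORE[p] for p in pointers])  # meaning resolved outside the lexicon
print(produce(["DOG", "RUN"]))               # 'dog runs'
```

The point of the toy is the boundary it draws: both functions only traverse the stored mappings; nothing inside them represents what a dog is or what running feels like.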
The Limits of the Language Network
Fedorenko emphasizes that this system has significant limitations. The language network she has characterized is "memory-limited," she notes, handling chunks of "maybe eight to ten words, max."
This constraint becomes apparent when considering how the network processes both meaningful and nonsensical language. Fedorenko uses Noam Chomsky's famous example of a syntactically correct but semantically nonsensical sentence: "Colorless green ideas sleep furiously."
"You kind of know what it means, but you can't relate it to anything about the world because it doesn't make sense," Fedorenko explains. "We and a few other groups have evidence that the language network will respond just as strongly to those 'colorless green'–type sentences as it does to plausible sentences that tell us something meaningful. I don't want to call it 'dumb,' but it's a pretty shallow system."
This observation leads to a provocative comparison: the language network bears striking similarities to early LLMs.
"Pretty much," Fedorenko confirms when asked if there's essentially an LLM inside everyone's brain. "I think the language network is very similar in many ways to early LLMs, which learn the regularities of language and how words relate to each other. It's not so hard to imagine, right? I'm sure you've encountered people who produce very fluent language, and you kind of listen to it for a while, and you're like: There's nothing coherent there. But it sounds very fluent. And that's with no physical injury to their brain!"
Implications for AI and Cognitive Science
The parallel between the brain's language network and artificial LLMs offers valuable insights for both fields. For AI researchers, it suggests that the human approach to language processing, while biologically constrained, has evolved efficient mechanisms for handling the fundamental tasks of mapping forms to meanings and parsing hierarchical structures.
For cognitive scientists, Fedorenko's work reinforces the idea that language processing, while crucial for human communication, operates as a specialized module distinct from higher-order reasoning. This modular view challenges earlier theories that positioned language as inextricably intertwined with thought itself.
Fedorenko herself acknowledges that this perspective required updating her own beliefs. "When I started [this research], I thought that language is a really core part of high-level thought," she admits. "There was this notion that maybe humans are just really good at representing and extracting hierarchical structures, which of course are a key signature of language, but are also present in other domains like math and music and aspects of social cognition. So I was fully expecting that some parts of this network would be these very domain-general, hierarchical processors. And that just turns out empirically not to be the case."
The Future of Language Research
As AI systems continue to advance, the insights from neuroscience become increasingly valuable. The human language network, with its specialized yet flexible architecture, offers a biological blueprint for more efficient and robust language processing systems.
Fedorenko's ongoing research aims to further characterize this network, exploring questions about how individual cells within it respond to linguistic stimuli and how these responses scale up to produce coherent language understanding and production.
"There's a preprint, from Itzhak Fried's group at UCLA, looking at single cells and finding some of the same properties that we found with [fMRI] imaging and population-level intracranial recordings," Fedorenko notes. "For example, cells will respond to both written and auditory language in similar ways. And the language network is where you would look for those cells."
As we continue to develop increasingly sophisticated AI language models, the biological reality of our own language processing serves as both a guide and a caution. The brain's language network, small, specialized, and running on a tiny fraction of the resources that current AI systems consume, reminds us that true understanding remains an elusive frontier, even for the most advanced artificial systems.
This research ultimately suggests that while we may create machines that can mimic language fluency, the biological machinery that evolved to support human communication continues to hold secrets about the fundamental nature of meaning itself.
Source: This article is based on "The Polyglot Neuroscientist Resolving How the Brain Parses Language" by John Pavlus, originally published in Quanta Magazine on December 5, 2025.