LLMorphism: When Humans Come to See Themselves as Language Models
#AI

AI & ML Reporter
4 min read

A new paper introduces "LLMorphism," the potentially problematic tendency to view human cognition through the lens of large language models, and examines how this bias emerges and what it implies for society.

In a newly published paper, researcher Valerio Capraro introduces "LLMorphism," a cognitive bias that may emerge as large language models become increasingly prevalent in daily life. The paper, available on arXiv, explores how people might begin to view human cognition through the architecture and limitations of LLMs, potentially distorting our understanding of human thought and consciousness.

What is LLMorphism?

LLMorphism is defined as the biased belief that human cognition fundamentally works like a large language model. This bias stems from a reverse inference: when artificial systems produce human-like language, people may conclude that if LLMs can speak like humans, perhaps humans think like LLMs.

The author argues this inference is problematic because similarity at the level of linguistic output does not necessarily imply similarity in underlying cognitive architecture. Just because an LLM can generate coherent text doesn't mean human thinking operates through the same mechanisms of statistical prediction and pattern matching.

The Psychological Availability of LLMorphism

What makes this bias particularly relevant today is the rise of increasingly sophisticated conversational LLMs. Systems like GPT-4 and Claude now produce language that is often indistinguishable from human writing. This creates a psychological environment in which the LLM-as-mind metaphor becomes more readily available.

The paper identifies two mechanisms through which LLMorphism may spread:

  1. Analogical transfer: Features of LLMs are projected onto humans, leading people to interpret human cognition through the lens of artificial neural networks. For example, viewing human memory as a retrieval system similar to a vector database or human thought as a process of statistical prediction.

  2. Metaphorical availability: LLM terminology becomes culturally salient for describing thought. We may start describing human creativity as "prompt engineering," learning as "fine-tuning," or decision-making as "token prediction."

Capraro carefully distinguishes LLMorphism from several related concepts:

  • Mechanomorphism: The tendency to view non-mechanical entities through mechanical metaphors. LLMorphism is more specific, focusing on language model architectures.

  • Anthropomorphism: Attributing human characteristics to non-human entities. LLMorphism works in reverse, attributing machine characteristics to humans.

  • Computationalism: The philosophical view that cognition is fundamentally computational. While related, LLMorphism is more specific and potentially more biased, assuming the particular architecture of LLMs.

  • Dehumanization and objectification: These involve denying human qualities to people, while LLMorphism involves inappropriately applying machine-like qualities to humans.

  • Predictive processing theories of mind: These are legitimate scientific theories about how the brain works, not the oversimplified architectural assumptions of LLMorphism.

Implications Across Domains

The paper explores several domains where LLMorphism may have significant consequences:

Work and Education

In workplace settings, LLMorphism might lead to devaluing human skills that don't fit the LLM paradigm, such as embodied knowledge, intuitive understanding, or creative leaps that don't follow predictable patterns. Education systems might overemphasize prompt engineering and statistical thinking while neglecting other forms of cognition.

Responsibility and Healthcare

When viewing humans as LLM-like systems, questions of moral responsibility become complicated. If we see humans as simply "generating responses" based on training data, concepts of free will and moral agency may be undermined. In healthcare, this could lead to oversimplified models of mental health that focus solely on "pattern matching" rather than the complex embodied, social, and historical dimensions of human psychology.

Communication and Creativity

LLMorphism might affect how we understand communication itself. If we view language as merely statistical prediction, we might neglect the embodied, emotional, and relational aspects of human communication. Similarly, creativity might be reduced to "pattern combination" rather than seen as involving genuine insight, emotion, and experience.

Human Dignity

Perhaps most concerning, the paper suggests that LLMorphism poses risks to human dignity. By viewing ourselves as sophisticated pattern-matching machines, we may lose sight of what makes humans unique: our capacity for meaning-making, ethical reasoning, and consciousness itself.

Limitations and Resistance

The paper acknowledges limitations to the LLMorphism thesis. Not everyone will succumb to this bias, and cultural, educational, and individual differences may influence susceptibility. Capraro also discusses forms of resistance, including:

  • Developing critical media literacy about AI
  • Emphasizing embodied and situated cognition in education
  • Creating alternative metaphors for human thought
  • Highlighting the differences between human and artificial cognition

The Missing Half of the Debate

The paper's most significant contribution may be its observation that the public debate about AI has focused primarily on one concern: whether we are attributing too much mind to machines. Capraro argues we should be equally concerned about the reverse: whether we are beginning to attribute too little mind to humans.

As LLMs become more sophisticated and integrated into our lives, the temptation to understand ourselves through their architecture grows. This paper serves as an important reminder that while these systems are powerful tools, they should not become the lens through which we view the complexity and richness of human cognition.

The full paper, "LLMorphism: When humans come to see themselves as language models," is available on arXiv and provides a thorough examination of this emerging cognitive bias and its potential implications for how we understand ourselves in an age of increasingly human-like AI.
