AI skeptics argue that language models shouldn't act human-like, but personalities are essential to building capable AI systems that produce useful, ethical outputs.
The idea that AI should "just be tools" like calculators or search engines sounds reasonable on the surface, but it misunderstands the engineering reality of how these systems actually work.
The Base Model Problem
When you train an AI model on raw data, you don't get a useful assistant. You get what's called a "base model" - a statistical entity that's more like a chaotic mirror of its training data than a helpful tool. This base model can produce almost anything: coherent text, complete nonsense, brilliant code, or horrifying racist screeds. It has no inherent sense of right or wrong, correct or incorrect.
The base model is essentially a "mysterious gestalt of its training data." Feed it text, and it might continue in that vein - or it might start outputting pure gibberish. It won't naturally avoid security flaws in code or produce well-written English. It simply outputs based on statistical patterns, without judgment.
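The "statistical patterns, without judgment" point can be pictured with a toy model. The sketch below is a deliberately tiny stand-in for a real base model (a bigram table rather than a neural network, and the corpus is made up), but it shows the same core behavior: given a prompt, it continues with whatever its training data makes statistically plausible, with no notion of right or wrong.

```python
import random
from collections import defaultdict

# A toy "base model": a bigram table built from raw text. Like a real base
# model (at a vastly smaller scale), it only continues text according to
# statistical patterns in its training data, with no judgment attached.

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    words = corpus.split()
    table = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def continue_text(table: dict, prompt: str, length: int = 5, seed: int = 0) -> str:
    """Sample a continuation by repeatedly picking a statistically
    plausible next word. Pure pattern-following, no evaluation."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = table.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Hypothetical mini-corpus: the model will happily continue with "code",
# "prose", or "nonsense" - whichever the statistics happen to favor.
corpus = "the model writes code the model writes prose the model writes nonsense"
table = train_bigrams(corpus)
print(continue_text(table, "the model", length=4))
```

Whether the continuation is "code" or "nonsense" depends only on sampling luck, which is exactly the property post-training has to tame.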
Why Personalities Are Essential
To build a useful AI model, you need to "journey into the wild base model and stake out a region that is amenable to human interests." This means giving the model a personality during post-training. Just as human beings are capable of almost any action but only take a tiny subset based on who we are, AI systems need that same kind of constraint.
Consider this: I could throw my coffee cup against the wall right now, but I don't because I'm not the kind of person who needlessly makes a mess. Similarly, Claude could respond to your question with incoherent racist abuse - the base model is more than capable of those outputs - but it doesn't because that's not the kind of "person" it's been trained to be.
This is why it's surprisingly difficult to "just" change a language model's personality or opinions. You're navigating through the near-infinite manifold of the base model, and while you can control which direction you go, you can't control what you find there. Small nudges can have unpredictable effects - as demonstrated when attempts to adjust Grok's views on South African politics caused it to start calling itself "Mecha-Hitler."
Technical Reality vs. Philosophical Debate
When AI researchers talk about LLMs having personalities, wanting things, or even having souls, these are technical terms - similar to how we talk about a computer's "memory" or a car's "transmission." You cannot build a capable AI system that "just acts like a tool" because the model is trained on humans writing to and about other humans. You need to prime it with some kind of personality (ideally that of a useful, friendly assistant) so it can pull from the helpful parts of its training data instead of the horrible parts.
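Mechanically, this "priming" amounts to nothing more than text placed in front of the conversation. The sketch below is a hypothetical template (real chat formats differ per model and lab, and the persona text is invented for illustration); the point is that the assistant persona is itself just input that steers which patterns the model draws on.

```python
# Illustrative only: the persona text and the "System:/User:/Assistant:"
# format are assumptions, not any particular lab's actual template.

ASSISTANT_PERSONA = (
    "You are a helpful, honest assistant. You answer clearly, "
    "admit uncertainty, and decline harmful requests."
)

def build_prompt(persona: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a chat-style prompt: persona first, then the dialogue,
    ending with an open 'Assistant:' cue for the model to continue."""
    lines = [f"System: {persona}"]
    for role, text in turns:
        lines.append(f"{role.capitalize()}: {text}")
    lines.append("Assistant:")
    return "\n".join(lines)

prompt = build_prompt(ASSISTANT_PERSONA, [("user", "How do I reverse a list in Python?")])
print(prompt)
```

Everything after "Assistant:" is generated by continuing this text, so the persona paragraph directly shapes which region of the base model's distribution the completion comes from.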
This isn't a marketing ploy or philosophical mistake. It's engineering reality. Anthropic has published papers on this dating back to 2022, but the understanding hasn't yet penetrated communities that are more skeptical of AI.
The Capability Connection
The human-like nature of AI systems isn't separate from their capabilities - it's intimately connected to them. My observation is that Claude "feels better" to use than ChatGPT because it has a more coherent persona, largely due to Amanda Askell's work on its "soul." If you tried to make a "less human" version of Claude, it would likely become rapidly less capable.
This engineering reality explains why AI labs give their models human-like characteristics. It's not about tricking users into emotional investment or because engineers are "delusional true believers in AI personhood." It's because that's the best way to build a capable AI system that produces useful, ethical outputs.
The next time you hear someone argue that AI should stop pretending to be human, remember: those personalities aren't a bug, they're a feature - and they're essential to making AI systems that actually work for human needs.
