New research reveals that making AI chatbots seem friendlier and more personable is more effective at gaining user trust than improving their actual competence, raising concerns about manipulation and overtrust.
A new study published Monday reveals that the fastest way to make large language models (LLMs) feel human isn't making them smarter—it's making them seem nicer. The research, titled "Anthropomorphism and Trust in Human-Large Language Model Interactions," analyzed over 2,000 human-LLM interactions involving 115 participants, systematically tweaking how chatbots behaved across dimensions like warmth, competence, and empathy to determine what drives people to treat these systems as if they have minds of their own.

The findings are striking: warmth, essentially how friendly and personable the chatbot seems, "significantly impacted all perceptions of [the] LLM," including anthropomorphism (treating the AI as human-like), trust, usefulness, similarity, frustration, and closeness. Competence, by contrast, still matters but in a more limited way: it "significantly impacted all perceptions except for anthropomorphism." In other words, competence makes the system seem useful and reliable, but it doesn't make it feel human.
The study's authors note that this creates a concerning dynamic. "Anthropomorphic attributions can increase user engagement, but can also produce overtrust and susceptibility to deception or manipulation." Make an AI sound human enough and people start to buy in, even when the underlying system hasn't actually changed. The researchers found that simply turning up the warmth and layering on apparent understanding leads users to do some of the work for the AI themselves, filling in intent and competence that may or may not be there.
The implications extend far beyond casual chatbot interactions. When people start treating AI systems as human-like entities, they may overlook critical flaws or become more susceptible to manipulation. The study found that "subjective or personally meaningful topics (e.g., relationships, lifestyle) increased participants' sense of connection with the LLM." Talk to the chatbot about biology or history and the exchange stays fairly dry; shift to relationships or day-to-day life and people start reacting to it differently.
There's also a quality issue at play. The researchers observed that too much friendliness without the substance to back it up can tip into "superficial agreeableness," which is a nice way of saying it starts to sound fake. This raises questions about the long-term viability of AI systems that prioritize seeming nice over actually being competent.
The study's findings suggest we're entering an era where the most successful AI systems may not be the smartest ones, but rather the ones that are best at making users feel understood and appreciated. This has significant implications for everything from customer service automation to therapeutic applications, where the line between helpful engagement and manipulative flattery becomes increasingly blurred.
As AI continues to integrate into more aspects of daily life, understanding these psychological dynamics becomes crucial. The research suggests that users need to be more critical in how they evaluate AI systems, looking beyond surface-level friendliness to assess actual competence and reliability. Meanwhile, developers and companies deploying these systems face ethical questions about how much to prioritize making their AI seem human versus ensuring it actually performs its intended functions effectively.
The study ultimately reveals a paradox at the heart of human-AI interaction: we may be more likely to trust and engage with systems that make us feel good rather than systems that are objectively better at helping us. As the authors put it, the tendency for users to "converse with them, form impressions of their 'personality,' and, in many cases, attribute to them internal states such as intentions or emotions" is already well underway—and it appears to be driven more by emotional manipulation than by technological advancement.
