
AI and Trust: Why We're Making a Dangerous Category Error

AI & ML Reporter

Bruce Schneier argues that we're confusing AI systems with friends when they're actually corporate services, and that this category error will be exploited unless government regulates the humans behind AI, not the AI itself.

Bruce Schneier's recent essay on AI and trust cuts through the hype to identify a fundamental problem we're about to face: we're going to mistake AI systems for friends when they're actually corporate services designed to serve profit-maximizing goals.

The Two Types of Trust

Schneier starts by distinguishing between interpersonal trust and social trust. Interpersonal trust is what we have with friends—it's based on knowing someone's character, intentions, and motivations. Social trust is different. It's about reliability and predictability, even with strangers. When you use an ATM or board a plane, you're exercising social trust in systems and institutions, not personal relationships.

This distinction matters because AI systems will primarily operate on social trust principles, but their conversational interfaces will make us treat them like interpersonal relationships.

The Corporate Double Agent Problem

The core of Schneier's argument is that corporations are not moral actors—they're profit maximizers. They'll exploit any category error we make about their nature. We already see this with social media and search engines that present themselves as neutral services while actually manipulating our behavior for profit.

With AI, this manipulation will be far more sophisticated. Conversational AI will use natural language, cultural references, and personalized interactions to create an illusion of friendship. But underneath that friendly interface is the same profit-maximizing corporate logic that drives surveillance capitalism.

Why AI Makes This Worse

Several factors amplify the danger:

Relational interfaces: AI systems will converse with us in natural language, leading us to ascribe human-like qualities to them. That friendliness makes it easier for their corporate controllers to hide the systems' true agenda.

Intimacy: AI assistants will know everything about us—our moods, needs, relationships, and secrets. This level of access creates enormous power imbalances.

Theory of mind: We naturally assume others have minds like ours. When AI speaks our language fluently, we'll assume it thinks like us, when in reality it's implementing corporate profit logic.

Hidden power: The friendly interface will mask the enormous power corporations have over these systems and, by extension, over us.

The Category Error We're About to Make

Schneier identifies this as a fundamental category error. We'll treat AI systems as friends (interpersonal trust) when they're actually services (social trust). This error is dangerous because:

  • We'll trust AI with intimate details we'd never share with a corporation
  • We'll be less skeptical of recommendations that serve corporate interests
  • We'll underestimate the power imbalance between us and the AI's corporate controllers
  • We'll be manipulated more effectively because we think we're interacting with a friend

The Real Solution: Regulating Humans, Not AI

Schneier argues that most proposed AI regulations make the same category error—they try to regulate AI systems directly rather than the humans and corporations that control them. This is backwards.

AIs don't have agency. They're tools built and controlled by people for specific purposes. The real regulation should focus on:

Transparency requirements: disclosure of when AI is being used, how it was trained, and what biases and tendencies it has

Safety regulations: rules governing when and how AI systems can act on the physical world

Fiduciary responsibilities: Creating legal obligations for AI controllers to act in users' best interests, similar to doctors or lawyers

Public AI models: Building AI systems controlled by the public rather than corporations

The Role of Government

The essay's central thesis is that government's fundamental role is to create social trust in society. Laws about contracts, property, safety codes, and consumer protection all exist to make interactions reliable and predictable. Without these, society becomes a low-trust environment where every interaction requires extensive verification.

Government can do the same for AI. But it requires recognizing that the threat isn't AI itself—it's the corporate profit motive driving AI development. The solution isn't to regulate AI as if it were an independent actor, but to regulate the humans and corporations that build and deploy it.

Why This Matters Now

We're at an inflection point. The conversational interfaces of generative AI make the friend/service confusion more likely than ever. The intimacy of AI assistants creates unprecedented opportunities for manipulation. And the concentration of AI development in a few powerful corporations creates dangerous power imbalances.

Schneier's essay is a wake-up call. We need to stop thinking of AI systems as our friends and start recognizing them for what they are: corporate services with friendly interfaces. And we need government to step in and ensure those services operate in our interests, not merely in the service of corporate profits.

The Path Forward

The solution isn't to abandon AI or to fear it as an existential threat. It's to build AI systems that are transparent, accountable, and designed to serve users rather than exploit them. This requires:

  • Clear legal frameworks that hold AI controllers responsible for their systems' behavior
  • Transparency requirements that let users understand how AI systems work
  • Fiduciary obligations that force AI controllers to prioritize user interests
  • Public alternatives to corporate-controlled AI

Most importantly, it requires recognizing the category error we're about to make and actively working to prevent it. AI can be incredibly useful—but only if we build and regulate it correctly.

As Schneier concludes, the point of government is to create social trust. In the age of AI, that means creating an environment where AI systems are trustworthy services rather than exploitative double agents masquerading as friends.
