Why Equating Intelligence with Power Misleads AI Safety Discussions
#AI

AI & ML Reporter

A critical look at the common habit of defining intelligence as the ability to achieve goals across domains, and why that definition breaks down when tested against historical leaders and hypothetical artificial superintelligence alike. The article separates raw problem‑solving skill from the institutional and social mechanisms that grant power, and shows why that distinction matters for AI governance and development.


“Intelligence is the ability to achieve your goals across a wide variety of domains.” – a definition that sounds sensible until you test it against real‑world leaders and the kinds of AI we are building today.


What the definition claims

The quoted definition reduces intelligence to a single metric: goal achievement. Under that view, anyone who can steer large, complex systems toward personal objectives, whether a 20th‑century dictator or a modern CEO, ranks as every bit as intelligent as a mathematician who proves a new theorem.

What’s actually new (or not)

In the AI community, the term superintelligence has been used to describe a system that outperforms humans on every intellectually demanding task. Recent large‑scale language models such as GPT‑4o (OpenAI) and Claude 3 Opus (Anthropic) have demonstrated impressive capabilities in coding, reasoning, and natural‑language interaction, but they remain tool‑like—they excel at specific tasks when prompted, yet they lack autonomous agency.

What most researchers care about is instrumental power: the capacity of an AI system to acquire resources, influence decision‑makers, or modify its own architecture. This is distinct from the raw problem‑solving ability captured by benchmark scores such as 85% on MMLU or 70% on HumanEval. A model can be a superb programmer without being able to coerce a corporation into giving it unrestricted compute.
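For concreteness, a figure like "70% on HumanEval" is a pass-rate estimate over sampled generations, nothing more. Below is a minimal sketch of the unbiased pass@k estimator introduced alongside HumanEval (Chen et al., 2021); the sample counts are illustrative values, not real results.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples drawn from n generations (c of them correct) passes the tests.
    Follows the formulation from the HumanEval paper (Chen et al., 2021)."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws without a correct sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative numbers only: 200 samples per problem, 140 of them correct
print(f"pass@1  = {pass_at_k(200, 140, 1):.2f}")   # 0.70
print(f"pass@10 = {pass_at_k(200, 140, 10):.2f}")  # close to 1.0
```

A score like this certifies coding competence under controlled sampling; it says nothing about the model's ability to acquire resources or sway institutions, which is exactly the gap the definition above papers over.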

Why the conflation is misleading

  1. Power is a social construct – Authority, legitimacy, and network effects are the real levers of influence. Stalin, Trump, or Xi Jinping achieved their positions not merely because of personal cognition but because they commanded institutions that could mobilize millions. A purely cognitive system without access to such institutions cannot translate its reasoning power into real‑world control.

  2. Empirical correlation is weak – Studies of individual IQ versus income find only modest links (correlation ≈ 0.3, meaning IQ explains roughly 9% of income variance; a numerical sketch follows this list). By contrast, national‑level cognitive measures correlate more strongly with aggregate outcomes because they affect collective problem‑solving capacity, not just personal ambition.

  3. AI development pipelines focus on narrow competence – Current training pipelines reward performance on benchmarks (e.g., code generation, math reasoning). They do not reward the ability to negotiate with regulators, build political coalitions, or exert coercive influence. Hence, the most capable AI today is still far from the kind of power‑seeking agent some safety narratives assume.

  4. Strategic games distort intuition – Games like Go or Diplomacy isolate strategic reasoning from the messy, trust‑based coordination required in real economies. A system that masters Go does not automatically learn how to secure a seat on a board of directors.
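To make the weak-correlation claim in point 2 concrete, here is a small synthetic illustration (the r = 0.3 target is taken from the figure cited above; the data are generated, not drawn from any study). A correlation of 0.3 leaves roughly 91% of the variance unexplained:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
r = 0.3  # assumed individual-level cognition-vs-outcome correlation

# Build two standardized variables with the target correlation
cognition = rng.standard_normal(n)
outcome = r * cognition + np.sqrt(1.0 - r**2) * rng.standard_normal(n)

observed = np.corrcoef(cognition, outcome)[0, 1]
print(f"observed correlation: {observed:.2f}")          # ~0.30
print(f"variance explained (r^2): {observed**2:.1%}")   # ~9%
```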

Practical implications for AI governance

  • Focus on access control – Limiting who can deploy high‑capacity models and under what conditions remains more effective than trying to “make the model less intelligent.” A toy sketch of such a deployment gate follows this list.
  • Invest in institutional resilience – Democracies and corporations should develop clear protocols for AI‑augmented decision‑making, ensuring that a model’s suggestions are vetted rather than blindly executed.
  • Measure influence, not just competence – New evaluation suites are emerging (e.g., AI‑Impact Benchmarks) that test a model’s ability to persuade, negotiate, or coordinate with humans. These metrics better capture the pathways through which an AI could acquire power.
  • Separate research tracks – Continue advancing raw capabilities (larger context windows, multimodal reasoning) while simultaneously researching alignment techniques that specifically address instrumental convergence—the tendency of competent agents to seek resources.
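As a toy illustration of the access‑control point above, a governance gate conditions deployment on who is asking and under what review, not on how capable the model is. All names and thresholds here are hypothetical placeholders, not drawn from any real deployment policy:

```python
from dataclasses import dataclass

# Hypothetical policy values, for illustration only
APPROVED_DEPLOYERS = {"lab-a", "vendor-b"}
MAX_UNREVIEWED_GPU_HOURS = 10_000

@dataclass
class DeploymentRequest:
    model_id: str
    requester: str
    gpu_hours: int
    safety_review_passed: bool

def authorize(req: DeploymentRequest) -> tuple[bool, str]:
    """Decide on deployment from institutional criteria, not benchmark scores."""
    if req.requester not in APPROVED_DEPLOYERS:
        return False, "requester not on approved list"
    if not req.safety_review_passed and req.gpu_hours > MAX_UNREVIEWED_GPU_HOURS:
        return False, "large-scale deployment requires a safety review"
    return True, "approved"

ok, reason = authorize(DeploymentRequest("model-x", "lab-a", 50_000, True))
print(ok, reason)  # True approved
```

The design point is that every check inspects the requester and the conditions of use; the model's capability profile never enters the decision.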

Limitations of the current view

Even with a clearer separation between intelligence and power, several uncertainties remain:

  • Emergent coordination – A swarm of moderately capable models could collectively influence markets or public opinion, even if no single model has agency.
  • Policy lag – Legal frameworks often trail technological progress, creating windows where powerful AI tools can be deployed with insufficient oversight.
  • Human‑in‑the‑loop dynamics – Operators may over‑trust a model’s output, effectively granting it de facto authority. Understanding these psychological factors is an open research area.

Bottom line

Treating intelligence as synonymous with power obscures the real challenges of AI safety. The most dangerous AI systems are not necessarily the ones that score highest on abstract reasoning benchmarks, but the ones that can embed themselves within existing power structures. By disentangling raw cognitive ability from the social mechanisms that confer influence, researchers, policymakers, and technologists can target the true vectors of risk.


For a deeper dive into the distinction between capability and influence, see the recent whitepaper from the Center for AI Safety, “Beyond Benchmarks: Measuring AI Impact.”
