The AI Power Struggle: Why Closed Source Models Threaten Our Digital Future

Startups Reporter

A critical examination of how closed AI systems create dangerous concentrations of power, and why open technical orders are essential for preserving human agency in the age of artificial intelligence.

The debate over artificial intelligence isn't just about technology—it's about who controls the future of human cognition itself. As AI systems grow more powerful, we're witnessing a fundamental tension between two competing visions: one where a handful of corporations and governments become the gatekeepers of machine intelligence, and another where these tools remain open, auditable, and under the control of ordinary people.

The Honorable Intentions Behind AI Development

Many of the brightest minds in artificial intelligence didn't enter the field seeking power or control. They were drawn by genuine intellectual curiosity, the desire to solve hard problems, and the belief that these systems could become historically significant. Some wanted to create broadly useful tools, others to reduce existential risks, and many simply wanted to work on the most consequential technology of their generation.

These motivations are entirely honorable. The issue isn't that everyone working in frontier AI labs is corrupt or malicious. Rather, it's that the institutional structures themselves create predictable pressures toward concentration of power.

The Structural Problem of Concentrated AI

When AI development happens within closed, proprietary systems tied to major cloud providers and state interests, several forms of concentration naturally emerge:

  • Compute concentration: Access to the massive computational resources needed for cutting-edge AI becomes limited to those with enormous capital
  • Talent concentration: The most skilled researchers cluster in a few well-funded organizations
  • Deployment concentration: Decisions about how AI systems are deployed, and to whom, rest with a handful of providers
  • Legitimacy concentration: These entities increasingly claim the authority to make decisions about AI's future

Even well-intentioned people working within these institutions can find themselves contributing to a world where a few firms become the custodians of machine intelligence. This isn't a personal failure—it's a structural inevitability of the current model.

The Custodial Intelligence Model

The emerging default scenario involves a small number of labs, backed by major cloud providers and aligned with state interests, making unilateral decisions about:

  • Which AI systems may be built
  • Who may access them
  • Which capabilities can be released
  • What values get embedded in the systems
  • What forms of dependency the public must accept

This model is often framed as "responsible" or "safe," but in practice it creates a form of custodial intelligence. Extraordinary cognitive power exists, but ordinary people never truly possess it. They become dependent on the goodwill and competence of a small group of gatekeepers.

The Alternative: A Free Technical Order

The alternative isn't recklessness or abandoning safety concerns. Instead, it's building toward what might be called a "free technical order"—a system that takes safety seriously while rejecting permanent dependency as the answer to every problem.

A free technical order would:

  • Embrace multiple centers of capability rather than a single canonical stack
  • Treat openness, auditability, and local control as political goods, not just engineering preferences
  • Assume no entity has earned the right to unilaterally curate the future of machine intelligence
  • Build toward commodity hardware and cheap compute rather than artificial scarcity
  • Ensure real rights to inspect, fine-tune, fork, and refuse
  • Design institutions to prevent permanent concentration of cognitive infrastructure

The Political Nature of AI Architecture

This isn't merely a technical debate about model architectures or training techniques. It's fundamentally about political economy and human agency. The question isn't whether advanced AI should exist—it will. The question is what sort of order it should inhabit.

Closed-source AI systems create what amounts to a new form of feudalism in the digital age. A few lords control the means of cognitive production, while everyone else becomes a dependent vassal, granted access to powerful tools but never true ownership or control.

Moving Beyond the False Dichotomy

The choice isn't between safety and openness, or between responsibility and freedom. A free technical order can and must take safety seriously. The key is recognizing that permanent concentration of power is itself a form of risk—perhaps the greatest risk of all.

By building toward multiple model lineages, open tools where possible, local inference capabilities, and systems designed for auditability and contestability, we can create an AI ecosystem that preserves human agency while still addressing legitimate safety concerns.
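Auditability can be made concrete even at the simplest level: a user who holds model weights locally can verify them against a checksum published by the model's maintainers before running them, with no gatekeeper in the loop. A minimal sketch (the file name and stand-in contents are illustrative, not from any real model release):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming it in chunks
    so even multi-gigabyte weight files never need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Illustrative stand-in for locally downloaded open weights.
weights = Path("model.bin")
weights.write_bytes(b"open weights, locally held")

# In practice this value would come from the maintainers' signed release notes.
published_checksum = sha256_of(weights)

# Verify before use: a mismatch means the file was corrupted or tampered with.
assert sha256_of(weights) == published_checksum, "weights do not match published checksum"
print("checksum verified:", published_checksum[:16], "...")
```

The same pattern generalizes: open releases can ship cryptographic signatures alongside weights, so "the right to inspect" is enforced by the user's own tooling rather than by a provider's goodwill.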

The Path Forward

The tension many feel about AI's future isn't confusion—it's clarity about what's at stake. We're not just building tools; we're building the cognitive infrastructure that will shape human civilization for generations to come.

Those who care about preserving human agency, preventing dangerous concentrations of power, and ensuring that extraordinary cognitive tools remain accessible to ordinary people have a clear imperative: support and build toward a free technical order. This means backing open source initiatives, demanding transparency from AI companies, advocating for policies that prevent monopolistic control, and most importantly, continuing to develop and deploy AI systems that can be run locally, inspected, modified, and controlled by their users.

The future of AI shouldn't be decided by a handful of corporations and governments claiming to act in humanity's best interest. It should be built by a diverse ecosystem of developers, researchers, and users who understand that true safety comes not from concentrated control, but from distributed power and genuine human agency.
