Recurse Center's AI Manifesto: How Programmers Can Navigate the LLM Revolution Without Losing Their Edge
As AI reshapes software development, programming communities face existential questions: Do large language models (LLMs) enhance or undermine skill development? How should technical education adapt? For Recurse Center (RC)—a unique retreat where programmers learn through self-directed projects and peer collaboration—these aren't theoretical debates. They're operational realities. In a candid new blog post, co-founder Nick Bergson-Shilcock details how RC crafted its AI philosophy by tapping its greatest asset: a diverse, thoughtful alumni network.
The Advisory Group: Mining Wisdom from 3,000 Programmers
RC assembled an informal AI advisory group representing varied demographics, seniority levels, industries, and—critically—divergent views on AI. Alumni included LLM skeptics, enthusiasts, and those in between, from early-career developers to veterans with decades of experience. Their insights revealed stark contrasts:
- Wildly differing assessments of current LLM utility: One alum described agents like Claude Code as transformative, handling pull requests from non-technical colleagues. Another dismissed them as "extremely confident and almost always wrong." Factors like programming domain (web apps vs. systems-level C), codebase size, and recency of LLM experimentation heavily influenced perspectives.
- Learning modes over absolutes: Many advocated context-specific use, like switching between "shipping mode" (LLM-heavy) and "learning mode" (LLM-free). One alum’s analogy resonated widely:
> "LLMs are like e-bikes. If your goal is speed, they help. If your goal is building strength, they won’t. Relying on them robs you of deep engagement essential for growth." - Social ripple effects: While some found LLMs useful for cross-stack pairing, others feared they might reduce community interaction. As one alum warned, "Asking an LLM sends energy into a void instead of the community."
Core Principles: Volition, Rigor, and Generosity Amidst AI Chaos
Cutting through the noise, RC distilled three "self-directives" from its unschooling-inspired ethos:
- Work at the edge of your abilities: Growth happens at the boundary of knowledge. LLMs can expand this edge but risk creating gaps between what you produce and what you understand. Rigor—verifying outputs, dissecting logic—becomes non-negotiable.
- Build volitional muscles: Your agency to choose goals and paths defines meaningful work. LLMs excel at answers but fail at discerning what matters. Bergson-Shilcock writes: "Use AI to amplify ambition, not abdicate agency."
- Learn generously: Community is RC's bedrock. Transparency about AI experiments (successes and failures) and openness to differing viewpoints foster collective growth. As Bergson-Shilcock notes, "Asking questions helps the person you’re asking, too."
[Photo: Nick Bergson-Shilcock, co-founder of Recurse Center.]
The Unshakeable Verdict: Mental Models Trump Generated Code
Across all interviews, one truth emerged: Understanding systems deeply remains paramount. An alum using Claude daily stressed that mental models of OS internals or network protocols are "as valuable as ever." A systems programmer added:
"Fluidity across abstraction layers defines great programmers. LLMs can’t do this yet—and may never replicate human discernment."
This echoes John Holt, whose unschooling philosophy underpins RC: "We cannot give others our mental structures; they must build their own." In an AI-saturated world, RC’s stance is a clarion call: Tools change, but the fundamentals of learning—curiosity, critical engagement, and human connection—endure. For developers navigating the LLM upheaval, prioritizing these isn’t just educational; it’s professional survival.
Source: Developing our position on AI by Nick Bergson-Shilcock (Recurse Center Blog, July 2025).