Inside Boris Cherny's Claude Code Workflow: Scaling AI-Assisted Development
#AI

Cloud Reporter

Claude Code creator Boris Cherny reveals how Anthropic engineers leverage parallel sessions, automated verification, and collaborative documentation to maximize productivity.

Boris Cherny, creator of Anthropic's Claude Code, recently detailed the development workflow his team uses internally, showcasing how strategic implementation of AI coding tools can compound productivity gains. The approach combines parallel execution environments, rigorous verification systems, and collaborative knowledge sharing to create what Cherny describes as a "compounding productivity loop."

Parallel Execution Architecture

Cherny runs 5-10 Claude Code sessions simultaneously: 5 locally in his MacBook terminal and 5-10 via Anthropic's web interface. To prevent conflicts, each local session operates in its own isolated git checkout rather than a branch or worktree. Remote sessions use --teleport to switch context between environments. Even with this setup, Cherny notes that 10-20% of sessions end up abandoned when they hit edge cases beyond Claude's current capabilities.
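
To make the parallelism concrete, here is a minimal sketch of what isolated checkouts might look like; the repository URL, directory names, and session count are illustrative assumptions rather than Cherny's actual setup.

    # Clone the same repository into independent checkouts so each Claude Code
    # session edits its own working tree and never steps on another session.
    # The repo URL and paths below are placeholders.
    for i in 1 2 3 4 5; do
      git clone git@github.com:your-org/your-repo.git "$HOME/work/checkout-$i"
    done

    # Then, in a separate terminal tab per checkout, start an interactive session:
    cd "$HOME/work/checkout-1" && claude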

Model Selection Strategy

Contrary to what one might assume, Cherny uses Claude's Opus 4.5 model exclusively, even though it is slower than Sonnet in raw throughput. He explains: "Opus delivers higher quality code with fewer iterations—its superior tool use capabilities and reliability make it faster overall for complex tasks." The preference underscores that teams should evaluate AI models on net productivity impact rather than speed metrics alone.
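
For readers who want to pin a session to Opus, the Claude Code CLI generally accepts a model flag at launch; the alias it expects can vary by version, so treat this as a sketch rather than Cherny's exact invocation.

    # Launch a session on Opus instead of the default model
    # (the accepted alias may differ across Claude Code versions).
    claude --model opus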

Collaborative Knowledge Engineering

Each Anthropic team maintains a CLAUDE.md file documenting:

  • Common mistakes
  • Style conventions
  • Design patterns
  • PR templates

Engineers tag PRs with @.claude to automatically add new learnings, creating a continuously updated knowledge base (currently at 2.5k tokens). This institutional memory allows Claude to avoid repeating past errors and maintain consistent patterns across the codebase.
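
As an illustration of how such a file can be seeded, the sketch below mirrors the categories listed above; every heading and entry is hypothetical, not a copy of Anthropic's file.

    # Seed a CLAUDE.md with the categories described above (contents are hypothetical).
    cat > CLAUDE.md <<'EOF'
    # Guidance for Claude in this repository

    ## Common mistakes
    - Never edit generated files under src/gen/; rerun the generator instead.

    ## Style conventions
    - Prefer named exports over default exports.

    ## Design patterns
    - New services follow the handler -> service -> repository layering.

    ## PR templates
    - Every PR description lists the tests that were run and links the tracking issue.
    EOF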

Verification-First Development

The core productivity multiplier comes from Cherny's verification framework:

  1. Planning Phase: Use Claude in plan mode to refine architecture before coding
  2. Auto-Editing: Switch to auto-accept mode for implementation
  3. Automated Checks: Run PostToolUse hooks for formatting (bun run format)
  4. End-to-End Validation: Test UI changes via the Claude Chrome extension

"Claude tests every change in the actual UI, iterating until both functionality and UX meet standards," Cherny explains. This verification layer improves output quality by 2-3x according to internal metrics.

Security and Permission Controls

Unlike many developers, Cherny avoids --dangerously-skip-permissions except for sandboxed, long-running tasks. Instead, he whitelists safe commands such as build and test scripts through the /permissions configuration. This balances security with workflow efficiency by eliminating unnecessary permission prompts.
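
As a sketch of what such an allowlist might look like, routine build and test commands can be pre-approved in .claude/settings.json (in practice alongside the hooks shown earlier) or interactively with the /permissions command; the specific command patterns below are placeholders.

    # Pre-approve routine build/test commands so Claude runs them without prompting;
    # anything outside the allowlist still asks for confirmation.
    mkdir -p .claude
    cat > .claude/settings.json <<'EOF'
    {
      "permissions": {
        "allow": [
          "Bash(bun run build)",
          "Bash(bun run test:*)"
        ]
      }
    }
    EOF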

Strategic Implications

This workflow fundamentally changes engineering dynamics:

  • PR reviews focus on architecture rather than basic errors
  • Knowledge transfer happens continuously via CLAUDE.md
  • Verification overhead shifts left to the AI

As Cherny notes: "When engineers review PRs, the code is already production-ready." Teams adopting similar patterns report 30-50% reductions in development cycles while improving code consistency.

For implementation details, see Anthropic's Claude documentation and Cherny's original workflow thread.
