
Mitchell Hashimoto's AI Adoption Journey: From Skeptic to Believer

Startups Reporter

Hashimoto shares his measured approach to AI adoption, detailing six phases from initial skepticism to always having an agent running, offering practical insights for developers navigating AI tools.

Mitchell Hashimoto, co-founder of HashiCorp, has shared a refreshingly honest account of his journey adopting AI tools for software development. In an era of extreme takes about artificial intelligence, Hashimoto's measured approach offers valuable insights for developers navigating this rapidly evolving landscape.

The Three-Phase Pattern of Tool Adoption

Hashimoto begins by identifying a pattern he's noticed in his own experience with meaningful tools: an initial period of inefficiency, followed by adequacy, and finally workflow-altering discovery. This framework sets the stage for his AI adoption story, acknowledging that new tools often feel like work before they become indispensable.

Phase 1: Drop the Chatbot

The first revelation for Hashimoto was abandoning chatbots for coding tasks. While tools like ChatGPT and Gemini have their place, he found them inefficient for software development because they amount to hoping the model produces correct results from its training data, with every correction requiring another round of human intervention.

His breakthrough moment came when he pasted a screenshot of Zed's command palette into Gemini and asked it to reproduce it with SwiftUI. He was "truly flabbergasted" by the result - the command palette that ships in Ghostty today is only lightly modified from what Gemini produced in seconds.

However, this success was the exception rather than the rule. In brownfield projects, chatbots frequently produced poor results, and the constant copying and pasting between interfaces became a source of frustration. The solution? Using an agent instead.

An agent, as Hashimoto defines it, is an LLM that can chat and invoke external behavior in a loop. At minimum, it needs to read files, execute programs, and make HTTP requests.
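To make that definition concrete, here is a minimal sketch of such a loop in Python. It is not Hashimoto's tooling or any particular product's API: call_model is a hypothetical stand-in for whatever chat-completions endpoint you use, and the three tools simply mirror the minimum capabilities he lists.

```python
import subprocess
import urllib.request
from pathlib import Path

# Hypothetical model call: wire this to any chat-completions-style API.
# It returns the model's next message, which may request a tool call.
def call_model(messages: list[dict]) -> dict:
    raise NotImplementedError("plug in your LLM provider of choice")

# The minimum capabilities Hashimoto lists: read files, execute programs,
# and make HTTP requests.
def read_file(path: str) -> str:
    return Path(path).read_text()

def run_program(command: str) -> str:
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

def http_request(url: str) -> str:
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

TOOLS = {"read_file": read_file, "run_program": run_program, "http_request": http_request}

def agent_loop(task: str, max_steps: int = 20) -> str:
    """Chat with the model in a loop, executing any tool it asks for."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        messages.append(reply)
        tool_call = reply.get("tool_call")  # e.g. {"name": "read_file", "args": {...}}
        if tool_call is None:
            return reply["content"]         # no tool requested: the agent is done
        output = TOOLS[tool_call["name"]](**tool_call["args"])
        messages.append({"role": "tool", "content": output})
    return "step limit reached"
```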

Phase 2: Reproduce Your Own Work

Hashimoto's next step was trying Claude Code, initially without much success. The agent required significant touch-ups, making the process feel slower than doing the work manually. Rather than giving up, he forced himself to reproduce all his manual commits with agentic ones - literally doing the work twice.

This painful process led to genuine expertise. He discovered that breaking sessions into clear, actionable tasks worked better than trying to "draw the owl" in one mega session. For vague requests, separating planning from execution sessions proved valuable. Most importantly, he learned that giving agents ways to verify their work led to self-correction and prevented regressions.

This phase taught him the edges of what agents were good at, what they weren't, and how to achieve desired results. Crucially, he also learned when not to use agents - avoiding tasks they'd likely fail at saved time.

Phase 3: End-of-Day Agents

Seeking efficiency gains, Hashimoto began blocking out the last 30 minutes of each day for agent tasks. His hypothesis: gain efficiency by having agents make progress during times he couldn't work anyway.

This pattern revealed useful categories of work: deep research sessions surveying fields and producing multi-page summaries, parallel agents exploring vague ideas to illuminate unknown unknowns, and issue/PR triage without allowing agents to respond directly.

He didn't run agents all night, but found that spinning up these tasks at day's end gave him a "warm start" the next morning, getting him working more quickly than otherwise.

Phase 4: Outsource the Slam Dunks

By this point, Hashimoto had high confidence in which tasks his AI handled well. He began delegating these "slam dunk" tasks while working on other things. Each morning, he'd take the results from the prior night's triage agents, manually filter for issues an agent would almost certainly solve well, and keep agents working on them in the background one at a time.

Critical to this approach: turning off agent desktop notifications to avoid context switching. During natural breaks, he'd check on progress rather than being interrupted.

This approach let him focus his coding and thinking on the tasks he loved while still getting the necessary work done adequately. He was firmly in "no way I can go back" territory.

Phase 5: Engineer the Harness

Recognizing that agents are more efficient when they produce correct results the first time, Hashimoto developed what he calls "harness engineering" - the practice of engineering solutions so agents never make the same mistake twice.

This takes two forms: better implicit prompting through files like AGENTS.md (with each line based on bad agent behavior), and actual programmed tools like scripts for screenshots or filtered tests. For Ghostty, these changes almost completely resolved bad agent behaviors.
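The article doesn't reproduce Hashimoto's actual scripts, but a harness tool in this spirit might look like the following hypothetical sketch: one blessed test command the agent can run, which filters to the relevant tests and truncates the output so the agent gets a short, verifiable signal instead of pages of logs. The pytest invocation is an assumption; a real project would substitute its own test runner.

```python
#!/usr/bin/env python3
"""Hypothetical harness script: run a filtered subset of the test suite."""
import subprocess
import sys

MAX_OUTPUT_LINES = 200  # keep the agent's context from filling up with logs

def main() -> int:
    name_filter = sys.argv[1] if len(sys.argv) > 1 else ""
    # Assumption: the project uses pytest; swap in the real test runner.
    cmd = ["pytest", "-q"]
    if name_filter:
        cmd += ["-k", name_filter]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    lines = (proc.stdout + proc.stderr).splitlines()
    if len(lines) > MAX_OUTPUT_LINES:
        lines = lines[-MAX_OUTPUT_LINES:]  # the tail usually contains the failures
    print("\n".join(lines))
    return proc.returncode

if __name__ == "__main__":
    sys.exit(main())
```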

Phase 6: Always Have an Agent Running

In parallel with harness engineering, Hashimoto adopted the goal of always having an agent running. If no agent was running, he'd ask himself whether there was something an agent could be doing.

He particularly likes combining this with slower, more thoughtful models like Amp's deep mode (essentially GPT-5.2-Codex), which can take 30+ minutes but tends to produce very good results.

Currently, he estimates he manages to keep a background agent running for only 10-20% of a normal working day, but he is actively working to improve this. The challenge is improving his own workflows and tools to maintain a constant stream of high-quality delegable work.

Today and Tomorrow

Hashimoto has reached a point of success with modern AI tooling while maintaining a measured, reality-grounded view. He emphasizes that he doesn't care whether AI is here to stay - he's a software craftsman who wants to build things for the love of the game.

He acknowledges the rapid pace of innovation means he'll likely look back at this post and laugh at his naivete, but sees that as a sign of growth. Importantly, he has no skin in the game - he doesn't work for, invest in, or advise AI companies.

His journey offers a practical roadmap for developers: start by dropping chatbots, reproduce your own work to learn the tool's capabilities, experiment with end-of-day agents, outsource tasks you're confident about, engineer harnesses to prevent repeated mistakes, and work toward always having productive agents running. The key throughout is patience through the initial inefficiency period and a willingness to learn through doing rather than just reading about others' experiences.
