A Fortune survey shows CEOs reporting no measurable productivity gains from AI, but individual users are finding real benefits through targeted, personalized implementations.
A recent Fortune survey has thousands of CEOs admitting that AI has had no measurable impact on employment or productivity in their organizations. The results are being treated as vindication by skeptics and a crisis by vendors. But I read it and thought: these people are using AI wrong.
I use AI tools every day. Claude helps me write code. OpenClaw handles the kind of loose, conversational thinking I used to do on paper or in my head. Granola transcribes my meetings and a plugin I built pipes the notes straight into Obsidian. My email gets triaged before I look at it. Research gets compiled in minutes instead of hours. This stuff has genuinely changed how I work, and I don't think I could go back.
The CEO survey doesn't prove AI is failing. It proves that most organizations have no idea how to deploy it.
What actually changed
The gains aren't where the enterprise pitch decks said they'd be. Nobody handed me an AI tool that "transformed my workflow" in one go. What happened was slower and more specific: a dozen small frictions disappeared, and the cumulative effect was significant.
Meeting notes are the obvious one. Before Granola, I'd either scribble while half-listening or pay attention and try to reconstruct things afterwards from memory. Both were bad. Now the transcript happens in the background, a summary lands in my Obsidian vault automatically, and I can actually be present in the conversation. That's 20 minutes a day I got back, every day, without thinking about it.
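To give a sense of the shape of it, here's a minimal sketch (not my actual plugin) that assumes the transcription tool drops Markdown summaries into a local export folder, and simply copies anything new into the vault with a dated filename. The folder paths and naming are stand-ins for whatever your setup uses:

```python
#!/usr/bin/env python3
"""Minimal sketch: sweep exported meeting summaries into an Obsidian vault.

Assumes the transcription tool writes Markdown summaries to EXPORT_DIR.
A real plugin would add frontmatter, tags, and backlinks; this is just the core move.
"""
from datetime import date
from pathlib import Path
import shutil

EXPORT_DIR = Path.home() / "Granola" / "exports"   # hypothetical export folder
VAULT_DIR = Path.home() / "Obsidian" / "Meetings"  # where notes should land

def sweep() -> None:
    VAULT_DIR.mkdir(parents=True, exist_ok=True)
    for src in sorted(EXPORT_DIR.glob("*.md")):
        dest = VAULT_DIR / f"{date.today().isoformat()} {src.name}"
        if dest.exists():
            continue  # already imported on a previous run
        shutil.copy2(src, dest)
        print(f"imported {src.name} -> {dest.name}")

if __name__ == "__main__":
    sweep()
```

Run it on a timer, or from a file-watcher, and the notes just appear where you think.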
Code generation changed my relationship with side projects entirely. I've shipped things this year that I simply wouldn't have started before: small tools, automations, scripts that solve a specific problem in an afternoon instead of a weekend. The AI doesn't write production-quality code on its own, but it gets me from "I know what I want" to "I have something running" in minutes instead of hours. That speed difference matters. It's the difference between "I'll build that someday" and actually building it.
Summarizing long documents, compiling research, triaging email: none of these are exciting. But they used to eat real time. Now they don't. The compound effect of reclaiming 30 or 40 minutes across a day is that my actual focus hours go further.
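To make "triaged before I look at it" concrete, here's a toy sketch assuming an OpenAI-compatible chat-completions endpoint with an API key in the environment. The labels, the model name, and the hard-coded inbox are all stand-ins; the point is how small the moving parts are once you've decided exactly what job the model is doing:

```python
#!/usr/bin/env python3
"""Toy sketch of email triage: ask a model to bucket each message."""
from openai import OpenAI

LABELS = ["urgent", "needs-reply", "read-later", "ignore"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage(subject: str, snippet: str) -> str:
    """Return one label for a message; fall back to 'read-later' on junk output."""
    prompt = (
        f"Classify this email into exactly one of {LABELS}.\n"
        f"Subject: {subject}\nBody excerpt: {snippet}\n"
        "Answer with the label only."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    label = reply.choices[0].message.content.strip().lower()
    return label if label in LABELS else "read-later"

if __name__ == "__main__":
    inbox = [
        ("Invoice overdue", "Payment was due last Friday..."),
        ("Newsletter #42", "This week in distributed systems..."),
    ]
    for subject, snippet in inbox:
        print(f"{triage(subject, snippet):>12}  {subject}")
```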
Why the survey got it wrong
The CEO survey is measuring organizational productivity, which is a completely different thing from individual productivity. Most companies deployed AI by buying enterprise licenses and hoping for the best. Copilot seats for every developer. ChatGPT access for every department. No training, no workflow integration, no clarity on what problems the tools were supposed to solve.
That's not an AI failure. That's a deployment failure.
It's a silly analogy, but you wouldn't buy everyone in the company a piano and then wonder, a month later, why they aren't all musicians. That's essentially what happened with AI in most organizations.
The productivity gains I've found came from figuring out, through months of trial and error, exactly where AI fits into my specific workflow. Not the generic "write me an email" stuff. The narrow, targeted things: transcription, code scaffolding, document summarization, research triage. Each one required experimentation to get right. Most people in most companies haven't done that work, and their employers aren't helping them do it.
There's also a measurement problem. My 20 minutes saved on meeting notes doesn't show up in a quarterly report. The side project I shipped in a day instead of a week doesn't register as a productivity metric. The compounding effect of less friction across dozens of small tasks is invisible to anyone looking at spreadsheets.
CEOs are looking for step-change improvements because that's what they were sold. The actual gains are granular and personal, which makes them hard to count and easy to dismiss.
The uncomfortable bit
None of this is free. Every AI tool that makes me more productive does so by ingesting my work. My meeting transcripts, my code, my half-formed ideas, my entire stream of consciousness on a given day: all of it flows through systems I don't own and can't audit.
I've spent the past year moving away from surveillance platforms. I replaced Google Photos with Ente, Gmail with Migadu, WhatsApp with Signal. I run my own XMPP server. I self-host my password manager. And yet I willingly feed more context into AI tools each day than Google ever passively collected from me.
It's a contradiction I haven't resolved. The productivity gains are real enough that I'm not willing to give them up, but the privacy cost is real too, and I notice it.
For companies putting their entire workforce's output through third-party AI, the data governance implications are enormous. Most organizations haven't thought about this seriously, which is another reason the CEO survey results look the way they do: they adopted the tools without understanding what they were trading.
I've settled into an uneasy position: AI for work where the productivity gain justifies the privacy cost, strict boundaries everywhere else. It's not philosophically clean. It's just honest.
The real gap
The gap isn't between AI's potential and its capability. The tools are good enough. The gap is between having access to AI and knowing how to use it well. That's an individual skill, built through experimentation, and it doesn't scale the way enterprise software purchases do.
I'll keep using these tools. They've made me measurably more productive in ways I can point to: time saved, projects shipped, focus protected. The CEOs in that survey aren't wrong about what they're seeing in their organizations. They're just wrong about what it means.
AI hasn't failed. Most companies just haven't figured it out yet.
