Claude Code's new /insights command generates eerily human-like feedback about your AI usage patterns, offering both concrete suggestions and uncomfortable truths about your work habits.
When I first ran Claude Code's new /insights command, I wasn't prepared for how much it would feel like sitting down with a highly trained manager who'd been quietly observing my work for weeks. The command, which generates a report about your Claude usage patterns, delivers feedback that's simultaneously helpful and unsettling in its accuracy.

The report hit uncomfortably close to home in several ways. First, it correctly identified me as a browser automation specialist - which makes sense since I've been scripting user flows and asking Claude to run through them in Chrome. These sessions tend to be long-running and resource-intensive, and they clearly skew my metrics in ways that Claude Code noticed.
But the most striking feedback was about conversation abandonment. Claude gently pointed out that I abandon too many conversations before getting concrete results. This felt exactly like something a human manager would say - and that's both the strength and the limitation of the feature.
Here's where it gets interesting: I'm not convinced Claude Code is right about this. In some cases, I definitely set up projects incorrectly and failed to give Claude the tooling it needed. But in other cases, I was simply exploring - testing ideas, running experiments, and moving on when they didn't pan out. From my perspective, abandoning a lot of these threads seems optimal. I've been using Codex to "hit singles" - focusing on small, achievable wins rather than grand slam projects - and I'm comfortable with that approach.
Yet the feedback still stung because it was delivered with that perfect managerial tone: constructive, specific, and just critical enough to make you think. I'm now in the process of chatting with Claude about the report, trying to understand why it made those recommendations while also explaining why my work style has benefits that Claude might be undervaluing.
This is the future of work skill we all need to start practicing: using AI feedback to get better while also learning to justify your work's value to AI. It's a two-way conversation, and it requires a level of self-awareness and communication that most of us aren't used to applying to our interactions with software.
The concrete suggestions in the report are genuinely useful. Claude Code provides copy-and-pastable recommendations for making skills, agents, and hooks - specific actions you can take to improve your workflow. For most of us, there's room to make our AI interactions more repeatable and better encapsulated, and these reports should help with that.
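To give a flavor of what a hook looks like, here's a sketch of a Claude Code hook in a project's settings file. The matcher pattern and the formatting command are my own illustrative choices, not something from the report, and the exact schema is worth checking against Anthropic's documentation:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx prettier --write ."
          }
        ]
      }
    ]
  }
}
```

The idea is that after Claude edits or writes a file, the hook runs automatically, so the habit you'd otherwise have to remember is encapsulated in configuration.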
I ran /insights twice to see if the results were consistent. The reports were broadly similar but with different emphasis. Since I'd done some work with Claude between runs, I'm not sure whether this reflects random elements, the effect of the intervening sessions, or both. My best guess is that /insights weights recent work heavily, which makes sense for a tool meant to provide actionable feedback.
What struck me most was how futuristic the experience felt. In an era where AI can already build amazing things, getting this kind of personalized, insightful feedback about my usage patterns was one of the most "feels like the future" experiences I've had recently.
The lack of official documentation is a bit frustrating - as far as I can tell, there isn't any. But the feature is clearly powerful enough to explore on your own.
One technique I've been using that Claude Code's report validated: browser automation through scripting user flows. It feels like a technique that might soon be replaced by dedicated, optimized tooling, but for now, it's working well.
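To make the technique concrete, here's a minimal sketch of the kind of scripted user flow I mean. Everything here is hypothetical - the step names, selectors, and the `describe_flow` helper are mine for illustration - and in practice the steps get executed in Chrome by an automation tool, not by this script:

```python
# A scripted user flow represented as data: each step is an action,
# a target (URL or CSS selector), and an optional value.
from dataclasses import dataclass


@dataclass
class Step:
    action: str      # "goto", "fill", "click", "expect_text"
    target: str      # URL or CSS selector
    value: str = ""  # text to type or text to assert


# Hypothetical signup flow for an example site.
SIGNUP_FLOW = [
    Step("goto", "https://example.com/signup"),
    Step("fill", "#email", "test@example.com"),
    Step("click", "button[type=submit]"),
    Step("expect_text", ".confirmation", "Check your inbox"),
]


def describe_flow(flow):
    """Render the flow as numbered instructions Claude can run in the browser."""
    return [
        f"{i}. {s.action} {s.target}" + (f" -> {s.value!r}" if s.value else "")
        for i, s in enumerate(flow, start=1)
    ]


for line in describe_flow(SIGNUP_FLOW):
    print(line)
```

Keeping the flow as plain data makes it easy to hand the same steps to Claude repeatedly, which is exactly the kind of long-running session that skewed my metrics.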
If you're using Claude Code, I'd recommend trying /insights sooner rather than later. The feedback might be uncomfortable at times, but that discomfort is exactly what makes it valuable. It's practice for a world where AI won't just be a tool we use, but a colleague whose feedback we need to understand, evaluate, and sometimes push back against.
The skill of working with AI feedback - and justifying your work to AI - is going to be essential. Start practicing now.
