The Self-Improving Loop: How Claude Code’s /insights Turns Usage Data Into Compounding Returns
109 sessions. 97 hours. 3 days. One report that changed how I work.
Earlier today I ran /insights in Claude Code for the first time. I'm getting a lot of output from the $200 Max plan!
1,429 MESSAGES · +9,885/−599 LINES · 126 FILES · 3 DAYS · 476.3 MSGS/DAY
It analyzed my recent sessions and produced a shareable HTML report — a detailed breakdown of what I’d been building, where I’d been wasting time, and what I should do differently.
That alone would be useful. But the real power isn’t the report. It’s what happens when you feed it back in.
What /insights Actually Shows You
The command analyzes your Claude Code sessions and generates a structured report covering five areas:
Project areas — what you’ve been working on, clustered by theme
Interaction style — how you use the tool (are you a delegator? a micromanager? somewhere in between?)
Impressive workflows — what’s working well and worth doubling down on
Friction analysis — where things went wrong, categorized by root cause
Suggestions — concrete CLAUDE.md additions, features to try, and usage pattern improvements
Here’s my actual report — sanitized of sensitive details but otherwise exactly what the tool produced.
My Numbers Were Humbling
The report revealed that across 109 sessions, I’d been running Claude Code as what it called “a massively parallel operations engine.” That tracks — I was migrating 17 GitHub Pages sites to Cloudflare, deploying WhatsApp drip campaigns to 36 students, building email workers, and auditing documentation across multiple course repositories. All in three days.
But the friction analysis was where the real value lived.
The top finding: 9 of my friction events were "wrong approach" — Claude diving into my local filesystem when I asked a general knowledge question. I asked it to explain Cloudflare services. Three separate times, across three separate sessions, it grep'd my repos for Cloudflare config files instead of just… answering the question.
I’d been annoyed by this in the moment but hadn’t connected the dots. The report did.
The second finding: My observer/memory agent sessions were producing almost nothing useful. I’d been spawning them for nearly every session — a habit I’d picked up from the multi-agent orchestration pattern. But most of them captured minimal context or, worse, hallucinated summaries before any real work had happened. The report’s recommendation was blunt: “Reserve observer agents for sessions expected to last 30+ minutes with complex multi-step work.”
The third finding: Infrastructure deployments kept hitting mid-stream pivots. Wrong SSL config. Platform limitations discovered too late. The report suggested a one-line fix: have Claude state its plan and check constraints before executing.
The Feedback Loop
Here’s where it gets interesting. The report includes a section called “CLAUDE.md additions” — specific lines you can paste into your configuration file. These aren’t generic tips. They’re derived from your actual failure patterns.
One of mine:
“When asked a general knowledge question (e.g., ‘explain Cloudflare’, ‘what is X’), answer directly from knowledge first. Do NOT search the local codebase unless the user specifically asks about local usage.”
That’s a rule generated from my data, targeting my specific friction point. When I add it to CLAUDE.md, every future session benefits. The mistake that burned me three times won’t happen a fourth.
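Here's how a rule like that might sit inside a CLAUDE.md file. This is a sketch: the section heading and comment are mine, not part of the report's output.

```markdown
## Answering style
<!-- Added after /insights flagged repeated "wrong approach" friction -->
- When asked a general knowledge question (e.g., "explain Cloudflare",
  "what is X"), answer directly from knowledge first. Do NOT search the
  local codebase unless the user specifically asks about local usage.
```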
This is the loop:
Work with Claude → /insights analyzes sessions → Report identifies friction
→ You add fixes to CLAUDE.md → Claude reads them next session → Less friction
→ /insights runs again → Finds new, subtler issues → Repeat

Each cycle raises the floor. The first round catches the obvious stuff — the Cloudflare-grep mistake, the pointless observer agents. The second round catches subtler patterns. The third round starts optimizing things you didn’t know were slow.
This is compound engineering applied to your own tools. “Use it before you improve it” — the rough prototype survives contact with reality and teaches you where to refine. The insights report is the mechanism that surfaces what reality is trying to tell you.
The Mental Model: Your AI Gets a Performance Review
Think of it this way: CLAUDE.md is your agent’s onboarding document. It tells Claude how to behave in your projects — what commands to run, what conventions to follow, what to avoid.
But onboarding docs go stale. They capture what you thought would matter, not what actually matters after weeks of real work.
/insights is the performance review. It looks at what actually happened — the wrong approaches, the wasted effort, the workflows that clicked — and generates updated onboarding instructions based on evidence.
And unlike a human performance review, you can paste the output directly into the system and have it take effect immediately.
The even more powerful version: feed the entire insights report back to Claude in a new session and ask it to rewrite your CLAUDE.md. It has the context of what went wrong, the suggested fixes, and can integrate them into your existing configuration while respecting what’s already there.
claude -p "Here's my /insights report: [paste report].
Update my CLAUDE.md to incorporate the suggested improvements.
Don't remove existing rules — merge the new ones in."

Now your AI agent is improving its own instructions based on its own performance data. That’s not science fiction. That’s a Tuesday afternoon.
What I Changed
After running /insights, I made three changes:
1. Added the knowledge-question rule to CLAUDE.md. No more filesystem searches when I ask “what is a Cloudflare Worker?”
2. Stopped spawning observer agents by default. I now only use them for sessions I expect to run 30+ minutes with complex multi-step work. This alone probably saves 20% of my session overhead.
3. Added a pre-deployment constraint check. Before any infrastructure work, Claude now states its plan and lists potential gotchas. The SSL/DNS pivot that burned an hour? It would have been caught in 30 seconds.
These aren’t revolutionary changes. They’re small, evidence-based adjustments. But they compound. Every session is slightly less wasteful than the last. Over 100+ sessions a week, that adds up to hours reclaimed.
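Changes like #1 can even be scripted so each /insights cycle adds its new rules without duplicating old ones. A minimal sketch — the `add_rule` helper and the condensed rule wording are my own, not from the tool:

```shell
#!/bin/sh
# Sketch: add a friction-derived rule to CLAUDE.md without duplicating it.

add_rule() {
    # $1: path to CLAUDE.md, $2: rule text.
    # Appends the rule as a bullet only if that exact line isn't already
    # present, so re-running after every /insights cycle is safe.
    grep -qxF -- "- $2" "$1" 2>/dev/null || printf '%s\n' "- $2" >> "$1"
}

add_rule CLAUDE.md "When asked a general knowledge question, answer from knowledge first; do not search the local codebase unless asked about local usage."
```

Running the script twice leaves exactly one copy of the rule in the file.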
The Broader Pattern
This loop — work, reflect, adjust, repeat — isn’t new. Athletes watch game film. Musicians record practice sessions. Writers edit their drafts.
What’s new is that the reflection is automated, the adjustments are machine-readable, and the improvements take effect immediately. There’s no willpower required, no habit to build. You paste a line into a config file and the behavior changes.
If you’re using Claude Code (or any AI coding assistant with configurable instructions), here’s the minimum viable version of this loop:
Work normally for a week. Don’t optimize prematurely.
Run /insights (or manually review your session history).
Identify your top 3 friction patterns. What mistakes keep repeating?
Add rules to your config. Be specific. “Don’t search files for knowledge questions” beats “be smarter.”
Wait another week. See if the friction drops.
Repeat.
The goal isn’t perfection. It’s a ratchet — each cycle clicks forward, and you never slip back.
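The merge step — feeding the saved report back to Claude — can be wrapped in a small script. A sketch, assuming the report was exported as insights.html; the `build_prompt` helper is my own, and `claude -p` is the one-shot print mode the article uses above:

```shell
#!/bin/sh
# Sketch of the CLAUDE.md merge step. The report filename (insights.html)
# and the build_prompt helper are assumptions, not an official workflow.

build_prompt() {
    # $1: path to a saved /insights report
    printf "Here's my /insights report: %s\nUpdate my CLAUDE.md to incorporate the suggested improvements.\nDon't remove existing rules; merge the new ones in.\n" "$(cat "$1")"
}

# Keep a rollback copy before letting the agent rewrite its own rules.
[ -f CLAUDE.md ] && cp CLAUDE.md CLAUDE.md.bak

# Only fire when both the report and the claude CLI are actually present.
if [ -f insights.html ] && command -v claude >/dev/null 2>&1; then
    claude -p "$(build_prompt insights.html)"
fi
```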
Try It Yourself
If you’re on Claude Code, run /insights and see what surfaces. The report is a standalone HTML file you can share, archive, or feed back into your next session.
My full report is here if you want to see what the output looks like in practice. Fair warning: 97 hours of AI collaboration produces a lot of data.
The most valuable part isn’t the pretty charts or the project summaries. It’s the friction analysis — the honest accounting of where your AI partnership is leaving value on the table. That’s where the compounding starts.
This article was written collaboratively with Claude Code — the same tool being analyzed. Yes, the recursion is intentional.


