Feral Design: How Anthropic Watches the Wild

This prompt appeared in my terminal this morning.
"Can Anthropic look at your session transcript to help us improve Claude Code?"
Three options. Yes. No. Don't ask again.
It's a polite little permission dialog, and it's also — if you squint at it the right way — the most revealing artefact of how Anthropic is actually building AI products in 2026.
A few weeks ago I coined a term for what I think is happening: Feral Design. I wrote a short post about it after the Anthropic source code leak. This morning's prompt is the clearest evidence yet that I was pointing at something real. So let me take another pass at it, with more room to think.
The traditional playbook is dead
For the last twenty years, product design has followed a familiar loop:
Research → Design → Build → Ship → Feedback → Iterate
You interview users. You sketch flows. You run usability tests. You ship a feature behind a flag. You collect analytics. You iterate.
This works beautifully for tools where the problem space is understood and the user's job is reasonably stable. A calendar app. A CRM. A video editor. You can ask people what they need because they already know.
It falls apart completely for AI.
The problem with AI products is that nobody — not the user, not the PM, not the researcher — knows yet what this technology is for. The capability is so open-ended that asking "what do you want Claude to do for you?" returns the blank-page stare. The honest answer is: I have no idea, I'll know it when I see it.
If you design for that, you end up with ChatGPT's input box. A text field and a prayer.
The good stuff — the workflows, the daisy-chained agents, the slash commands, the overnight runs — doesn't come from research sessions. It comes from power users doing things the designers never imagined.
Which means the question shifts. It's no longer "what should we build?" It's "who's already building it, and how do we watch them?"
The Feral Design move
Feral (adj.): wild, especially after having escaped from captivity or domestication.
Here's the alternative loop I keep seeing Anthropic run:
Ship a bare tool → Users escape into the wild → They build what they actually need → Watch what survives → Productise the patterns
Claude Code is the cleanest example of it. When it launched it was, by most product standards, almost rudely minimal. A terminal. A prompt. A permission system. No UI. No workflows. No templates. No opinions about how you should use it.
Which sounds like a product that wasn't finished. But I think the minimalism was the design.
Because the terminal, it turns out, is the world's best research lab. There's no predetermined user journey. There's no happy path the designer has carved out in advance. If you want the tool to do something, you have to describe it, figure out how to break it into steps, and handle the mess yourself. The friction is the feature. Only power users show up, and the solutions they invent are shaped by real problems, not hypothetical personas.
And then Anthropic watches.
Antfooding, and what Boris called it
Boris Cherny, who leads Claude Code, uses a word for this that I love: antfooding. A play on "dogfooding," but scaled up. Not just "the team uses the product." More like "hundreds of engineers use the product every day, and we study the colony."
Antfooding is honest about what product development actually is when your users are more inventive than your roadmap. It says: we are not the authors of how this gets used. We are zoologists.
The Anthropic source code leak at the end of March backed this up, hard. Inside 1,900 files of Claude Code, the community found four features that were nearly finished but not yet shipped:
- KAIROS — an always-on background daemon mode
- AutoDream — fire-and-forget overnight task execution, with memory pruning
- Coordinator Mode — a hierarchical multi-agent system with approval gating
- Ultraplan — offloaded long-form planning to a remote Opus container
None of these are moonshots. Every single one of them is a pattern power users were already hand-rolling. The terminal natives had been stringing together nohup-style background loops, writing their own memory consolidation scripts, orchestrating subagents with scripts and MCP servers, and using worktrees to carve out long planning sessions.
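One of those hand-rolled patterns, the memory consolidation script, is simple enough to sketch. What follows is a minimal illustration, not Anthropic's AutoDream and not any particular user's script; the JSONL file format, the `ts` field, and the 14-day pruning rule are all my assumptions:

```python
import json
import time

def consolidate_memory(path, max_age_days=14, now=None):
    """Prune a JSONL memory file: drop entries older than max_age_days.

    Each line is assumed to be a JSON object carrying a "ts" Unix
    timestamp; both the format and the cutoff are illustrative choices.
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    with open(path) as f:
        entries = [json.loads(line) for line in f if line.strip()]
    kept = [e for e in entries if e.get("ts", 0) >= cutoff]
    with open(path, "w") as f:
        f.writelines(json.dumps(e) + "\n" for e in kept)
    return len(entries) - len(kept)  # how many entries were pruned
```

Run something like this nightly from cron or a nohup loop and you have a poor man's version of the memory pruning that AutoDream appears to formalise.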
Anthropic didn't invent those features. They observed them, cleaned them up, and built the infrastructure to support them properly.
That's feral design in action. Users went wild. The wild behaviour got domesticated.
Which brings us back to the prompt
Can Anthropic look at your session transcript to help us improve Claude Code?
Look at this question again with the feral lens on. It is not a generic telemetry opt-in. It is the observation mechanism. It is how the zoologist keeps watching the colony after the colony has moved off the lab bench and into real work.
Traditional analytics tell you what buttons got clicked. Transcripts tell you why something was attempted, how it was described, where it went sideways, and what the user did next when it didn't work. That's the entire dataset you need to find feral patterns. The raw shape of what people are actually trying to do.
Click counts can't show you a user building a three-agent orchestration because they wanted to review five pull requests in parallel. Transcripts can.
Click counts can't show you someone chaining slash commands to run their own daily standup ritual. Transcripts can.
Click counts can't show you the moment a user gives up on one approach and reinvents another from scratch. Transcripts can.
Session transcripts are the only instrument precise enough to find the wild behaviour. And that's exactly what the dialog is asking for.
Why this is smart, and why it's unsettling
Let me be clear about my own position here. I opted in. I think the bet Anthropic is making is correct, and I think the observation strategy is the right one for this moment in AI. You can't design for what nobody has invented yet. You can only watch inventors and learn.
But it is worth sitting with what this actually means as a product philosophy.
Your users are not just users. They are the R&D department. They are the research subjects. The ones spending the most, using the tool the hardest, running the weirdest experiments — those are the ones whose sessions Anthropic most wants to read. The Ultra plan, as I wrote last week, is essentially a live R&D lab disguised as a pricing tier. Remove the constraint of running out of compute, give power users unlimited runway, and watch what they invent with the freedom.
It works because the feedback loop is short and the signal is high. It's unsettling because the power dynamic is unusual. In traditional SaaS, heavy users are a cost centre to contain. In feral design, heavy users are the most valuable thing the company has, and the design of the platform is quietly tuned to maximise what can be learned from them.
That's not a criticism. It's a recognition. You should know which side of the glass you're on.
What this means if you build AI products
If you're building anything with AI right now, I think the single biggest design insight of 2026 is this:
Don't design the workflow. Design the observation layer.
Ship a bare tool with strong primitives. Make the friction high enough that only invested users show up. Give them real power, not training wheels. And then instrument the living hell out of what they do — transcripts, not click counts. Watch which hacks catch on. Watch which workarounds get shared. Watch which feature requests come up three times in the same week from users who don't know each other.
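To make "design the observation layer" concrete, here is one shape the mining step could take. Everything in it is hypothetical: the transcript structure, the string normalisation, and the three-user threshold are my assumptions, not Anthropic's pipeline:

```python
from collections import defaultdict

def recurring_patterns(transcripts, min_users=3):
    """Surface requests attempted independently by at least min_users
    distinct users. Each transcript is assumed to look like
    {"user": "u1", "requests": ["review five PRs in parallel", ...]}.
    """
    users_by_request = defaultdict(set)
    for t in transcripts:
        for req in t["requests"]:
            # Naive normalisation; a real pipeline would cluster
            # semantically similar requests rather than string-match.
            users_by_request[req.strip().lower()].add(t["user"])
    return sorted(req for req, users in users_by_request.items()
                  if len(users) >= min_users)
```

The design choice that matters is counting distinct inventors, not raw event counts: one obsessive user repeating a trick is noise, three strangers independently inventing it is a roadmap item.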
That's where the roadmap is.
The old "build what users ask for" maxim was already a half-truth in the SaaS era. In AI, it's actively misleading, because users don't know what to ask for until they've tried twenty weird things and found the one that stuck. Your job as a product team is to make the twenty weird things cheap, and the watching precise.
What this means if you're a power user
If you're using Claude Code (or Cursor, or any agentic tool) hard, you're doing two things at once: you're solving your own problems, and you're writing the next version of the product.
That's not a loss of agency. It's actually a reason to lean in. The edge of the tool today is where the product will be in twelve months. The weird orchestrations and shell scripts and MCP chains you're building are the early sketches of features that will ship as clean first-party capabilities when the patterns are validated.
I've felt this viscerally with my own Artificial Brain system. I built it — sessions, memory files, weekly reviews, consolidation loops — because I needed it, not because I knew Anthropic was building AutoDream. I didn't know AutoDream existed until the source code leak. But I was already running a hand-rolled version of it, and so were a bunch of other power users I respect.
That's the feral pattern. The need showed up first. The infrastructure catches up later.
If you're in that zone, doing something that feels clever, or a bit hacky, or that you're slightly embarrassed about because of how many shell scripts are holding it together, you are almost certainly pointing at a product that Anthropic (or someone else) will ship within the year.
You're not using the tool wrong. You're using it the way it's meant to be used: feral.
The prompt, one last time
That little dialog in my terminal this morning is doing something quietly profound. It's asking me to hand over the observation layer. Not my button clicks. Not my usage hours. The actual shape of what I'm trying to do — transcripts and all.
I said yes, because I think the deal is fair and the alignment is real. Anthropic gets to see what the wild is doing. I get a tool that gets dramatically better at doing what the wild is actually doing.
But I'd encourage anyone saying yes to understand what they're saying yes to. It's not "help us improve Claude Code" in the generic telemetry sense. It's "let us watch you invent things so we can ship them to everyone else." That's a real exchange, and it's one of the most interesting product design experiments running in tech right now.
Feral design isn't a phase. I think it's how serious AI products are going to get built from here on out.
The smart ones, anyway.