Anthropic's Ultra Plan Is a Live R&D Lab

The Ultra plan is not a pricing tier. It's a research programme — and you're the subject.

I've been a Claude Code power user since it launched. Max 20x subscriber, spending roughly $200/month. And the more I think about what Anthropic is doing with these top-tier plans, the more convinced I am that this isn't just a revenue play. It's one of the most elegant design strategies I've seen from any AI company: remove the constraints, give the heaviest users unlimited runway, and watch what happens.

That is the Ultra plan hypothesis.


The source code told the story

In late March 2026, Anthropic's npm registry accidentally shipped version 2.1.88 of Claude Code with a source map file still attached — 512,000 lines of code across 1,900 files, suddenly readable. The community tore into it immediately.

What they found was not a product in maintenance mode. It was a product with a full R&D pipeline running underneath it.

Four features stood out:

KAIROS: An "always-on background agent" — a fully built but unshipped autonomous daemon mode. It appears over 150 times in the source code. Not a sketch. Not a feature flag on a prototype. A nearly complete system gated behind a compile-time flag, waiting.

AutoDream: A fire-and-forget task execution layer. You describe a complex multi-step problem, kick it off, and Claude Code works through it while you're away. Its companion logic "merges duplicate memories, eliminates contradictions, resolves speculations" — pruning memory to stay coherent across long-running sessions.

Coordinator Mode: A hierarchical multi-agent architecture where one Claude instance has explicit authority over multiple workers, with shared memory and structured approval gating.

Ultraplan: A subagent that offloads complex planning to a remote cloud container running Opus, giving it up to 30 minutes of uninterrupted compute to think through architecture.
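The memory-pruning behaviour attributed to AutoDream is worth pausing on, because the idea is simple even if the implementation isn't public. Here's a minimal sketch of what "merge duplicates, eliminate contradictions" could look like in practice. Every name here is hypothetical; nothing below comes from Claude Code's source.

```typescript
// Hypothetical sketch: a "memory" is a keyed claim, and pruning keeps the
// newest entry per key. That one pass both merges duplicates and resolves
// contradictions (a contradiction is just a duplicate key with a newer value).
interface Memory {
  key: string;       // what the memory is about, e.g. "build.command"
  value: string;     // the remembered claim
  timestamp: number; // when it was recorded
}

function pruneMemories(memories: Memory[]): Memory[] {
  const latest = new Map<string, Memory>();
  for (const m of memories) {
    const existing = latest.get(m.key);
    // Later observations win, so stale or contradictory claims age out.
    if (!existing || m.timestamp > existing.timestamp) {
      latest.set(m.key, m);
    }
  }
  return [...latest.values()];
}
```

So if a session first records "the build command is `npm run build`" and later learns it's actually `pnpm build`, only the newer claim survives the prune. Real long-running agents presumably need something far subtler (semantic similarity, confidence weighting), but the shape of the problem is this.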

These are not moonshots. They're the next version of Claude Code, and they're nearly ready.
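The "compile-time flag" detail about KAIROS matters, because it's different from a runtime feature flag: the gated code can be stripped from the shipped bundle entirely. As a rough illustration of the mechanism (not Anthropic's actual code; every identifier here is invented), a bundler like esbuild can substitute a constant at build time via `--define`, and dead-code elimination then removes the disabled branch:

```typescript
// Hypothetical illustration. In a real build, KAIROS_ENABLED would be a
// constant injected at bundle time (e.g. esbuild's --define), so the
// disabled branch never ships in the public artifact at all.
const KAIROS_ENABLED = false; // flipped to true only in internal builds

function featureList(): string[] {
  const features = ["edit", "plan"];
  if (KAIROS_ENABLED) {
    // Dead code in public builds: the bundler can eliminate this entirely.
    features.push("background-agent");
  }
  return features;
}
```

That would explain why the feature was visible in a leaked source map but invisible in normal use: the map exposes the pre-elimination source, not the shipped behaviour.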


So why haven't they shipped?

Because Anthropic is validating them first — and the Max and Ultra plan subscribers are the validation mechanism.

Think about what the Max 20x plan actually is: 20 times the usage of a standard Pro subscription. A 5-hour rolling window that resets multiple times daily. Weekly compute ceilings designed for sustained, intensive use. Priority access to the newest models. It's not designed for a casual user who writes a few prompts a day. It's designed for someone running Claude Code for hours at a stretch, pushing agentic workflows, hitting the edges of what the system can do.
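A rolling window, as opposed to fixed reset times, is itself a tell: it smooths out bursty, sustained use rather than punishing it. For intuition, here's a toy version of how such a limiter behaves. The window length and cap are illustrative only; Anthropic's actual limits and implementation are not public.

```typescript
// Hypothetical sketch of a rolling usage window. Old events simply age
// out, so the window "resets" continuously rather than at fixed times.
const WINDOW_MS = 5 * 60 * 60 * 1000; // illustrative 5-hour window

class RollingWindowLimiter {
  private events: number[] = []; // timestamps (ms) of counted requests

  constructor(private readonly cap: number) {}

  // Record a request at time `now`; returns false once the cap is hit
  // within the trailing window.
  tryConsume(now: number): boolean {
    this.events = this.events.filter((t) => now - t < WINDOW_MS);
    if (this.events.length >= this.cap) return false;
    this.events.push(now);
    return true;
  }
}
```

The practical consequence for a power user: you're never waiting for a scheduled reset, you're waiting for your own earlier usage to fall out of the trailing window.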

And when you're hitting those edges, you're generating exactly the kind of signal Anthropic needs. What kinds of tasks do power users actually run? Where does the model drop the ball on long-horizon reasoning? When does multi-step autonomy go sideways? At what point does the user step in?

They can't simulate this in a lab. They can observe it at scale.


This is not a new strategy

When GitHub launched Copilot's early access, they weren't just generating revenue from the waitlist. They were watching how professional developers actually integrated AI assistance into their existing workflows before building Copilot Workspace around those observed patterns.

When Tesla shipped FSD beta to a subset of power users, it wasn't primarily about customer satisfaction. It was about capturing edge cases at scale that simulation could never reproduce.

The pattern is consistent: remove constraints from your most demanding users, observe what emerges, productise for the mainstream.

Anthropic is running the same playbook, but with higher stakes and better instrumentation.


What this means if you're a power user today

You are, whether you know it or not, doing R&D. The way you use Claude Code right now is informing what Claude Code becomes for everyone else in 12 months.

That's not a criticism. It's actually a reason to lean in harder.

If you're on Max 20x and you're still using Claude Code like a smarter autocomplete, you're leaving the interesting part on the table. The edge of the tool is where the next version of the tool is being designed.

Run the long agents. Try the agentic workflows that feel like they might break. Push into the multi-session, multi-step work that you'd normally manage manually. Not because Anthropic needs you to be their QA team — but because that's where you'll find the 10x productivity gains before they're packaged and marketed to everyone.


One thing I keep thinking about

The leaked source code also mentioned Undercover Mode — a feature that lets Claude commit to public git repositories without leaving a signature or any sign it was active.

I don't know what to make of that. Maybe it's a capability built for specific enterprise partnerships. Maybe it's something else entirely. But it's a reminder that the roadmap visible from the outside is a fraction of what's actually being built.

The Ultra plan isn't just removing limits on what you can do. It may also be removing limits on what Anthropic can learn from watching you do it.

I think that's a fair trade. I just think it's worth knowing that's the deal.