The Struggle is the Feature

Anthropic just published a study that says AI coding assistants make developers worse. The headline spread fast. "AI cuts developer skills by 17%!" Cue the panic.

But here's the thing — this isn't surprising. It's obvious. And the headline misses the point entirely.

What the Study Actually Found

Anthropic ran a randomized trial with 52 junior developers learning a new programming library. Half used AI assistance, half coded manually.

The results: AI users scored 50% on comprehension tests. Manual coders hit 67%. That's a 17-percentage-point gap, nearly two letter grades.

But the interesting part wasn't the gap. It was how people used the AI.

The AI group relied on it as a crutch. They accepted suggestions without understanding them. They built working code without building working knowledge. The artifact was correct. The learning was hollow.

This is the exact same dynamic I wrote about with UX research and AI-generated personas. The deliverable looks right. But the team that produced it didn't go through the struggle that makes the deliverable useful.

Why Friction Matters

I've been thinking about this since I built my Artificial Brain system. The whole point of it is to reduce friction — automate email triage, generate session summaries, manage tasks through voice commands. And it works. Brilliantly, most days.

But I've also noticed something uncomfortable: the sessions where I struggle most are the sessions where I learn most. When Claude misinterprets my intent and I have to rethink how I described the problem. When a subagent returns research that contradicts my assumption and I have to sit with the discomfort.

The struggle isn't a bug in the system. It's where the thinking happens.

The Real Divide

The Anthropic study frames this as "AI vs. no-AI." That's the wrong framing. The real divide is between people who use AI to skip the thinking and people who use AI to extend it.

A developer who accepts every Copilot suggestion without reading it is outsourcing cognition. A developer who uses Claude Code to generate a first draft and then spends 30 minutes understanding why each function was structured that way is using AI as a learning accelerator.

Same tool. Completely different outcome. The difference is whether you treat the friction as a problem to eliminate or a signal to engage with.

What This Means for Teams

If you lead a team that uses AI tools — and in 2026, that's most teams — the Anthropic study should change how you think about onboarding and skill development.

Don't ban AI for juniors. That's a losing battle, and it misses the point. Instead, design the work so that understanding is required, not just output: code reviews that ask "why did you choose this approach?" rather than just "does it pass the tests?"; pair programming where the human explains the AI's suggestions out loud; documentation that requires synthesizing, not just copying.

The goal isn't to slow people down. It's to ensure that the speed doesn't come at the cost of comprehension.

Where I Land

I use AI more than almost anyone I know. I run my entire professional life through Claude Code. And I think the Anthropic study is right — AI can make people worse. But it doesn't have to.

The study measured what happens when you give people a tool and no guidance on how to use it. That's not an AI problem. That's a design problem. The same way a calculator makes students worse at arithmetic if they never learn why long division works.

The skill isn't using AI or not using AI. The skill is knowing when the struggle is the feature — and leaning into it instead of around it.