AI Can Write Your Essay—But It Might Also Rot Your Brain

Artificial intelligence can write a near-perfect essay in seconds. It can summarize studies, generate personas, and even brainstorm ideas. But as recent MIT research reveals, that convenience comes at a cost—one that tech professionals, especially in UX and product design, can't afford to ignore.

The Problem with Letting AI Think for Us

The MIT study titled "Your Brain on ChatGPT" explored what happens when people rely on large language models (LLMs) like ChatGPT to write essays. Researchers found something startling:

  • People who used LLMs couldn't remember what they wrote.
  • They didn't feel like they had written it.
  • Even after stopping LLM use, the cognitive damage lingered.

Let me be clear: these findings don't surprise me.

As I've long argued, AI can accelerate design research, but it cannot replace the process. The act of creating user personas or journey maps isn't primarily about the artifact itself. It's about the learning, the friction, and the collaboration that build shared understanding.

Productivity vs. Depth: The Trade-Off

The study showed LLM users were 60% more productive. They experienced less frustration and finished tasks faster. But their comprehension? Weaker. Their memory? Poor. Their writing? Polished, but soulless.

This encapsulates a tension many of us face. Do we want speed, or do we want depth? LLMs lower the cognitive load, but that's not always a good thing. When you stop struggling, you also stop learning.

In UX, it's no different. Speeding up research by generating insights with AI may feel efficient, but without deep engagement, the design loses its edge. You get generic personas. Predictable journeys. Insights that sound correct but don't change minds.

Remembering Matters—Even in Design

Researchers found that students who used only their brains or traditional search engines could recall and quote their essays far more accurately than those who used LLMs. The LLM group? 0% quoting accuracy.

That's not just a memory failure. It's a failure of integration. Ideas were never truly internalized.

This insight isn't new. Field Notes, the beloved notebook company, captured it perfectly in their motto:

"I'm not writing it down to remember it later, I'm writing it down to remember it now."

The act of documentation—whether with pen and paper or through research synthesis—has always been about encoding knowledge in our minds, not just storing it externally.

[Image: my notebook from when I was frustrated with the beach towel.]

In any thoughtful discipline—Design, UX, software engineering, product strategy—memory isn't trivia. It's how we build intuition. It's what allows us to connect dots across user interviews, usability tests, and analytics. Without it, we're just chasing outputs.

Top-Down vs. Bottom-Up: A Cognitive Shift

The study also showed a fascinating pattern in brain activity:

  • LLM users processed information top-down—from AI's suggestions to mental refinement.
  • Brain-only users built understanding bottom-up—constructing ideas from fragments, synthesizing them into meaning.

Why does this matter?

Because creativity, innovation, and insight are inherently bottom-up processes. Especially in UX, where discovering what's not said—and what's not obvious—is where the gold lies.

LLMs Create Echo Chambers and Homogeneity

One of the study's subtle but serious warnings: LLMs lead to sameness.

Essays written with LLMs were statistically homogeneous. Search engine users and brain-only writers showed far more creativity and diversity.

We see this in design tools today. Feed the same prompt into ten different LLM-based UX tools, and you'll get eerily similar personas or flows. It's efficient, yes—but not insightful.

Innovation doesn't live in the center of the bell curve. It lives at the edges.

Why Process > Artifact in UX Research

This is why I continue to emphasize an uncomfortable truth: AI can 100% fill the gap of no research. But it can't replace the value of doing the research.

When teams generate a persona from ChatGPT, they get a passable picture. But what they miss—

  • The debates about assumptions.
  • The surprise insights from interviews.
  • The cross-functional alignment on priorities.

—those are the things that shape great products.

I've seen teams go from directionless to laser-focused simply by arguing over the finer points of a journey map. That shared cognitive load is the work.

Use It—But Don't Lose It

So how should we use LLMs?

Use them, absolutely. But here's what I recommend:

  • Start with your brain. Form your own hypotheses before letting AI confirm or deny them.
  • Use AI to accelerate, not originate. Let it support your work, not replace your thinking.
  • Always validate. LLMs hallucinate. Check sources, triangulate insights.
  • Protect your process. If an artifact matters to your team's learning, don't skip the human steps.

The Path Forward

The MIT study reminds us that AI is not neutral. It shifts how we think, how we remember, and what we value.

As LLMs become more embedded in our workflows, we must differentiate between output and understanding. Between faster and better. Between doing the work and knowing why it matters.

AI is a powerful assistant. But it's not a substitute for thinking. And in our field—where empathy, discovery, and clarity are everything—that distinction is critical.

Be curious. Use AI. Stay human.

My Take

This study is not surprising, and it's one of the reasons I have always spoken about AI accelerating UX user research, not replacing it. The fact that an LLM can generate a user persona or journey map is not the point of creating those artifacts. The value lies in what is NOT in the final asset as much as what is in it, and in what the team has learned in the process. AI can 100% fill the gap of no research.