The Editorial Layer Is Where AI Stops Being a Summary Tool

[Illustration: Copic-style drawing of two parallel conference tracks — a green sustainability track and a blue AI track — with a hand drawing a red dashed line between them. Caption: 'THE LINE NOBODY DREW'.]

A few weeks ago I noticed ITB Berlin had uploaded all 130 of their 2025 talks to YouTube. Travel is my world — I run an agency called Mogul that builds for tourism boards and operators — so this should have been gold. I started clicking through and realised immediately I couldn't watch a third of it. Nobody does.

So I built a thing to watch it for me. That thing became confdigest.io. Five conferences are live now, including ITB Berlin, Phocuswright, SXSW, and Evolve Digital Toronto 2026. I soft-launched it on LinkedIn this morning.

The interesting part of this project isn't the pipeline. The pipeline is boring infrastructure, and that's the whole point. What made the thing worth building — and what I think most AI-summary tools are missing — is the editorial layer sitting on top.

Seven two-hour sessions

Start to finish, this was seven two-hour sessions. Day one was the full scaffold — Next.js 15, Supabase with pgvector, full schema, 6-phase pipeline, Dockerfile, the lot. Day two was extracting 135 ITB transcripts in parallel with Claude Code subagents and then redesigning the entire site from a dark "AI dashboard" aesthetic into a light editorial one. Then three more build sessions to add multi-conference support, research generation, and content packages. Then one long launch-polish session — twenty commits, fixing accessibility, security, analytics, domain setup, markdown rendering, chat UX. And a tidy-up session after.

Claude Code did most of the typing. I did the arguing about what to build.

The base pipeline

Here's what happens when I point the system at a new conference:

  • 01-scrape-videos.ts pulls every video off the YouTube channel via the Data API
  • 02-transcribe.ts grabs the captions, npm package first, Python fallback for the tricky ones
  • Ten Claude Code subagents run in parallel and extract a 400–600 word summary, 5–12 specific topics, 5–12 concrete takeaways, and a format classification per video
  • Categories get derived from the topics
  • A chunker splits transcripts into ~800-character paragraphs, OpenAI embeds them into pgvector
  • Total cost: about a dollar per conference, because the Claude Code work runs on my Max subscription, not the API
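The chunking step is the only part of that list with a real design decision in it: where to split. A minimal sketch of how a ~800-character paragraph chunker could work — the function name and exact thresholds are my illustration, not the project's actual code:

```typescript
// Split a transcript into ~800-character chunks on paragraph
// boundaries: merge short paragraphs together, hard-split very
// long ones. (Hypothetical sketch, not the confdigest.io chunker.)
function chunkTranscript(transcript: string, target = 800): string[] {
  const paragraphs = transcript
    .split(/\n\s*\n/)
    .map((p) => p.trim())
    .filter(Boolean);

  const chunks: string[] = [];
  let current = "";

  for (const para of paragraphs) {
    // If adding this paragraph would blow past the target, flush.
    if (current && current.length + para.length + 1 > target) {
      chunks.push(current);
      current = "";
    }
    current = current ? `${current}\n${para}` : para;
    // A paragraph much longer than the target gets hard-split.
    while (current.length > target * 1.5) {
      chunks.push(current.slice(0, target));
      current = current.slice(target);
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Splitting on paragraph boundaries rather than fixed offsets keeps each embedded chunk a coherent thought, which is what makes the retrieval citations readable later.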

At runtime, the RAG chat embeds your query, runs a hybrid vector + full-text search, hands the top chunks to Claude Sonnet, and streams an answer with citations back. Standard stuff. You could build this in a weekend.
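The one non-obvious piece in that runtime flow is merging the two rankings. A common way to combine a vector ranking with a full-text ranking is reciprocal rank fusion — I don't know which fusion confdigest.io actually uses, so treat this as a sketch of the general technique:

```typescript
interface Hit {
  chunkId: string;
}

// Reciprocal rank fusion: each chunk scores 1/(k + rank) per list
// it appears in, so chunks found by BOTH vector and full-text
// search bubble to the top. k=60 is the conventional default.
function fuseResults(vectorHits: Hit[], textHits: Hit[], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const hits of [vectorHits, textHits]) {
    hits.forEach((hit, rank) => {
      scores.set(hit.chunkId, (scores.get(hit.chunkId) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

The fused top-N is what gets handed to the model as context, with the chunk IDs carried along so the streamed answer can cite its sources.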

Compared with earlier automations I'd built in N8N and make.com, the speed with just Claude Code is striking.

If that was all the site did, I would have lost interest in a week.

The editorial layer is where it got interesting

I started to think about what I would actually want from a conference — if someone who'd been there was sitting across from me, talking me through it. Not a summary. A point of view.

What struck me was how the system could look at the full conference and give a cross-track perspective. One of the best early insights from Evolve Digital: one track talked about sustainability, the other talked about AI — and the two never crossed. Nobody on stage connected them. The editorial layer surfaced that immediately.

Three writing agents run after the pipeline finishes. They don't just summarise — they argue.

Themes — one agent reads the full corpus of summaries and topics in a single context window and extracts exactly ten cross-cutting meta-themes. Each one comes with a confidence rating, 3–5 supporting sessions cited by title, and — this is the part that matters — 1–2 pieces of counter-evidence. If the theme can't survive a "but actually, these sessions disagreed" check, the agent has to say so.

Hypotheses — another agent generates ten to fifteen testable claims. Not observations. Claims like "AI will replace 40% of CMS content workflows by 2028". Then it pressure-tests each one against the sessions and issues a verdict: supported, partially supported, insufficient evidence, or contradicted. With supporting evidence, contradicting evidence, and an assessment paragraph.
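These two outputs are easiest to keep honest as a typed contract that the agent must fill completely. A sketch of what the theme and hypothesis shapes might look like — the field names are my guesses from the description above, not the project's actual schema:

```typescript
// Hypothetical output contracts for the theme and hypothesis agents.
type Verdict =
  | "supported"
  | "partially_supported"
  | "insufficient_evidence"
  | "contradicted";

interface Theme {
  title: string;
  confidence: "high" | "medium" | "low";
  supportingSessions: string[]; // 3–5 session titles
  counterEvidence: string[];    // 1–2 dissenting sessions — required
}

interface Hypothesis {
  claim: string; // testable, e.g. a dated, quantified prediction
  verdict: Verdict;
  supportingEvidence: string[];
  contradictingEvidence: string[];
  assessment: string;
}

// Reject any theme that can't survive the "but actually,
// these sessions disagreed" check.
function validateTheme(t: Theme): boolean {
  return (
    t.supportingSessions.length >= 3 &&
    t.supportingSessions.length <= 5 &&
    t.counterEvidence.length >= 1
  );
}
```

Making counter-evidence a required field, rather than a polite suggestion in the prompt, is what forces the agent to argue instead of summarise.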

Stakeholder briefs — this one is my favourite. Before writing, the agent analyses the corpus to figure out who this specific conference actually serves. Then it asks me to approve the list before writing. Evolve Digital Toronto got seven briefs — content strategists, marketing leaders, technical directors, and so on. Each brief is 2,000–4,000 words structured like an analyst report: exec summary, key findings, strategic implications, action items, top sessions to watch.

None of this is hard prompting. What's hard is making the prompts refuse to behave like a content marketer. Every theme must have counter-evidence. Every hypothesis must have a verdict. Every takeaway must be concrete — "use structured content" passes, "embrace the future" does not. Those constraints are what make the output read like a human editor wrote it instead of an LLM.
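The "concrete, not vapid" constraint can even be partially machine-checked before a takeaway is accepted. A toy validator — the banned-phrase list is mine, for illustration only:

```typescript
// Flag takeaways that read like content-marketing filler.
// This phrase list is illustrative, not the project's actual check.
const VAGUE_PHRASES = [
  "embrace the future",
  "game changer",
  "leverage synergies",
  "think outside the box",
];

function isConcreteTakeaway(takeaway: string): boolean {
  const lower = takeaway.toLowerCase();
  if (VAGUE_PHRASES.some((p) => lower.includes(p))) return false;
  // Cheap proxy for substance: concrete advice names something
  // specific, so require at least a few words.
  return takeaway.trim().split(/\s+/).length >= 3;
}
```

A gate like this won't catch every platitude, but it turns the style rule into a retry loop instead of a hope.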

An accidental B2B product

I built the content package generator as an afterthought. I was sitting in the audience at Evolve Digital Toronto 2026 — I gave a talk there about building a second brain in Claude Code — and thought, the organiser could use everything I've already indexed.

So I added a fourth agent. For every session, it generates fifteen pieces of marketing content in the organiser's voice: a 1,200–1,800 word SEO blog post, an event recap, a speaker spotlight, LinkedIn posts in two registers, a Twitter thread, Instagram and Facebook captions, pull quotes, key stats, FAQ entries, a newsletter blurb, hashtags, CTAs.

For Evolve Digital 2026 — 25 sessions — the pipeline produced 384 markdown files. A ZIP, a CSV, a Notion-importable folder. The brand voice config lives in one JSON file, so every piece reads like the same editor wrote it.
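A one-file brand voice config can be very small and still do a lot of work. This is my invented shape — the post doesn't show the actual schema — but it illustrates the idea of a single source of truth the generator reads for every piece:

```typescript
// Hypothetical shape of the single brand-voice config that keeps
// all 384 generated pieces sounding like the same editor wrote them.
interface BrandVoice {
  name: string;
  tone: string[];        // adjectives the copy should embody
  audience: string;      // who every piece is written for
  bannedWords: string[]; // phrases the copy must never use
  signOff: string;       // consistent closing line
}

const evolveVoice: BrandVoice = {
  name: "Evolve Digital Toronto",
  tone: ["direct", "practitioner-first"],
  audience: "digital marketers and content strategists",
  bannedWords: ["revolutionary", "game-changing"],
  signOff: "See you in Toronto.",
};
```

Because every agent reads the same object, changing the voice for a new client is one edit, not fifteen prompt rewrites.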

The interesting thing wasn't the volume. It was how the editorial layer and the marketing layer reinforced each other. The hypotheses gave the blog posts their arguments. The briefs gave the LinkedIn copy its audience framing. The themes became the anchor for the "here's what the conference was really about" narrative.

It turned a side project into a thing I could plausibly sell to a conference organiser as a deliverable. I haven't. But I could.

The design came from the browser, not Figma

I skipped Figma entirely for the V2 redesign. I've been building up my own UX and UI Claude skills for a while, but for this project I combined them with MCP servers into Mobbin and Nicelydone. That let me describe a design style and atmosphere in plain English, pull real examples from thousands of live SaaS interfaces, and use those as the direction — instead of starting from a blank Figma canvas.

From there it was move-from-dark-to-light, DM Serif Display for headlines, and the idea of colored "BigBox" sections — blue, yellow, green, purple, dark — as editorial blocks. Claude Code built each component straight into the live site.

The refine loop was the best part. I'd click around as a user — my own informal usability testing — then come back with changes: tweak the type, check accessibility, try different nav patterns. All faster than iterating in Figma. A week later I swapped the fonts again, to IBM Plex Serif and Plus Jakarta Sans, because DM Serif was too heavy. Ten-second decision, one commit.

For a solo project, that loop beats mockups. There's no handoff step. The design is the code.

The caveat: I wouldn't do this for a client build where multiple stakeholders need to agree before implementation. Mockups exist for a reason — they're cheap veto checkpoints. But I'm one person and the only stakeholder is the site. So the browser is the canvas.

What I'm not sure about yet

I don't know if anyone wants this. The soft launch went up on LinkedIn four hours ago. PostHog is wired in, the daily digest email starts tomorrow morning, and I'll see what the numbers do. If the answer is "nothing interesting," that's also a finding.

I also don't know whether the content package is a product or a feature. Right now it's a private script that runs against one conference at a time. Turning it into something a conference organiser could buy means pricing, packaging, some kind of deliverable review process. Real work.

What I do know is this: the sites that will matter in this wave of AI aren't the ones with the best search. They're the ones with a point of view. The pipeline is table stakes. The editorial layer is the product.