The Editorial Layer Is Where AI Stops Being a Summary Tool
How I built confdigest.io in seven two-hour sessions — and why the interesting part isn't the RAG pipeline, it's what happens after the transcripts are indexed.
A polite permission dialog in my terminal revealed something bigger — how Anthropic is building AI products by watching power users invent features before shipping them.
The Ultra plan is not a pricing tier. It's a research programme — and you're the subject.
I once designed engagement features for a betting company. Years later, I see the same patterns in every social media app. This is what that feels like from the inside.
Training gets the headlines. Inference is the real energy story. And local models might be the answer nobody's talking about.
AI can handle 94% of software engineering tasks in theory. It handles 33% in practice. The gap between those numbers is where the real story lives.
I've been an Apple guy for 15 years. Then I started building things — and my phone started feeling less like home and more like a landlord.
AI removes the cost of doing more. It doesn't help you recognise enough. The skill isn't using AI — it's knowing when to stop.
Most people obsess over which AI model to use. The single most impactful thing they could change is a markdown file sitting in the root of their project.
Anthropic's study says AI makes developers worse. The headline misses the point — learning requires friction, and the real divide isn't AI vs no-AI.
I've tried every AI tool and second-brain app. Then I built an Artificial Brain with Claude Code that actually works. The soul document explains why.
AI can amplify motivation or destroy it. The technology is neutral. How you introduce it to your team is everything.
The travel industry loves the K-shaped economy narrative. Chase luxury, ignore budget. But the top line has a problem too — and nobody's talking about it.
In 10 years people will say: wait, every brand had a website AND an app? They did different things? How did we make it so hard?
AI-generated code is in production everywhere. Nobody wants to talk about what that means for quality, responsibility, and the developer's role.
Hallucinations aren't bugs. They're baked into how models are trained. A new paper from OpenAI shows a fix — and it raises bigger questions.
The stat is real. The panic is overblown. Here's what the MIT NANDA report actually says — and what leaders should take from it.
OpenAI has burned $13 billion with no clear path to profit. The utility is real — but the economics might not be. Here's what the numbers actually say.
I've spent $400 on Claude Code in the past month. That's a problem — not for me, but for Anthropic. The economics of AI coding tools are quietly breaking.
AI won't wipe out software. But it will expose every SaaS product that forgot about the user. Salesforce, I'm looking at you.
GPT-5 looks underwhelming on paper. But after testing it for coding, health queries, and daily use, I think OpenAI is playing a different game than we expected.
Every other LinkedIn post says consulting is dead. The reality is messier — and the real lesson is about what happens when product companies try to become service companies.
GPT-5 ends the model-selection paralysis of earlier AI systems: adaptive reasoning adjusts to the task, so you no longer have to guess which model is right for the job.
OpenAI's GPT-5 pairs adaptive intelligence with stronger coding and competitive pricing, raising the bar on both benchmarks and usability for tech and design professionals.
We built an AI support agent that could process emails, create tickets, and draft replies. Then we hit the real problem: trusting it to run without us watching.
Apple's culture of polish and control is at war with AI's chaotic brilliance. The hesitation might be wisdom — or it might be a strategic blindspot.
When OpenAI's $3B Windsurf deal unravelled, Google swooped in. What the collapse reveals about business intrigue, tech strategy, and the future of AI-driven development.
I tested Codex with a glass of wine and was confused. Then I watched the release videos and the penny dropped: this isn't for vibe coders. It's for tech debt.
Unit tests don't work when your software makes its own decisions. Here's what I've learned about testing autonomous AI systems in production.
For a decade, it was easier to pay $30/month than build a solution. AI just changed that equation — and the bloat is showing.
I studied AI at Glasgow University in 2001. The philosophical questions we debated then are now pressing concerns for society. Here's what practitioners need to know.
An MIT study shows LLMs improve output but erode learning depth. Here's why process, not just results, matters in tech and UX research.
Vibe coding gets a bad rap. Here's how I use a three-artifact framework to ship complex apps with AI without losing my mind or my architecture.