How I Ship Enterprise Apps with AI — The Structured Art of Vibe Coding

Last month I shipped a complete learning management system for a tourism destination in Tahiti. Custom branding, multi-language support, agent certification workflows, analytics dashboard. The whole thing. A year ago, this would have been a 3-month project with a team of four. I did it in six weeks, mostly solo, with Claude Code as my co-pilot.
People call this "vibe coding." I think that's a bad name for what's actually happening.
The Problem with Vibe Coding's Reputation
Vibe coding gets dismissed as chaotic, unserious, or "non-engineered." And honestly, the way most people do it, that criticism is fair. Dump a vague prompt into an AI, accept whatever comes back, iterate through 30 back-and-forth messages until something kind of works. Ship it. Pray.
That's not engineering. That's gambling with extra steps.
But the problem isn't AI-assisted development. It's AI-assisted development without structure. Left unchecked, vibe coding becomes noise. With a framework, it becomes signal.
The Three Artifacts
Every project I build with AI starts with three documents:
The Spec — Defines what we're building. I don't write this alone. I co-author it with Claude. I feed it my rough notes and use a prompt that tells the AI to ask me one question at a time, building on each answer. The AI becomes interviewer, product manager, and sparring partner.
This process surfaces things I hadn't considered. Edge cases. User flows I'd assumed were obvious. Technical constraints I'd filed under "deal with later." The spec is the most underrated artifact in AI-assisted development because it's where the thinking happens.
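The interviewing prompt itself doesn't need to be elaborate. Here's a hypothetical version of the kind of prompt described above — the exact wording is illustrative, not a canonical template:

```
You are helping me write a product spec. I'll paste my rough notes.
Ask me exactly one question at a time, starting with the most
important open decision. Build each question on my previous answers.
Flag edge cases, user flows, and technical constraints I haven't
mentioned. When a section is settled, draft it and ask me to confirm
before moving on.
```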
The Blueprint — Defines how we build it. I pass the spec into Claude with a prompt asking for a step-by-step implementation plan broken into small, iterative chunks. Each chunk is small enough to complete in one focused session, big enough to make visible progress. Claude refines each chunk into granular tasks with testing strategies.
The To-Do List — The macro view. A markdown checklist generated from the blueprint. As I work through tasks, I check off completed items. This serves double duty: it keeps me on track and it serves as memory for future AI sessions. When I reload a conversation, Claude reads the checklist and knows exactly where we left off.
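To make the "memory" idea concrete, here's a minimal sketch of how a checklist like this can be read programmatically — the file contents and task names are invented for illustration, not taken from the actual Tahiti project:

```typescript
// Hypothetical excerpt of a generated to-do checklist (todo.md).
const todo = `
- [x] 1. Scaffold Next.js project with Prisma and Postgres
- [x] 2. Implement auth (NextAuth: Google + email providers)
- [ ] 3. Build agent certification workflow
- [ ] 4. Add multi-language support
`;

// Parse "- [x]" / "- [ ]" items so a fresh session (AI or human)
// can see at a glance what is done and what comes next.
function parseChecklist(md: string) {
  const items = [...md.matchAll(/^- \[([ x])\] (.+)$/gm)].map((m) => ({
    done: m[1] === "x",
    task: m[2],
  }));
  return {
    done: items.filter((i) => i.done).length,
    total: items.length,
    next: items.find((i) => !i.done)?.task ?? null,
  };
}

console.log(parseChecklist(todo));
// { done: 2, total: 4, next: "3. Build agent certification workflow" }
```

The same information is what the AI extracts when it reads the file at the start of a session — the checklist is a state file that happens to be human-readable.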
Why This Works
The framework hits the sweet spot between control and creativity: enough structure that the AI stays anchored, enough looseness that the work still moves fast.
On the Tahiti LMS, the spec alone took two hours of back-and-forth with Claude. By the time we started coding, every major decision was documented. The tech stack was chosen with rationale. The data model was sketched. Edge cases were listed and addressed.
This meant the actual coding sessions were fast and focused. No "wait, what should the auth flow look like?" No "actually, let's restructure the database." Those decisions were already made, captured in the spec, and available for Claude to reference in every session.
The blueprint broke six weeks of work into 40-something tasks, each with clear acceptance criteria. I didn't have to manage the project in my head. The markdown file managed it. And because Claude could read the file, it always knew what was done and what was next.
Research Is the Invisible Foundation
Before touching any code, I spend 15-20 minutes gathering context. What APIs are available? What constraints exist? What tools and libraries will we use? Documentation links, endpoint references, tech choices — all collected upfront.
AI performs best when it's anchored. Every vague prompt is an invitation for the model to guess. Every specific reference is an anchor that keeps the output on track. The research step isn't glamorous but it's the difference between "Claude, build me an auth system" (bad) and "Claude, build auth using NextAuth with Google and email providers, storing sessions in Postgres via Prisma, following the patterns in our existing user model" (good).
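The "good" prompt pins down enough detail that the output is largely predetermined. As a sketch of what that anchored request might produce — assuming NextAuth v4 with the official Prisma adapter (module names per those libraries' documentation; the environment variable names are placeholders):

```typescript
// auth config sketch: NextAuth v4 + Google/email providers,
// sessions stored in Postgres via the Prisma adapter.
import NextAuth, { type NextAuthOptions } from "next-auth";
import GoogleProvider from "next-auth/providers/google";
import EmailProvider from "next-auth/providers/email";
import { PrismaAdapter } from "@next-auth/prisma-adapter";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export const authOptions: NextAuthOptions = {
  adapter: PrismaAdapter(prisma),       // sessions/accounts in Postgres
  providers: [
    GoogleProvider({
      clientId: process.env.GOOGLE_CLIENT_ID!,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
    }),
    EmailProvider({
      server: process.env.EMAIL_SERVER,  // SMTP for magic links
      from: process.env.EMAIL_FROM,
    }),
  ],
  session: { strategy: "database" },     // DB-backed, per the prompt
};

export default NextAuth(authOptions);
```

Every line above was implied by the prompt. That's the point: the research and the anchors do the specifying, so the model has almost nothing left to guess.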
The Part Nobody Talks About
Here's the honest bit: this framework doesn't eliminate mistakes. Claude still writes bugs. It still misunderstands requirements sometimes. It still generates code that works in isolation but breaks when integrated.
But the framework makes those mistakes catchable and recoverable. When the spec is clear, you know immediately when the output deviates. When the blueprint has tests defined for each chunk, you catch problems early. When the to-do list tracks what's done, you don't accidentally redo work.
The framework doesn't make AI perfect. It makes AI manageable.
Where I Land
Vibe coding isn't the future of software development. Structured AI-assisted development is. The difference is whether you're working with AI as a co-architect or using it as a magic 8-ball.
The three-artifact approach — spec, blueprint, to-do list — isn't complicated. It takes maybe 3-4 hours of setup for a major project. And it saves weeks of the confusion, rework, and "what were we building again?" that unstructured AI development inevitably produces.
I've shipped four enterprise applications this way in the past six months. Tahiti. Grenada. Kenya. Fiji. Each one was faster than the last, not because the AI got better, but because the framework got tighter.
Structure meets intuition. That's how the work gets done.