User Stories That Actually Get Built: Writing Specs AI Can Execute
Stop getting generic AI output. Learn the Spec Package format that produces working code on the first pass from Bolt.new and Claude Code.

DevForge Team
AI Development Educators

Why Traditional Stories Fail AI Tools
A user story is a communication tool. When the format emerged in the 1990s, it carried intent from a product owner to a human developer, who could ask questions, apply judgment, and fill in the blanks from context.
AI coding tools are a fundamentally different audience. They do not ask clarifying questions during code generation. They do not apply soft contextual judgment. They fill every ambiguity with the most generic, statistically average interpretation of your intent.
"Build a dashboard" produces a generic dashboard because "generic dashboard" is what the training data says a dashboard looks like.
"As a developer, I want a dashboard that shows real-time API health metrics with response time percentile charts, error rate trends, and a threshold alerting panel" — that produces something specific and useful.
The story IS the prompt. A well-structured user story, paired with acceptance criteria and a mockup description, is the most effective prompt format available for AI coding tools.
The Spec Package Format
A Spec Package is the complete input set that produces high-quality AI output on the first attempt. It has four elements:
Element 1: User Story — the intent
*As a [role], I want [goal], so that [benefit].*
Element 2: Acceptance Criteria — the behavior in Given/When/Then format
Five or more criteria covering: the happy path, validation, error states, edge cases, and empty states.
Element 3: Mockup Description — the appearance as structured text
Layout description, key components, responsive behavior, and all states (loading, empty, error, populated).
Element 4: Technical Constraints — the implementation
Tech stack, specific libraries, patterns to follow, performance requirements, and constraints.
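The four elements can be sketched as a data structure. This is a hypothetical shape for illustration only: the interface names and fields below are our own, not a format any tool requires.

```typescript
// Hypothetical types modeling the four Spec Package elements.
interface AcceptanceCriterion {
  given: string; // precondition
  when: string;  // action
  then: string;  // observable outcome
}

interface SpecPackage {
  story: { role: string; goal: string; benefit: string }; // Element 1: intent
  criteria: AcceptanceCriterion[];                        // Element 2: behavior
  mockup: string;                                         // Element 3: layout, components, states
  constraints: string[];                                  // Element 4: stack, patterns, limits
}

// Render Element 1 back into the canonical story sentence.
function storySentence(p: SpecPackage): string {
  const { role, goal, benefit } = p.story;
  return `As a ${role}, I want ${goal}, so that ${benefit}.`;
}
```

Treating the package as structured data like this makes it easy to lint for completeness (e.g. "at least five criteria") before it ever reaches the AI tool.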
From Story to Prompt: A Complete Example
Here is a feature with three user stories that demonstrates the format:
Stories:
- As a writer, I want to save article drafts so that I can continue writing later
- As a writer, I want to publish drafts when ready so that readers can see them
- As a reader, I want to browse published articles so that I can read content I care about
Assembled Spec Package for Bolt.new:
TECH STACK: React 18, TypeScript, Tailwind CSS, Supabase (auth + database)
DESIGN: Dark theme (#0F172A bg, #1E293B cards, #F59E0B accent, Inter font, 8px grid)
FEATURE: Article publishing system
Story 1 — Save Drafts:
As a writer, I want to save article drafts, so that I can continue writing later.
Given I am on the write page, when I click "Save Draft", then the article saves with status "draft" and I see "Saved" confirmation.
Given I navigate away, when I return to the write page, then my draft is pre-populated in the editor.
Story 2 — Publish:
As a writer, I want to publish my draft, so that readers can see it.
Given I have a saved draft, when I click "Publish", then the article status changes to "published" and it appears in the public feed.
Story 3 — Browse Articles:
As a reader, I want to browse published articles, so that I can find content to read.
Given I am on the home page, when I scroll the feed, then I see cards with title, author, date, and excerpt.
SCREENS: Write page (Monaco editor + title input + sidebar with save/publish), Home feed (article cards in 2-column grid), Article view (full content, author info, back button)
CONSTRAINTS: Supabase RLS — readers see only published articles; writers see their own drafts; no Redux.
This level of prompt detail produces a running application close enough to correct that you need only one or two targeted fixes, not a full redo.
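Story 2's acceptance criterion can be sketched as a pure state transition. The Article type and publish function below are hypothetical illustrations of the behavior the spec describes, not code the spec mandates:

```typescript
// Hypothetical Article model implied by the stories above.
type ArticleStatus = "draft" | "published";

interface Article {
  title: string;
  body: string;
  status: ArticleStatus;
  publishedAt?: string; // ISO timestamp, set on publish
}

// Publishing flips the status so the article appears in the public feed;
// publishing an already-published article is a no-op.
function publish(article: Article, now: Date = new Date()): Article {
  if (article.status === "published") return article;
  return { ...article, status: "published", publishedAt: now.toISOString() };
}
```

Spelling out the transition like this is exactly the kind of detail the Given/When/Then criterion encodes: "status changes to published" becomes a checkable behavior instead of an implied one.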
The Iteration Difference
| Prompt Quality | First Output | Fixes Needed | Total Time |
|---|---|---|---|
| Vague ("build a blog") | Wrong shape | 3-5 full iterations | 2-3 hours |
| Spec Package | Right shape, minor details wrong | 1-2 targeted fixes | 30-45 minutes |
| Full SDD (spec-driven development) pipeline | Correct, traceable | Minimal | 1-2 hours with audit trail |
Writing Better Acceptance Criteria
Most developers write too few acceptance criteria and focus only on the happy path. Aim for at least 5 criteria per story covering:
- The primary happy-path action
- Validation (what happens with invalid input)
- The error state (what happens when something fails)
- The empty state (what the user sees with no data)
- A non-obvious edge case specific to your feature
These five criteria give AI enough signal to handle the full range of expected behavior.
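The Given/When/Then shape is also easy to check mechanically. Here is a minimal sketch of such a lint; the function name and regex are our own, not part of any tool:

```typescript
// Returns true when a criterion follows the "Given ... when ... then ..." shape.
// Case-insensitive; [\s\S]* allows the criterion to span multiple lines.
function isWellFormedCriterion(criterion: string): boolean {
  return /^given\b[\s\S]*\bwhen\b[\s\S]*\bthen\b/i.test(criterion.trim());
}
```

Running a check like this over each story's criteria catches happy-path-only prose ("the dashboard should look nice") before it dilutes the prompt.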
For the full curriculum on user stories, see the Design & Specify tutorial series.