🧠 All Things AI
Beginner

Requirements to Code

The most reliable workflow for AI-assisted software development starts with a specification, not a conversation. When you hand AI a clear, structured spec, it generates coherent code that stays consistent across files and features. When you describe requirements vaguely in chat, you get fragmented code that drifts. This page covers the spec-driven development workflow — how to turn product requirements into working code with AI as the implementer.

Why Vague Prompts Fail

Most AI coding failures are not model failures — they are specification failures. The model generates exactly what was described, which turns out to be incomplete, inconsistent with other parts of the system, or missing critical constraints. The fix is upstream.

What goes wrong with vague prompts

  • Each response makes independent assumptions — the fifth prompt contradicts the second
  • The AI fills gaps with plausible-but-wrong decisions (database schema, API shapes, error handling)
  • No shared context across a multi-file implementation — functions don't match their callers
  • Scope creep: AI adds features you didn't ask for and omits ones you expected
  • You can't review what you didn't specify — mistakes go unnoticed until runtime

What a spec provides

  • A single source of truth the AI refers back to throughout implementation
  • Explicit decisions so the AI doesn't invent them
  • A reviewable artefact — you can spot problems in the spec before a line of code is written
  • A boundary: AI implements what is in the spec, nothing more
  • A handover document for future maintainers (human or AI)

The Spec-Driven Development Workflow

  1. Write a product requirements brief — what the feature does, who uses it, what success looks like. 1–2 paragraphs. This is for you, not the AI.
  2. Generate a technical spec with AI — prompt the AI to convert your brief into a structured technical specification. Review and correct it before using it as the implementation input.
  3. Generate an implementation plan — ask AI to produce a step-by-step implementation plan (files to create, order of work, interfaces to define first). Review and approve.
  4. Implement one step at a time — give the AI the spec + the current step. Do not move to the next step until the current one builds and passes your tests.
  5. Review each output against the spec — check that the implementation matches the spec. Flag deviations; ask AI to correct them explicitly.
  6. Integrate and test — assemble the pieces; run your test suite; fix failures by pointing AI to the specific failing test + relevant spec section.
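The gated, one-step-at-a-time loop in steps 4–6 can be sketched in code. This is a hypothetical harness, not a real tool: `ask_ai` and `run_tests` are placeholders you would wire to your actual AI client and test runner.

```python
def ask_ai(prompt: str) -> str:
    """Placeholder: send the prompt to your AI tool, return generated code."""
    return f"# code generated for: {prompt}"

def run_tests() -> bool:
    """Placeholder: run your test suite and report whether it passed."""
    return True

def implement(spec: str, steps: list[str], max_retries: int = 3) -> list[str]:
    """Implement each step in order; do not advance until tests pass."""
    outputs: list[str] = []
    for step in steps:
        for _ in range(max_retries):
            code = ask_ai(f"Using this spec:\n{spec}\n\nImplement only: {step}")
            if run_tests():  # gate: current step must pass before moving on
                outputs.append(code)
                break
        else:
            raise RuntimeError(f"step failed after {max_retries} attempts: {step}")
    return outputs
```

The key design choice is the gate: the inner loop retries a single step until it passes, and the outer loop never advances past a failing step, which is exactly what keeps a multi-file implementation consistent.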

Writing an AI-Ready Technical Spec

Prompt to generate a spec from requirements:

Convert the following product requirements into a technical specification.

The spec should include:

- Feature overview (1 paragraph)
- Data models / schema (with field names, types, and constraints)
- API endpoints or function signatures (name, inputs, outputs, errors)
- Business logic rules (explicit — not implied)
- Out of scope (explicitly list what this spec does NOT cover)
- Open questions (things that need a decision before implementation)

Do not add features beyond what the requirements describe.
Flag any ambiguities as open questions rather than resolving them silently.

Requirements: [PASTE YOUR REQUIREMENTS]
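To make the "data models / schema" section concrete: a spec entry with field names, types, and constraints maps almost one-to-one onto code. This is an invented `UserProfile` model for illustration only; the names and constraints are assumptions, not part of any real spec.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UserProfile:
    # Field names, types, and constraints come straight from the spec's
    # data-model section; the AI must not add or remove any of them.
    user_id: str        # UUID string, primary key
    email: str          # unique; validated before save
    display_name: str   # 1-50 characters
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

A spec written at this level of detail leaves the AI nothing to invent: every field has a name, a type, and a constraint it can be reviewed against.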

Critical step: review the spec before coding

Read the generated spec as if you were going to implement it yourself. Every ambiguity in the spec becomes a wrong assumption in the code. Resolve open questions, correct wrong data types, add missing edge cases. A 30-minute spec review prevents hours of rework.

Tool Choice by Task

| Task | Best tool | Why |
| --- | --- | --- |
| Generating the spec from requirements | Claude, ChatGPT | Long-context reasoning; good at structuring ambiguous text |
| Multi-file implementation from spec | Claude Code, Cursor Agent | Can read the whole codebase and write across multiple files consistently |
| In-editor edits to existing code | Cursor, GitHub Copilot | Inline edit and tab completion; sees surrounding file context |
| UI components | v0 (Vercel), Lovable | Trained on UI patterns; generates visual components directly |
| Test generation | Claude Code, Copilot | Reads the implementation and generates tests against it |
| Debugging a specific error | Claude, ChatGPT, Cursor Chat | Explain the error + paste the stack trace + the relevant code |

Implementation Prompt Patterns

Step implementation prompt:

Using the attached technical spec, implement Step 2: [STEP NAME].

Constraints:

- Match the data model defined in Section 2 of the spec exactly — do not add or remove fields
- Use [LANGUAGE/FRAMEWORK] with [SPECIFIC LIBRARIES already in use]
- Write a test for each function
- Do not implement anything from other steps — scope strictly to Step 2

Existing codebase context: [PASTE RELEVANT FILES OR DESCRIBE STRUCTURE]
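A plausible output of a step prompt like this, for a step that saves a user profile: one function scoped to the step, a test alongside it, and nothing from other steps. `save_user_profile` and the in-memory `profiles` dict are hypothetical stand-ins for whatever persistence layer your spec defines.

```python
def save_user_profile(profiles: dict, user_id: str, email: str) -> None:
    """Validate inputs per the spec, then store the profile.

    Raises ValueError on invalid input rather than failing silently.
    """
    if not user_id:
        raise ValueError("user_id is required")
    if "@" not in email:
        raise ValueError("email must be a valid address")
    profiles[user_id] = {"user_id": user_id, "email": email}

def test_save_user_profile():
    db: dict = {}
    save_user_profile(db, "u1", "alice@example.com")
    assert db["u1"]["email"] == "alice@example.com"
```

Note what the scoping constraint buys you: the function validates and saves, and does nothing else — no extra features to review, no drift from the spec.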

Bug fix prompt pattern:

This test is failing: [TEST NAME]
Error: [PASTE EXACT ERROR MESSAGE AND STACK TRACE]
Relevant code: [PASTE THE FAILING FUNCTION AND ITS DEPENDENCIES]
The spec says this function should: [PASTE RELEVANT SPEC SECTION]

Fix only the specific error. Do not refactor other code.

Reviewing AI-Generated Code

You are responsible for the code even if AI wrote it. These are the most common places AI-generated code goes wrong:

Common AI code failures to check

  • Security: SQL injection, missing input validation, exposed secrets in code
  • Error handling: silently swallowed exceptions; no error returned to caller
  • Edge cases: null/empty inputs, concurrent access, large input sizes
  • Spec drift: field names or types that don't match the agreed schema
  • Unnecessary complexity: over-engineered solutions for simple requirements
  • Missing tests: tests that only cover the happy path
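The first check above, made concrete. AI-generated database code often builds SQL by string interpolation, which is injectable; the fix is a parameterised query. A minimal sketch using Python's standard `sqlite3` module with an in-memory table invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str) -> list:
    # BAD: user input is interpolated into the SQL string — injectable
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str) -> list:
    # GOOD: the ? placeholder binds the value; input cannot alter the query
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

Passing the classic payload `' OR '1'='1` to the unsafe version returns every row; the safe version treats it as a literal string and matches nothing. This is exactly the kind of difference a review against the checklist should catch.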

Efficient review checklist

  • Does the code match what the spec says, field-by-field?
  • Can you trace every input from its source to its use — is it validated?
  • What happens when each function fails? Is the error surfaced or swallowed?
  • Are tests meaningful, or do they just assert that the function runs?
  • Ask AI: "Review this code for security vulnerabilities and missing edge cases"
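The "are tests meaningful" check, made concrete: the first test below passes even if the function is completely wrong; the second pins the expected value and an edge case. `add_tax` is a hypothetical function invented for the example.

```python
def add_tax(price: float, rate: float = 0.2) -> float:
    """Hypothetical function under test: price plus a 20% tax, rounded."""
    return round(price * (1 + rate), 2)

def test_weak():
    add_tax(100)  # only asserts the function runs — passes even if wrong

def test_meaningful():
    assert add_tax(100) == 120.0   # pins the expected value
    assert add_tax(0) == 0.0       # edge case: zero price
```

AI-generated test suites frequently look like `test_weak`: they call the function and assert nothing about the result. Rewriting them in the style of `test_meaningful` is often the highest-value part of the review.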

Checklist: Do You Understand This?

  • Why do vague prompts produce inconsistent code across a multi-file implementation?
  • What are the six sections a good AI-generated technical spec should contain?
  • What is the most important thing to do between generating a spec and starting implementation — and why?
  • When would you use Claude Code instead of Cursor, and vice versa?
  • Write a step implementation prompt for a function that saves a user profile to a database, given a spec that defines the user schema.
  • Name three common failure modes in AI-generated code that you should always check during code review.