Task Decomposition with AI
AI is effective at breaking large, vague requirements into structured task lists, identifying missing sub-tasks you have overlooked, and mapping dependencies between tasks. Its output is a starting point, not a plan — you must validate that the decomposition matches your system's actual complexity, your team's context, and the work you have already done.
Epic to User Stories
Provide the business goal and known constraints. Without constraints, AI generates an idealised decomposition that ignores your actual technical and organisational reality.
Break the following epic into user stories. Each story should be independently deliverable and testable.
Epic: [DESCRIBE THE EPIC — e.g., "Users can invite team members to their workspace and manage their roles"]
Context:
- Existing system: [WHAT ALREADY EXISTS — e.g., "we have user auth; no team/role model exists yet"]
- Team size: [e.g., "2 backend, 1 frontend, 1 designer"]
- Non-goals for this epic: [e.g., "SSO integration, audit logs, advanced permissions"]
For each story:
- User story format: "As a [role], I want [action] so that [benefit]"
- Acceptance criteria (3-5 bullet points)
- Dependencies on other stories (which story must complete first)
- Complexity estimate: S / M / L (with one-line rationale)
After the stories: identify any technical tasks that are not user stories (infrastructure, migrations, schema changes).
The non-goals list is the single most important input. Without it, AI adds stories for features you have explicitly decided not to build in this iteration.
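One way to make that input hard to forget is to build the prompt from a template whose context fields are required, rather than pasting and editing it by hand each time. A minimal sketch in Python; the function and parameter names are illustrative, not from any library:

```python
from textwrap import dedent

def build_epic_prompt(epic, existing_system, team_size, non_goals):
    """Fill the epic-to-stories template; every context field is required."""
    if not non_goals:
        raise ValueError("list at least one explicit non-goal")
    return dedent(f"""\
        Break the following epic into user stories. Each story should be
        independently deliverable and testable.

        Epic: {epic}

        Context:
        - Existing system: {existing_system}
        - Team size: {team_size}
        - Non-goals for this epic: {", ".join(non_goals)}
        """)

prompt = build_epic_prompt(
    epic="Users can invite team members to their workspace and manage their roles",
    existing_system="user auth exists; no team/role model yet",
    team_size="2 backend, 1 frontend, 1 designer",
    non_goals=["SSO integration", "audit logs", "advanced permissions"],
)
```

Raising on an empty non-goals list turns the advice above into a check the template enforces instead of a rule you have to remember.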
Task-Level Breakdown
Once you have user stories, AI can break a specific story into engineering tasks. Provide your tech stack — the tasks should be specific to what you are actually building with.
Break the following user story into engineering tasks. Each task should be completable in one sitting (2-4 hours maximum).
Story: [PASTE USER STORY]
Tech stack: [e.g., "Node.js / Express API, PostgreSQL, React frontend, hosted on AWS"]
Existing relevant code: [DESCRIBE WHAT EXISTS — e.g., "we have a User model and auth middleware; no invitation model exists"]
For each task:
- Task name (imperative verb, specific)
- Description: what gets built and why
- Definition of done: how you know the task is complete
- Layer: backend / frontend / database / devops / testing
Order tasks by implementation dependency (earlier tasks must complete before later ones can start).
The 2-4 hour scope constraint is important — tasks estimated beyond half a day are usually hiding multiple sub-tasks that will cause planning surprises.
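That ceiling is easy to check mechanically once estimates are attached. A rough sketch, assuming tasks are dicts with hypothetical `name` and `hours` keys:

```python
def flag_oversized(tasks, max_hours=4):
    """Return names of tasks whose estimate exceeds the one-sitting limit."""
    return [t["name"] for t in tasks if t["hours"] > max_hours]

tasks = [
    {"name": "Create invitation model and migration", "hours": 3},
    {"name": "Build invite flow end to end", "hours": 12},  # hiding sub-tasks
]
print(flag_oversized(tasks))  # → ['Build invite flow end to end']
```

Anything flagged goes back through the task-breakdown prompt until every piece fits in one sitting.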
Finding Missing Tasks
AI is good at identifying the categories of work that non-technical stakeholders forget and that developers often defer until the end: migrations, error states, edge cases, monitoring, and documentation.
Review this task list for gaps. Identify work that is missing and would be discovered during or after implementation.
[PASTE YOUR TASK LIST]
Check these categories specifically:
- Error and edge case handling (what happens when things go wrong?)
- Database migrations and data changes
- Email or notification flows triggered by this feature
- Permission and access control changes
- Monitoring, logging, and alerting setup
- API documentation updates
- Feature flag or rollout strategy if gradual release is needed
- Cleanup or deprecation of replaced functionality
For each gap: is this required for launch, or can it be deferred? Explain briefly.
This gap review prompt catches the tasks that otherwise surface as last-minute additions midway through a sprint.
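A crude pre-check before running the prompt is to scan the task list for keywords from each category; any category with no match is worth asking about explicitly. The categories and keywords below are a heuristic sketch, not an exhaustive mapping:

```python
GAP_CATEGORIES = {
    "migrations": ("migration", "schema"),
    "error handling": ("error", "edge case", "failure"),
    "monitoring": ("monitor", "logging", "alert"),
    "documentation": ("docs", "documentation"),
}

def uncovered_categories(task_names):
    """Return gap categories with no keyword match in any task name."""
    joined = " ".join(task_names).lower()
    return [cat for cat, keywords in GAP_CATEGORIES.items()
            if not any(kw in joined for kw in keywords)]

print(uncovered_categories([
    "Add invitation schema migration",
    "Handle expired-invite error state",
]))  # → ['monitoring', 'documentation']
```

Keyword matching misses renamed work, so treat an empty result as "nothing obviously missing", not as proof of coverage.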
Dependency Mapping
Once you have a task list, AI can map the ordering between tasks and surface the critical path. Paste the complete list; a partial list produces a misleading critical path.
Given the following task list, identify all dependencies between tasks and flag the critical path.
[PASTE TASK LIST]
Output:
1. A dependency table: for each task, list which tasks must complete before it can start
2. The critical path: the sequence of dependent tasks that determines the minimum calendar time
3. Tasks that can be done in parallel (no dependency between them)
4. Any external dependencies outside the team (e.g., another team's API, a design that is not ready)
The parallel tasks list is actionable for sprint planning — it shows where you can assign multiple people without conflict.
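The critical path the prompt asks for is the longest path through the task dependency graph, which you can also compute yourself once hour estimates exist. A minimal sketch; the task names and durations are made up for illustration:

```python
from functools import lru_cache

def critical_path(durations, deps):
    """Longest path through a task DAG; deps[t] lists prerequisites of t."""

    @lru_cache(maxsize=None)
    def finish(task):
        # Earliest finish = longest prerequisite chain plus own duration.
        start = max((finish(d) for d in deps.get(task, ())), default=0)
        return start + durations[task]

    end = max(durations, key=finish)
    path = [end]                      # walk back from the last task to finish
    while deps.get(path[-1]):
        path.append(max(deps[path[-1]], key=finish))
    return list(reversed(path)), finish(end)

durations = {"schema": 2, "api": 4, "ui": 3, "email": 2}
deps = {"api": ["schema"], "ui": ["api"], "email": ["schema"]}
print(critical_path(durations, deps))  # → (['schema', 'api', 'ui'], 9)
```

Here "email" is off the critical path, so it can run in parallel with "api" and "ui" — exactly the parallel-work list the prompt surfaces for sprint planning.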
What You Must Validate
AI does well
- Generating initial story and task structure from requirements
- Identifying commonly forgotten task categories (migrations, error states)
- Writing acceptance criteria from a story description
- Ordering tasks by implementation dependency
- Identifying parallel work that does not conflict
You must provide
- Complexity estimates — AI cannot know your codebase's actual debt
- Team capacity and who can work on what
- Business priority among stories
- Whether a dependency is real or can be decoupled with a stub
- Non-goals and out-of-scope decisions
Checklist: Do You Understand This?
- What is the single most important input in an epic-to-stories prompt — and why?
- Why should individual engineering tasks be scoped to 2-4 hours maximum?
- Name three task categories that AI's gap review prompt reliably surfaces that teams commonly forget.
- What is a critical path — and why does identifying it matter for sprint planning?
- Write a task breakdown prompt for a user story: "As an admin, I want to export a CSV of all users so that I can import them into our CRM."
- Why must you validate AI complexity estimates against your own judgement rather than using them directly?