Use Case Discovery & Prioritisation
By some industry estimates, up to 87% of AI projects never reach production. The most common reason is not technical failure: it is starting from the wrong question. Teams focus on what the technology can do rather than what the business needs. Effective AI use case discovery starts with business strategy and works backward to technology. This page provides a structured process for identifying, evaluating, and prioritising AI opportunities.
Start With Strategy, Not Technology
Before any brainstorming session, your team needs clear answers to three questions:
- What are your top 3 strategic priorities for the next 12 months? Use cases that do not connect to these are low-value by definition.
- Where are the largest gaps between current performance and targets? AI delivers value by closing a measurable gap — not by existing.
- Which processes create the most friction for customers or employees? High-friction, high-volume processes are the richest discovery ground.
A use case that answers all three — it is strategic, it closes a measurable gap, and it removes high friction — is a strong candidate. A use case that answers none is a distraction.
Discovery Methods
Stakeholder Interviews
Interview line-of-business leaders with a consistent script: What takes the most time in your team? What decisions do you make repeatedly? What do you wish you could know faster? Where do errors occur most? Frontline staff often have better use case ideas than executives.
Process Mapping
Map end-to-end workflows and mark where: (a) high-volume repetitive judgment occurs, (b) humans wait for information, (c) errors are caught late and reworked, (d) experts spend time on non-expert tasks. These intersections are AI opportunity hotspots.
Competitive & Peer Benchmarking
Review what competitors and industry peers are deploying. Analyst firms (Gartner, Forrester, McKinsey) publish sector-specific AI use case taxonomies. Fast-follower use cases carry lower risk than pioneering ones because the pattern has already been validated elsewhere.
Data Audit
Identify what data you already have at scale. Available data constrains possible AI use cases — and reveals opportunities. A company with millions of customer service transcripts already has training material for a support automation model.
Evaluation Criteria: What Makes a Use Case Strong?
After generating a longlist of candidates, evaluate each against a consistent set of criteria. The goal is to remove subjective enthusiasm and apply structured scoring.
| Criterion | What to Assess | Score 1–5 |
|---|---|---|
| Strategic alignment | Does it connect to a top-3 priority? | 5 = direct link |
| Business impact | Revenue, cost, risk, or experience value at stake | 5 = £10M+ impact |
| Feasibility | Data availability, technical complexity, time to value | 5 = fast and cheap |
| Data readiness | Volume, quality, labels, accessibility | 5 = data ready now |
| Risk level | Regulatory, reputational, operational risk if AI fails | 5 = very low risk |
| Change readiness | Will users adopt it? Is there a sponsor? Is the workflow stable? | 5 = ready to go |
Multiply or sum scores to rank candidates. The top quartile becomes your shortlist. Weight criteria by your organisation's current situation — a regulated business may weight risk higher; a startup may weight speed to value highest.
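The weighted-sum ranking described above can be sketched in a few lines. This is a minimal illustration: the weights, candidate names, and all example scores are assumptions chosen for demonstration, not values prescribed by the scorecard.

```python
# Weighted scorecard ranking: a sketch of the six-criterion evaluation.
# Weights and example scores are illustrative assumptions only.

CRITERIA = ["strategic alignment", "business impact", "feasibility",
            "data readiness", "risk level", "change readiness"]

# Example weights; a regulated business might weight risk higher.
WEIGHTS = [2.0, 2.0, 1.0, 1.0, 1.5, 1.0]

def weighted_score(scores):
    """Weighted sum of the six 1-5 criterion scores."""
    assert len(scores) == len(CRITERIA) and all(1 <= s <= 5 for s in scores)
    return sum(w * s for w, s in zip(WEIGHTS, scores))

def shortlist(candidates, top_fraction=0.25):
    """Rank candidates by weighted score; keep the top quartile."""
    ranked = sorted(candidates,
                    key=lambda name: weighted_score(candidates[name]),
                    reverse=True)
    keep = max(1, round(len(ranked) * top_fraction))
    return ranked[:keep]

candidates = {
    # hypothetical candidates, scores listed in CRITERIA order
    "support auto-reply": (5, 4, 5, 5, 4, 4),
    "email drafting":     (2, 3, 5, 5, 5, 5),
    "fraud detection":    (4, 5, 2, 3, 2, 3),
    "churn prediction":   (3, 3, 3, 4, 4, 3),
}
print(shortlist(candidates))
```

Adjusting the `WEIGHTS` list is how the situational weighting mentioned above would be applied, e.g. raising the risk weight in a regulated business.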
The ICE Framework
For faster scoring, the ICE framework collapses prioritisation to three dimensions:
I — Impact
How significant is the benefit if this use case succeeds? Scale by financial value, strategic importance, or number of people affected.
C — Confidence
How certain are you it will work? Higher if peers have deployed it, lower if it requires novel AI capability or unprecedented data quality.
E — Ease
How easy is it to build and deploy? Lower for greenfield data infrastructure; higher for wrapping an existing workflow with an LLM API call.
Score each 1–10, multiply I × C × E, rank descending. Simple, fast, defensible.
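The I × C × E calculation is simple enough to sketch directly. The example use cases and their scores below are illustrative assumptions, not recommendations.

```python
# ICE scoring: multiply Impact x Confidence x Ease (each 1-10) and
# rank descending. Example use cases and scores are illustrative.

def ice_score(impact, confidence, ease):
    """Product of the three 1-10 ICE dimensions (range 1-1000)."""
    for v in (impact, confidence, ease):
        if not 1 <= v <= 10:
            raise ValueError("ICE dimensions must be scored 1-10")
    return impact * confidence * ease

use_cases = [
    # (name, impact, confidence, ease)
    ("meeting summarisation",      5, 9, 9),  # proven pattern, easy wrap
    ("customer-facing AI product", 9, 4, 2),  # high value, unproven, hard
    ("labelled-dataset build-out", 6, 7, 5),  # capability builder
]

ranked = sorted(use_cases, key=lambda u: ice_score(*u[1:]), reverse=True)
for name, i, c, e in ranked:
    print(f"{name}: {ice_score(i, c, e)}")
```

Note how the multiplication punishes weakness in any single dimension: the high-impact product scores lowest because low confidence and low ease drag the product down.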
Portfolio Balance: Quick Wins + Bets
A healthy AI portfolio is not just a ranked list of top-scoring use cases. It balances three types:
Quick Win (now)
Delivers measurable value within weeks. Builds internal credibility for AI investment. Keeps sponsors engaged. Examples: LLM-assisted email drafting, meeting summarisation, first-draft document generation.
Capability Builder (medium-term)
Develops the data pipeline, evaluation muscle, or MLOps infrastructure needed for more ambitious use cases. Slower ROI but unlocks future use cases. Examples: building a labelled dataset, deploying evaluation infrastructure, creating a prompt management system.
Strategic Bet (long-term)
High-value, high-complexity. Creates competitive differentiation if it works. Requires sustained investment and tolerance for longer payback periods. Examples: customer-facing AI product, proprietary model trained on internal data, AI-native process redesign.
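A simple balance check can flag a shortlist that is missing one of the three types. This sketch assumes each shortlisted use case has already been tagged with a type; the example portfolio is hypothetical.

```python
# Portfolio balance check: report which of the three use case types
# (quick win, capability builder, strategic bet) a shortlist lacks.
# The example portfolio below is hypothetical.

TYPES = ("quick win", "capability builder", "strategic bet")

def portfolio_gaps(portfolio):
    """Return the use case types missing from a tagged shortlist."""
    present = set(portfolio.values())
    return [t for t in TYPES if t not in present]

portfolio = {
    "meeting summarisation": "quick win",
    "proprietary model":     "strategic bet",
}
print(portfolio_gaps(portfolio))  # this shortlist lacks a capability builder
```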
Common Discovery Traps
Technology-First Discovery
"We have this model, what can we do with it?" This framing almost always produces solutions in search of problems. Start with the business problem, not the technology.
Executive-Only Input
Relying solely on executives misses the day-to-day friction: senior leaders identify strategic problems, but frontline staff identify where time is actually lost. The best quick-win use cases typically come from the people doing the work, not those managing it.
Ignoring Data Reality
Scoring impact and feasibility highly without checking whether the required data actually exists. Data discovery must run in parallel with use case discovery, not after.
All Bets, No Quick Wins
Portfolios weighted entirely toward long-term strategic bets run out of organisational patience before they deliver value. Include at least one near-term win to maintain executive and user trust.
Checklist: Do You Understand This?
- What are the three strategic questions to answer before beginning use case discovery?
- List three discovery methods and what type of use cases each tends to surface.
- What six criteria does a robust use case evaluation scorecard include?
- Explain the ICE framework and when you would use it over a full scorecard.
- What is the difference between a quick win, a capability builder, and a strategic bet?
- Why is technology-first discovery a trap, and how do you avoid it?