AI Intake-to-Governance Pipeline
Not all AI use cases carry equal risk. A chatbot that helps employees draft internal emails is not the same as an AI system that scores loan applications. A governance pipeline that treats all use cases identically will either over-govern low-risk work (creating bottlenecks) or under-govern high-risk work (creating liability). Risk-tiered governance applies proportionate oversight based on what is actually at stake.
Risk Tiers
| Tier | Criteria | Examples | Pipeline |
|---|---|---|---|
| Tier 1 — Low risk | Internal use only; no PII processing; no customer-facing output; uses approved models and catalog components; human reviews all outputs | Internal draft writing assistant; meeting summary tool; internal knowledge base search | Fast path: 3-stage simplified process; target 1-3 days |
| Tier 2 — Medium risk | Customer-facing but not decision-critical; processes non-sensitive data; outputs are informational; humans can override | Customer support chatbot (informational only); product recommendation; content generation for marketing | Standard pipeline: 6-stage process; target 1-2 weeks |
| Tier 3 — High risk | Influences consequential decisions (financial, medical, HR, legal); processes sensitive PII; customer-facing with limited human review; regulated by EU AI Act high-risk classification | Credit scoring AI; medical triage assistant; AI-assisted hiring; fraud detection with automated action | Full pipeline: all stages including legal review and staged rollout; target 4-8 weeks |
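The tiering criteria above can be expressed as a simple decision rule. The sketch below is one illustrative way to encode it; the field names and the exact precedence of the checks are assumptions, not an official rubric, so treat it as a starting point for your own triage logic.

```python
from dataclasses import dataclass

@dataclass
class RiskFactors:
    """Risk factors from the intake form (all assumed field names)."""
    processes_pii: bool = False
    customer_facing: bool = False
    consequential_decisions: bool = False  # financial, medical, HR, legal
    agentic_actions: bool = False          # tool calls or real-world actions
    unapproved_model: bool = False         # new model/vendor not yet approved

def assign_tier(f: RiskFactors) -> int:
    """Map intake risk factors to a provisional tier (1 = low, 3 = high)."""
    # Tier 3: influences consequential decisions, or combines sensitive
    # data with autonomous real-world action.
    if f.consequential_decisions or (f.processes_pii and f.agentic_actions):
        return 3
    # Tier 2: any external exposure, PII processing, agentic behavior,
    # or an unvetted model/vendor.
    if (f.customer_facing or f.processes_pii
            or f.agentic_actions or f.unapproved_model):
        return 2
    # Tier 1: internal-only, approved components, human-reviewed output.
    return 1
```

For example, `assign_tier(RiskFactors(consequential_decisions=True))` returns 3, routing the use case into the full pipeline regardless of its other attributes. Note the result is provisional: stage 2 (risk assessment) still confirms or overrides the tier.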
Pipeline Stages
| Stage | Who performs | Output / gate | Required for tier |
|---|---|---|---|
| 1. Intake | Business owner submits intake form | Use case registered; assigned to tier | All tiers |
| 2. Risk assessment | CoE triage (fast path) or full risk review | Risk tier confirmed; fast-path or full pipeline assigned | All tiers |
| 3. Technical design review | AI engineer reviews architecture, model choice, data flow | Design approved or revision requested | Tier 2, Tier 3 |
| 4. Security review | Security team reviews threat model, data handling, access controls | Security clearance or remediation items | Tier 2, Tier 3 |
| 5. Legal/compliance review | Legal reviews regulatory requirements, DPA, bias risk | Legal clearance; any required controls documented | Tier 3 only |
| 6. Build and evaluation | Product/engineering team builds; CoE evaluates quality against criteria | Evaluation report; pass/fail against success metrics | All tiers (scope proportionate to tier) |
| 7. Staged rollout | Engineering deploys to limited audience; monitors before expanding | Go/no-go for full rollout based on monitoring data | Tier 2, Tier 3 |
| 8. Ongoing review | CoE schedules quarterly reviews of production use cases | Use case remains approved; or changes trigger re-review | All tiers (annual for Tier 1; quarterly for Tier 3) |
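The stage-by-tier matrix in the table lends itself to a lookup structure, so a workflow tool can compute which gates a given use case must clear. This is a minimal sketch mirroring the table above; the data structures are assumptions about how you might store it, not a prescribed schema.

```python
# Stage names follow the pipeline table; numbers are the stage order.
STAGES = {
    1: "Intake",
    2: "Risk assessment",
    3: "Technical design review",
    4: "Security review",
    5: "Legal/compliance review",
    6: "Build and evaluation",
    7: "Staged rollout",
    8: "Ongoing review",
}

# Which stages each tier must pass, per the "Required for tier" column.
REQUIRED_BY_TIER = {
    1: {1, 2, 6, 8},                 # fast path
    2: {1, 2, 3, 4, 6, 7, 8},        # standard pipeline
    3: {1, 2, 3, 4, 5, 6, 7, 8},     # full pipeline, incl. legal review
}

def required_stages(tier: int) -> list[str]:
    """Return the ordered stage names a use case at this tier must clear."""
    return [STAGES[n] for n in sorted(REQUIRED_BY_TIER[tier])]
```

A Tier 1 use case, for instance, skips design, security, and legal review and goes straight from triage to build and evaluation, while only Tier 3 ever reaches legal/compliance review.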
Intake Form Template
AI Use Case Intake Form
Use case name: [SHORT DESCRIPTIVE NAME]
Business owner: [NAME + TEAM]
Business objective: [1-2 sentences — what problem does this solve?]
Model proposed: [e.g., Claude Sonnet 4.6 via Anthropic API]
Data touched: [What data will the AI process? Include PII categories if any]
User population: [Internal employees / specific customer segment / all customers]
Risk factors (check all that apply):
[ ] Processes personal data (PII)
[ ] Customer-facing output
[ ] Influences financial, medical, HR, or legal decisions
[ ] Agentic with tool calls or real-world actions
[ ] Uses a new model or vendor not currently approved
Success metrics: [How will you know this is working? What will you measure?]
Proposed launch date: [Target date]
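To keep the intake form machine-readable so it can feed a use-case register, you might capture it as a typed record with a basic completeness check. The field names below follow the template above; the validation rule (every field except the risk-factor checklist must be non-empty) is an assumption about your process, not a requirement from the template.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class IntakeForm:
    """Machine-readable version of the AI Use Case Intake Form."""
    use_case_name: str
    business_owner: str
    business_objective: str
    model_proposed: str
    data_touched: str
    user_population: str
    risk_factors: list[str] = field(default_factory=list)  # checked boxes
    success_metrics: str = ""
    proposed_launch_date: str = ""

def missing_fields(form: IntakeForm) -> list[str]:
    """Return required fields left blank (an empty risk-factor list is valid)."""
    required = [
        "use_case_name", "business_owner", "business_objective",
        "model_proposed", "data_touched", "user_population",
        "success_metrics", "proposed_launch_date",
    ]
    data = asdict(form)
    return [name for name in required if not str(data[name]).strip()]
```

An intake submission with an empty result from `missing_fields` is ready for stage 2 triage; anything else bounces back to the business owner before it enters the pipeline.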
Ongoing Governance
Use cases do not graduate out of governance
A common mistake is treating the governance pipeline as a one-time approval process. AI use cases need ongoing review because: (1) model providers update models out from under you; (2) use cases evolve beyond their original scope; (3) regulatory requirements change; (4) new risks emerge as usage patterns develop. Schedule quarterly reviews for high-risk use cases and annual reviews for low-risk ones. Any significant change to a use case (new model, expanded user population, new data types) triggers a re-review.
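The re-review trigger can be automated by diffing each production use case against the snapshot that was approved. The sketch below compares the three trigger fields named above; the field names and the dict-based representation are illustrative assumptions.

```python
# Fields whose change since approval triggers a re-review, per the
# "Ongoing Governance" guidance (assumed key names).
TRIGGER_FIELDS = ("model", "user_population", "data_types")

def needs_rereview(approved: dict, current: dict) -> list[str]:
    """Return the trigger fields that changed since the approved snapshot.

    A non-empty result means the use case re-enters the pipeline at
    stage 2 (risk assessment) rather than waiting for its scheduled review.
    """
    return [f for f in TRIGGER_FIELDS if approved.get(f) != current.get(f)]
```

Running this check on every deployment (or on a schedule) catches silent scope creep, such as an internal tool quietly being exposed to customers, between the quarterly or annual review cycles.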
Checklist: Do You Understand This?
- What are the three risk tiers — and name two criteria that place a use case in Tier 3?
- Why does a governance pipeline that treats all use cases identically create problems?
- Which pipeline stages are skipped in the Tier 1 fast path — and what must be true for a use case to qualify?
- What fields should appear on every AI intake form?
- What triggers a re-review of an already-approved use case?
- Classify these use cases: (a) an internal Slack bot that drafts meeting agendas; (b) an AI that flags suspicious transactions for fraud review; (c) a customer-facing chatbot that answers shipping questions with no account data access.