Reusable AI Components Catalog
Without a catalog, every team reinvents the same components: a customer service system prompt, a RAG pipeline configuration, an agent that calls external APIs. These reinventions introduce inconsistencies, skip security review, and waste engineering time. A catalog of pre-approved, tested components is the primary mechanism by which a Centre of Excellence (CoE) scales its standards across teams without becoming a bottleneck.
What Belongs in the Catalog
Catalog candidates
- Approved system prompts — tested, security-reviewed, versioned
- Reusable agent patterns — tool-calling patterns, retry logic, escalation flows
- Tested RAG configurations — chunking strategy, embedding model, retrieval parameters for specific data types
- Approved MCP server connections — each with scoped credentials and usage guidelines
- Evaluation harnesses — test sets and scoring functions for common task types
- Guardrail configurations — approved input/output filtering setups for common use cases
What does NOT belong in the catalog
- Experimental or untested components — catalog implies approval and reliability
- Use-case-specific components that will never be reused — adds noise
- Components without a named owner — will go stale without accountability
- Components that have not passed security review — catalog implies safety clearance
Catalog Entry Structure
Every catalog entry must contain enough information for a team to evaluate, integrate, and use the component correctly — without needing to ask the original author.
Catalog Entry: Customer Service System Prompt (Base)
---
ID: sys-prompt-customer-service-v3
Version: 3.2.1
Owner: Platform Team (Alice Chen)
Status: Approved
Last tested: 2026-02-15 (on Claude Sonnet 4.6)
Model compatibility: Claude Sonnet 4.6, Claude Haiku 4.5 (degraded quality)
Use case: Customer-facing support interactions for order inquiries,
returns, and general product questions.
What it handles:
- Polite, helpful tone for general queries
- Deflection of out-of-scope requests to human agents
- PII avoidance in responses
What it does NOT handle (must customise):
- Product-specific information (inject via RAG or system prompt extension)
- Refund policy (team-specific; add as appendix)
Known limitations:
- Does not handle multi-language; use translation layer or separate prompt
- May be overly cautious on edge-case complaints; tune escalation threshold
Security review: Passed 2026-01-10 (prompt injection resistance, PII handling)
Change log: v3.2.1 — improved escalation trigger; v3.0 — added PII avoidance block
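Entries structured like the one above can be checked mechanically before publication. A minimal sketch, assuming the field names shown in the example; the `REQUIRED_FIELDS` list and `validate_entry` helper are illustrative, not part of any existing tool:

```python
# Hypothetical required-field check for a catalog entry represented as a dict.
REQUIRED_FIELDS = [
    "ID", "Version", "Owner", "Status", "Last tested",
    "Model compatibility", "Use case", "Security review",
]

def validate_entry(entry: dict) -> list[str]:
    """Return the required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not entry.get(f)]

entry = {
    "ID": "sys-prompt-customer-service-v3",
    "Version": "3.2.1",
    "Owner": "Platform Team (Alice Chen)",
    "Status": "Approved",
    "Last tested": "2026-02-15",
    "Model compatibility": "Claude Sonnet 4.6",
    "Use case": "Customer-facing support interactions",
    # "Security review" deliberately omitted to show detection
}
print(validate_entry(entry))  # -> ['Security review']
```

A check like this can run in CI on the catalog repository, so an entry cannot be merged with a required field missing.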
Discoverability
A catalog no one can find is not a catalog — it is a file graveyard.
The most common catalog failure is poor discoverability. Teams build their own components because they do not know the catalog exists, cannot find the relevant entry, or find the entries too abstract to recognise as useful. Every catalog entry must be searchable by use case and problem description — not just by component name.
- Enforce catalog check as step 1 in the intake pipeline — "did you search the catalog first?"
- Tag entries by problem domain (customer service / document analysis / code review) and data type
- Make the catalog searchable by natural language description — teams search "summarise PDF documents" not "rag-config-v2"
- Publish a monthly digest of new or updated catalog entries to the community of practice channel
- Track catalog usage — which entries are actually being used; unused entries may indicate discoverability failure
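Natural-language search over entries need not be sophisticated to be useful. A minimal sketch using plain word overlap between the query and each entry's description and tags; the entry fields and IDs here are illustrative:

```python
def search_catalog(query: str, entries: list[dict]) -> list[str]:
    """Rank catalog entry IDs by word overlap between the query
    and each entry's description plus tags."""
    query_words = set(query.lower().split())
    scored = []
    for entry in entries:
        text = (entry["description"] + " " + " ".join(entry["tags"])).lower()
        score = len(query_words & set(text.split()))
        if score:
            scored.append((score, entry["id"]))
    return [eid for _, eid in sorted(scored, reverse=True)]

entries = [
    {"id": "rag-config-v2",
     "description": "chunking and retrieval settings to summarise PDF documents",
     "tags": ["document analysis", "rag"]},
    {"id": "sys-prompt-customer-service-v3",
     "description": "customer service base prompt",
     "tags": ["customer service"]},
]
print(search_catalog("summarise PDF documents", entries))  # -> ['rag-config-v2']
```

The point is the interface, not the ranking algorithm: a team searching "summarise PDF documents" finds `rag-config-v2` without knowing its name. A production catalog would likely swap the overlap score for embedding-based retrieval.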
Contribution Process
| Stage | What happens |
|---|---|
| 1. Proposal | Team opens a PR or issue: component name, use case, why it should be shared, who will own it |
| 2. CoE triage | CoE confirms the component is genuinely reusable; assigns a reviewer |
| 3. Technical review | AI engineer reviews component quality; checks for known failure modes; confirms model compatibility |
| 4. Security review | Security confirms no prompt injection vectors, no PII leakage risk, no excessive permissions |
| 5. Evaluation run | Component passes an eval suite (or proposer provides eval results); benchmark documented in entry |
| 6. Publication | Entry added to catalog with all required fields; announced in community channel |
Deprecation Policy
- Freshness policy: every catalog entry must be re-tested after a major model version change or after 6 months, whichever comes first
- Owner accountability: if an owner leaves without transferring ownership, the CoE marks the entry as "unmaintained" and seeks a new owner within 30 days before archiving
- Deprecation notice: deprecated entries remain visible for 60 days with a migration path before removal — teams need time to migrate
- Breaking changes: any change that alters existing behaviour requires a version bump and a migration guide; old version remains available for 90 days
- Archive vs delete: archive entries rather than deleting — teams may be running old versions and need to reference them
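The freshness rule above (re-test after a major model change or after 6 months, whichever comes first) is easy to automate as a scheduled check. A minimal sketch; the 182-day window is an assumption standing in for "6 months":

```python
from datetime import date, timedelta

FRESHNESS_WINDOW = timedelta(days=182)  # roughly 6 months

def needs_retest(last_tested: date, today: date, model_changed_since: bool) -> bool:
    """Re-test after a major model version change or after the
    freshness window elapses, whichever comes first."""
    return model_changed_since or (today - last_tested) > FRESHNESS_WINDOW

# Entry last tested 2026-02-15, as in the example above:
print(needs_retest(date(2026, 2, 15), date(2026, 5, 1), model_changed_since=False))  # False
print(needs_retest(date(2026, 2, 15), date(2026, 9, 1), model_changed_since=False))  # True
print(needs_retest(date(2026, 2, 15), date(2026, 3, 1), model_changed_since=True))   # True
```

Running this nightly over the catalog and opening an issue against each stale entry's owner turns the freshness policy from a document into a process.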
Checklist: Do You Understand This?
- Which fields must every catalog entry contain to be useful to a team adopting the component?
- Why must the catalog check be enforced as step 1 in the intake pipeline?
- What is the most common catalog failure mode — and what does it look like in practice?
- What triggers a re-test of an existing catalog entry?
- Why should deprecated entries be archived rather than deleted?
- Design the catalog entry structure for a reusable RAG configuration for processing legal contracts.