🧠 All Things AI
Advanced

Access Controls for AI Systems

AI systems with tool access are privileged actors in your infrastructure. An agent that can query a database, send emails, and call APIs has the combined access of all those tool permissions — and if manipulated through prompt injection or misconfiguration, will use that access in ways you did not intend. Least-privilege access design is not optional for production AI agents.

Least Privilege Applied to AI

Least privilege principles

  • Each tool the agent uses has its own scoped credential — not a single master key
  • Tool credentials are read-only unless write is explicitly required
  • Agent has access to only the data needed for the defined use case
  • Access is granted per use case, not per agent (same agent in different use cases gets different credentials)
  • All credentials are secrets-managed — never in source code or prompts
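The first and last principles above can be sketched together: one scoped credential per tool, resolved at call time from a secrets store. This is a minimal illustration, with environment variables standing in for a real secrets manager; the tool names, variable names, and `resolve_credential` helper are all hypothetical.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCredential:
    tool: str            # the single tool this credential is scoped to
    secret_ref: str      # reference into the secrets store, never the raw value
    read_only: bool = True

# One scoped credential per tool; no shared master key. Names are illustrative.
TOOL_CREDENTIALS = {
    "customer_db": ToolCredential("customer_db", "AGENT_DB_RO_TOKEN"),
    "email_sender": ToolCredential("email_sender", "AGENT_EMAIL_TOKEN",
                                   read_only=False),
}

def resolve_credential(tool: str) -> str:
    """Fetch a tool's secret at call time; unregistered tools fail closed."""
    cred = TOOL_CREDENTIALS[tool]            # KeyError for unknown tools
    value = os.environ.get(cred.secret_ref)  # stand-in for a secrets manager
    if value is None:
        raise RuntimeError(f"secret {cred.secret_ref} not provisioned")
    return value
```

The point of the indirection is that source code and prompts only ever contain the *reference* to a secret, never its value.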

Common violations

  • One admin API key used for all agent tools — single compromise = full access
  • Agent has write access to data it only needs to read
  • Credentials hardcoded in system prompt or config files checked into git
  • Agent can query all database tables when it only needs one schema
  • No expiry on agent credentials — credentials from deprecated use cases still active

Identity Propagation Through Agent Calls

The user identity that initiated a request must flow through all tool calls the agent makes. Without this, tool call logs cannot be attributed to a user, and access controls on downstream services cannot apply per-user restrictions.

Identity propagation pattern:

1. Auth layer: authenticate user, issue JWT with user_id + permissions

2. Agent context: inject user_id into agent run context (not the prompt)

3. Tool execution: every tool call includes user_id in its context

4. Downstream services: apply row-level security / permission checks using user_id

5. Audit log: every tool call logged with user_id, timestamp, parameters

# Never pass user_id in the prompt — it can be overridden by prompt injection
# Pass it in the run context / metadata that the agent cannot modify

Passing user identity in the prompt rather than the run context is a common anti-pattern that creates a prompt injection vulnerability: the user can instruct the model to act as a different user_id.
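The five-step pattern above can be sketched as follows. The run context is frozen so that neither the model nor tool code can rewrite the verified identity mid-run; the permission model, tool names, and print-based audit sink are all illustrative stand-ins.

```python
import json
import time
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the identity cannot be reassigned mid-run
class RunContext:
    user_id: str            # set by the auth layer from a verified JWT
    permissions: frozenset  # permission names granted to this user

def call_tool(ctx: RunContext, tool: str, **params) -> dict:
    """Execute a tool call with the caller's identity attached."""
    # The permission check uses ctx.user_id, never anything model-supplied.
    if tool not in ctx.permissions:
        raise PermissionError(f"{ctx.user_id} may not call {tool}")
    audit_entry = {"user_id": ctx.user_id, "tool": tool,
                   "params": params, "ts": time.time()}
    print(json.dumps(audit_entry))  # stand-in for the audit log sink
    return audit_entry
```

Because `RunContext` is constructed by the auth layer and passed out-of-band, nothing the user types into the conversation can change which `user_id` the tool calls carry.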

Row-Level Security for RAG

Users should only retrieve documents they are authorised to read. Filtering must happen at the retrieval layer — not in the prompt instruction — because prompt-based access control can be bypassed by prompt injection.

| Approach | How it works | Limitation |
| --- | --- | --- |
| Metadata filter at query time | Each document tagged with allowed_roles or allowed_users; the retrieval query includes a filter on the user's roles | Requires accurate metadata on all documents; filter must be applied server-side, not client-side |
| Separate namespace per access tier | Different collections/namespaces for different security levels; the agent queries only its authorised namespace | Less flexible; documents cannot span multiple access tiers |
| Post-retrieval access check | After retrieval, verify each returned document against an authorisation service before passing it to the model | Latency cost; must handle partial results when some documents are filtered out |
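A minimal in-memory stand-in for the first approach, metadata filtering at query time. A real vector database expresses the same filter as a server-side query clause; the toy term-overlap scoring here merely stands in for vector similarity, and the role set must come from the verified identity, never from the prompt.

```python
def retrieve(query_terms: set, user_roles: set, docs: list) -> list:
    """Return only documents the caller is authorised to read,
    ranked by a toy relevance score (query term overlap)."""
    # Authorisation filter first: a document is eligible only if it
    # shares at least one role with the caller.
    eligible = [d for d in docs if set(d["allowed_roles"]) & user_roles]
    # Then rank; in a real system this is vector similarity, and the
    # filter runs server-side inside the vector store query.
    return sorted(eligible,
                  key=lambda d: len(set(d["text"].split()) & query_terms),
                  reverse=True)
```

Note the ordering: unauthorised documents are removed before ranking, so they can never reach the model's context window regardless of how relevant they score.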

Service Account Hygiene for AI Agents

| Principle | Implementation |
| --- | --- |
| Dedicated service accounts | One service account per agent use case — not developer credentials, not shared agent accounts |
| Scoped permissions | Grant only the permissions required for the specific tools the agent uses |
| Short-lived credentials | Use credential rotation or dynamic secrets (Vault) rather than long-lived static API keys |
| Access review cadence | AI service accounts included in quarterly access review — same as human accounts; deprecated use cases revoked promptly |
| MCP credential scoping | Each MCP server connection uses credentials scoped to that server only; no shared master credential across MCP servers |
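Short-lived credentials can be enforced with an explicit TTL check at use time, so an expired credential (or one belonging to a deprecated use case) fails closed instead of silently continuing to work. This is a sketch with an in-memory dict standing in for a secrets backend such as Vault; the account names and `credential_for` helper are hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class IssuedCredential:
    value: str
    issued_at: float
    ttl_seconds: float = 3600.0  # short-lived by default

    def is_expired(self, now=None) -> bool:
        now = time.time() if now is None else now
        return (now - self.issued_at) > self.ttl_seconds

def credential_for(store: dict, service_account: str) -> str:
    """Fail closed: a missing or expired credential is an error,
    not a fallback to some shared long-lived key."""
    cred = store.get(service_account)
    if cred is None or cred.is_expired():
        raise RuntimeError(f"{service_account}: credential missing or "
                           "expired; re-issue via the secrets backend")
    return cred.value
```

With this shape, revoking a deprecated use case is simply a matter of not re-issuing its credential: the next use fails, rather than relying on someone remembering to delete a static key.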

Checklist: Do You Understand This?

  • Why should each tool in an agent's toolkit have its own scoped credential — not a shared master key?
  • Why is passing user identity in the prompt a security vulnerability — and what is the safe alternative?
  • What is the correct layer for applying row-level security in a RAG system — and why not in the prompt?
  • Design an access control model for a customer support AI agent that can read the customer's account data but should never see other customers' data.
  • How frequently should AI service account access be reviewed — and what triggers an out-of-cycle review?
  • What is the security implication of a deprecated AI use case whose service account is not revoked?