Contractual Controls for AI Vendors
Contracts are the primary mechanism for allocating risk between your organisation and AI vendors. Standard enterprise software contracts were not designed for AI systems — they do not address model behaviour changes, training data use, or the unique liability profile of AI outputs. When a vendor deploys a model update that changes behaviour, or when an AI output causes harm, the contract determines whether you have recourse. Ensuring contracts contain the right provisions before signing is significantly easier than negotiating remedies after harm occurs.
Data Processing Agreement (DPA)
A DPA is mandatory under GDPR for any vendor that processes personal data on your behalf as a data processor. For AI vendors, a standard DPA template is often insufficient — the following AI-specific provisions must be explicitly negotiated:
- Training data prohibition: Explicit clause prohibiting the vendor from using your data (inputs, outputs, and logs) to train, fine-tune, or improve their models without separate written consent. Many standard terms permit this by default.
- Sub-processor obligations: List of approved sub-processors; requirement to notify and obtain consent before adding new sub-processors who process your data.
- Data retention and deletion: Maximum retention period; confirmed deletion of specific records on request (right to erasure); deletion logs provided on request.
- Data residency: Geographic restrictions on where data is stored and processed; restrictions on cross-border transfers if required by GDPR or sectoral regulations.
- Breach notification: a contractual 24–72 hour SLA for notifying you of security incidents involving your data. GDPR only requires processors to notify the controller "without undue delay"; the 72-hour Article 33 deadline applies to your report to the supervisory authority as controller, so the vendor's notification window must leave you time to meet it.
- Records of processing: Vendor maintains the records of processing activities required under GDPR Article 30 and provides evidence on request.
Audit Rights
Contractual audit rights give your organisation the ability to verify vendor compliance with their contractual obligations and regulatory requirements. For AI vendors, audit rights must cover both security and AI governance:
What audit rights should cover
- Right to receive current SOC 2 Type II report annually (not just "upon reasonable request" — which vendors routinely delay)
- Right to conduct or commission third-party security assessments with 30 days' notice
- Right to receive answers to security questionnaires within defined timescales
- Right to audit compliance with DPA obligations — specifically data retention and deletion
What vendors try to limit
- Substituting third-party reports for direct audit access — acceptable for Tier 1 cloud providers; insufficient for bespoke AI vendors
- Limiting audit scope to "information security only" — excluding data handling and model behaviour audit rights
- Imposing audit costs entirely on the customer — standard practice is shared cost for reasonable audit activity
AI-Specific SLA Provisions
Standard uptime SLAs do not capture the dimensions of AI service quality that matter most. Negotiate SLA provisions that cover:
| SLA dimension | What to specify | Why it matters for AI |
|---|---|---|
| Availability | 99.9% uptime per calendar month; measured per API endpoint; credits for breach | Standard — but specify that "degraded mode" (reduced capability) counts as partial downtime |
| Model version stability | Minimum notice period before model updates are deployed (e.g., 30 days); option to pin to a specific model version; deprecated versions available for defined period | Model updates can silently change output behaviour — critical for validated AI systems in regulated sectors |
| Latency | P99 latency at defined throughput; degradation notifications | Real-time AI applications (customer-facing chat, fraud detection) have hard latency requirements |
| Support response | Incident response SLA by severity; dedicated technical contact for Enterprise tier | AI incident response requires vendor cooperation to diagnose model-level failures |
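The availability row above specifies that "degraded mode" should count as partial downtime. A minimal sketch of how such a clause could be computed at month end, assuming an illustrative 0.5 weighting for degraded minutes and illustrative credit tiers (both are negotiated terms, not industry standards):

```python
# Hypothetical SLA credit calculation: degraded-capability minutes count
# as partial downtime. The 0.5 weighting, thresholds, and credit tiers
# are illustrative placeholders for negotiated contract terms.

MINUTES_PER_MONTH = 30 * 24 * 60  # simplified 30-day month

def availability(full_outage_min: float, degraded_min: float,
                 degraded_weight: float = 0.5) -> float:
    """Availability as a percentage, counting degraded mode as
    partial downtime rather than full uptime."""
    effective_downtime = full_outage_min + degraded_weight * degraded_min
    return 100.0 * (1 - effective_downtime / MINUTES_PER_MONTH)

def service_credit(avail_pct: float) -> int:
    """Credit as a percentage of the monthly fee; tiers are illustrative."""
    if avail_pct >= 99.9:
        return 0
    if avail_pct >= 99.0:
        return 10
    if avail_pct >= 95.0:
        return 25
    return 50

# 20 minutes of outage plus 120 degraded minutes at 0.5 weight
# = 80 effective downtime minutes, i.e. roughly 99.81% availability.
a = availability(full_outage_min=20, degraded_min=120)
```

Without the weighting term, the same month would report 99.95% availability and no credit, which is why the degraded-mode definition is worth negotiating explicitly.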
Intellectual Property Clauses
IP ownership for AI outputs is legally unsettled and contractually critical. Ensure contracts address:
- Output ownership: The vendor should not claim ownership over outputs generated from your prompts and your data. Confirm that outputs are assigned to you (or at minimum, that the vendor has no claim).
- Input data ownership: Your data is yours. Confirm the vendor has no right to license, publish, or use your inputs beyond providing the contracted service.
- Fine-tuned model ownership: If you fine-tune the vendor's model using your data, confirm you own the fine-tuned weights. Some vendors claim ownership of fine-tuned adaptors even when the training data is yours.
- IP indemnification: The vendor should indemnify you against IP infringement claims arising from AI-generated outputs. Some enterprise contracts include this; consumer terms do not. Critical for code generation and content creation use cases.
- Copyright in training data: Some vendor contracts acknowledge the ongoing legal uncertainty about training data copyright and indemnify customers from related claims — negotiate this for high-volume generative AI use cases.
Exit Provisions
Vendor lock-in is a significant risk with AI systems, particularly for fine-tuned models and RAG-based systems that store proprietary data in vendor-controlled infrastructure:
Exit provisions to negotiate
- Data portability: export of all customer data in a standard, machine-readable format on request or contract termination
- Model export: for fine-tuned models trained on your data, export of model weights or adaptors
- Transition period: minimum 90-day transition period after termination notice during which data export and migration can occur
- Source code escrow: for critical AI systems, escrow of the code base with a third party — released if the vendor ceases operations
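The data portability provision is worth exercising before you actually need it. A minimal sketch of an exit-readiness check, assuming the vendor exports to JSONL (one JSON record per line); the file name and expected record count are illustrative:

```python
# Hypothetical exit-readiness check: verify a vendor's data export is
# machine-readable (valid JSONL) and complete against your own record
# count. Run it during onboarding, not only at termination.
import json

def verify_export(path: str, expected_records: int) -> bool:
    """Return True if every non-blank line parses as JSON and the
    record count matches expectations -- a portability smoke test."""
    count = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            json.loads(line)  # raises ValueError on a malformed record
            count += 1
    return count == expected_records
```

A vendor that cannot pass a check like this during the contract term is unlikely to deliver a usable export under the time pressure of an exit.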
Lock-in risk indicators
- Vendor uses proprietary API format with no standard-compatible alternative
- Fine-tuned models cannot be exported from the vendor's platform
- Vector store or knowledge base is locked to the vendor's infrastructure
- No SLA for data export on termination — vendor can delay indefinitely
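On the engineering side, the first indicator can be mitigated by routing all model calls through an interface your organisation owns, so a vendor swap touches one adapter rather than every call site. A minimal sketch using hypothetical stand-in clients (not real vendor SDKs):

```python
# Hypothetical vendor-abstraction layer: application code depends only
# on the CompletionClient interface, so switching vendors means writing
# one new adapter. Both client classes below are illustrative stand-ins.
from typing import Protocol

class CompletionClient(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorAClient:
    """Stand-in for the incumbent vendor's proprietary SDK."""
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"

class VendorBClient:
    """Stand-in for the replacement vendor, swapped in at exit."""
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

def answer(client: CompletionClient, prompt: str) -> str:
    # Callers never import a vendor SDK directly.
    return client.complete(prompt)
```

This does not remove contractual lock-in (fine-tuned weights and vector stores still need the export provisions above), but it keeps the switching cost of the API surface itself low.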
Checklist: Do You Understand This?
- What AI-specific provision must be added to a standard DPA to prevent training data use?
- What audit rights are insufficient for a high-risk AI vendor and what should you insist on instead?
- Why is model version stability an important SLA dimension for AI systems in regulated sectors?
- Who owns the fine-tuned model weights if you fine-tune a vendor's model using your data — and how is this determined?
- Name three exit provisions that reduce AI vendor lock-in risk.