Models on Bedrock
Bedrock hosts models from multiple providers under one API. You reference each model by its ARN or model ID string. The same Converse API call works with any model — you swap the model ID and the rest of your code stays the same.
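That swap-the-ID property is easy to see in code. Below is a minimal sketch using boto3's Converse API; the `ask` helper and the 512-token cap are illustrative choices, and a real call needs AWS credentials plus model access enabled in the console.

```python
def build_messages(prompt: str) -> list:
    """Build a Converse-API message list from a plain prompt string."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask(model_id: str, prompt: str, region: str = "us-east-1") -> str:
    """Send one prompt to any Bedrock model via the Converse API.

    Requires AWS credentials and the model enabled under Model access.
    """
    import boto3  # imported here so the pure helpers above work without it

    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.converse(
        modelId=model_id,
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 512},
    )
    # Converse responses share one shape regardless of provider
    return resp["output"]["message"]["content"][0]["text"]

# Swapping providers is just swapping the model ID string:
# ask("anthropic.claude-sonnet-4-5", "Summarise this ticket...")
# ask("amazon.nova-lite-v1:0", "Summarise this ticket...")
```

Because every provider's response comes back in the same Converse shape, the parsing line never changes when you switch models.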
Anthropic Claude
Claude models are the most widely used third-party models on Bedrock. Anthropic and AWS have a deep partnership — Anthropic uses AWS infrastructure for training, and AWS gets early access to Claude models.
| Model | Bedrock Model ID | Best For |
|---|---|---|
| Claude Opus 4.5 | anthropic.claude-opus-4-5 | Complex reasoning, research, long documents |
| Claude Sonnet 4.5 | anthropic.claude-sonnet-4-5 | Balanced speed + quality, most production workloads |
| Claude Haiku 3.5 | anthropic.claude-haiku-3-5 | High-throughput, cost-sensitive, classification |
| Claude 3 Sonnet / Opus | anthropic.claude-3-* | Previous generation, still widely available |
Note: Extended thinking (budget tokens) and some streaming features may lag behind the direct Anthropic API by weeks to months. Check Bedrock's feature parity page before building features that depend on them.
Amazon Nova
Amazon's own model family, launched November 2024 (Nova 1) with Nova 2 following in 2025. Nova models are deeply integrated with Bedrock features and often the fastest to get new Bedrock capabilities.
| Model | Best For | Context |
|---|---|---|
| Nova Micro | Text-only, ultra-fast, cheapest Nova. Best for classification, extraction, simple Q&A. | 128K |
| Nova Lite | Multimodal (text + images + video). Fast and cheap with visual understanding. | 300K |
| Nova Pro | Most capable Nova. Multimodal, strong reasoning, tool use, long context. | 300K |
Nova 2 (2025) added Nova Premier (highest capability), Nova Canvas (image generation), and Nova Reel (video generation up to 2 minutes). Nova models are only available on Bedrock — not via any other API.
Meta Llama
Llama 3.1 (8B, 70B, 405B), Llama 3.2 (multimodal 11B, 90B + edge 1B, 3B), and Llama 3.3 70B are all available, hosted by Bedrock so you never manage the weights yourself. Useful when you need an open model for compliance reasons (some orgs require open weights for auditability) but don't want to run self-hosted inference infrastructure.
Other Providers
Mistral
Mistral 7B, Mistral Large, Mistral Small. Strong for European compliance use cases and multilingual workloads.
Cohere
Command R+ for RAG-optimised generation, Embed v3 for embeddings. Cohere embeddings integrate directly with Bedrock Knowledge Bases.
AI21 Labs
Jamba models (SSM-Transformer hybrid). Strong on long-context tasks with efficient memory use.
Stability AI
Stable Diffusion XL and SD3 for image generation. Bedrock wraps these under the same API surface.
How to Select
You must explicitly enable each model in the Bedrock console before using it; models are not active by default. Go to Model access in the AWS console and request access. Most requests are approved instantly; some (certain Claude versions) require a brief review.
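If you'd rather check from code than click through the console, the control-plane `bedrock` client (distinct from `bedrock-runtime`) exposes `list_foundation_models`. A sketch, with the helper names `active_models` and `list_active` being my own; note this reports what the region offers, while your account's access grants are still managed on the Model access page.

```python
def active_models(summaries: list, provider: str) -> list:
    """Filter model summaries (the shape returned by list_foundation_models)
    down to ACTIVE models from a single provider."""
    return [
        m["modelId"]
        for m in summaries
        if m.get("providerName") == provider
        and m.get("modelLifecycle", {}).get("status") == "ACTIVE"
    ]

def list_active(provider: str, region: str = "us-east-1") -> list:
    """List ACTIVE model IDs for one provider in one region.

    Requires AWS credentials; uses the control-plane bedrock client.
    """
    import boto3  # imported here so active_models stays usable without it

    bedrock = boto3.client("bedrock", region_name=region)
    resp = bedrock.list_foundation_models(byProvider=provider)
    return active_models(resp["modelSummaries"], provider)
```

Calling a model that appears here but hasn't been granted access still fails at invoke time, so treat this as an availability check, not an entitlement check.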
In code, reference models by their model ID string (e.g. anthropic.claude-sonnet-4-5) or a cross-region inference profile ARN if you need capacity routing.
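Cross-region inference profile IDs follow a simple convention: the base model ID prefixed with a geography code such as `us.`, `eu.`, or `apac.`. A sketch assuming that convention; check the console for the exact profile IDs available in your account.

```python
# Geography prefixes used by cross-region inference profiles (an assumption
# based on the common naming convention; verify against your console).
VALID_PREFIXES = {"us", "eu", "apac"}

def inference_profile_id(model_id: str, geo: str = "us") -> str:
    """Turn a base model ID into a cross-region inference profile ID."""
    if geo not in VALID_PREFIXES:
        raise ValueError(f"unknown geography prefix: {geo}")
    return f"{geo}.{model_id}"

# inference_profile_id("anthropic.claude-sonnet-4-5")
# -> "us.anthropic.claude-sonnet-4-5"
```

Pass the resulting profile ID wherever a model ID is accepted (e.g. the `modelId` field of a Converse call) and Bedrock routes the request across regions for capacity.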
Checklist: Do You Understand This?
- What are the three Amazon Nova tiers and how do they differ?
- Why might a team choose Llama on Bedrock vs Claude on Bedrock?
- What must you do in the AWS console before calling a new model via the API?
- What is cross-region inference and when would you use it?