Mistral Nemo
A 12B-parameter model with a 128k-token context length, built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.
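As a quick orientation, here is a minimal chat-completion sketch against Mistral's hosted API. It assumes the model is exposed under the identifier open-mistral-nemo and that an API key is available in the MISTRAL_API_KEY environment variable; adjust both for your own provider or deployment.

```python
# Minimal sketch: one chat completion against Mistral's hosted API.
# Assumes the model id "open-mistral-nemo" and a key in MISTRAL_API_KEY;
# adjust both for your own provider or deployment.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mistral-nemo",
        "messages": [
            {"role": "user",
             "content": "Summarise the Apache 2.0 license in one sentence."},
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```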
Context window
This model accepts 131K tokens in one request (roughly 98K words of text; the word-count heuristic behind that figure is sketched after the list below).
What fits in one request
- Fits: Short document (about 1,500 words of text)
- Fits: Long document (about 37K words of text)
- Won't fit: Small codebase (about 150K words of text)
- Won't fit: Full novel (about 375K words of text)
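The word figures above come from a rough rule of thumb of about 0.75 words per token; that ratio is an assumption about typical English prose, not a property of the model's tokenizer. A minimal sketch of the arithmetic:

```python
# Rough capacity check against the 131,072-token context window.
# WORDS_PER_TOKEN is a heuristic for English prose, not a tokenizer fact.
CONTEXT_TOKENS = 131_072
WORDS_PER_TOKEN = 0.75  # assumption; real ratios vary by language and content

def fits(words: int) -> bool:
    """True if a document of `words` words likely fits in one request."""
    return words / WORDS_PER_TOKEN <= CONTEXT_TOKENS

for label, words in [("short document", 1_500), ("long document", 37_000),
                     ("small codebase", 150_000), ("full novel", 375_000)]:
    verdict = "fits" if fits(words) else "won't fit"
    print(f"{label}: {verdict}")
```

By this heuristic, 131,072 tokens works out to roughly 98K words, which is where the figure at the top of this section comes from.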
Specifications
Context size, pricing, and release info in one place; a per-request cost sketch follows the list.
- Context window: 131,072 tokens (131K)
- Max output tokens: 4,096 tokens (4K)
- Speed tier: Balanced
- Provider: Mistral
- Release date: Jul 2024
- Input cost: $0.15 per 1M tokens
- Output cost: $0.15 per 1M tokens
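To turn the listed rates into a per-request figure, here is a minimal cost sketch; the $0.15-per-million prices are taken from the table above and may change, so treat them as assumptions:

```python
# Estimate one request's cost from the listed per-million-token rates.
INPUT_USD_PER_M = 0.15   # listed input rate; treat as an assumption
OUTPUT_USD_PER_M = 0.15  # listed output rate; treat as an assumption

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * INPUT_USD_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_USD_PER_M

# A prompt that fills the 131,072-token window plus the 4,096-token output cap:
print(f"${request_cost(131_072, 4_096):.4f}")  # ~$0.0203
```

Even a request that fills the whole context window and the full output budget lands at about two cents at these rates.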
Capabilities
See which features this model supports, such as vision, tools, and streaming; a tool-calling sketch follows the list.
- Tool use (can call external tools and APIs): Supported
- Function calling (structured function call interface): Supported
- Streaming (returns tokens as they are generated): Supported
- Vision (accepts image inputs alongside text): Not supported
- Extended thinking (shows its chain-of-thought reasoning): Not supported
- Web search (can browse the web during a request): Not supported
- Batch API (process many requests asynchronously): Not supported
- Prompt caching (reuse repeated prompt prefixes cheaply): Not supported
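Because tool use and function calling are listed as supported, the sketch below passes one tool schema with a request and inspects any tool call in the reply. It assumes Mistral's hosted chat endpoint, the open-mistral-nemo identifier, and an OpenAI-style tools field; the get_weather function is purely hypothetical.

```python
# Sketch: advertise one (hypothetical) tool and inspect any tool call the
# model makes. Assumes Mistral's hosted chat endpoint and the
# "open-mistral-nemo" model id; the get_weather tool is illustrative only.
import os
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, not part of any real API
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mistral-nemo",
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": tools,
    },
    timeout=60,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
# If the model chose to call the tool, the call arrives as structured JSON
# rather than prose; execute it and send the result back in a follow-up turn.
print(message.get("tool_calls") or message["content"])
```

Streaming, also listed as supported, is typically enabled by adding "stream": true to the same request body, after which tokens arrive incrementally instead of in one final payload.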
Compare Mistral Nemo
Side-by-side context-window comparisons with other models; the percentage arithmetic is sketched after the list.
- Mistral Nemo vs Amazon Titan Text Express: 212% larger context window
- Mistral Nemo vs Amazon Titan Text Lite: 212% larger context window
- Mistral Nemo vs Amazon Titan Text Premier: 212% larger context window
- Mistral Nemo vs Claude Instant: 31% larger context window
- Mistral Nemo vs Anthropic Claude: 31% larger context window
- Mistral Nemo vs Codellama 34b Instruct: 3100% larger context window
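The percentages above are just the ratio of the two context windows minus one, expressed as a percentage. A small sketch, using window sizes implied by the listed figures rather than independently verified numbers:

```python
# How the "% larger context window" figures are computed.
def percent_larger(mine: int, theirs: int) -> int:
    """Percent by which `mine` exceeds `theirs`, rounded to a whole percent."""
    return round((mine / theirs - 1) * 100)

NEMO_CONTEXT = 131_072

# A 4,096-token window (the size implied by the Codellama comparison above)
# gives the listed 3100%; a 100,000-token window (implied by the Claude
# Instant comparison) gives 31%.
print(percent_larger(NEMO_CONTEXT, 4_096))    # 3100
print(percent_larger(NEMO_CONTEXT, 100_000))  # 31
```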