Nemotron Nano 9B V2
NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA and designed as a unified model for both reasoning and non-reasoning tasks. By default it answers a query by first generating a reasoning trace and then concluding with a final response. This behavior is controlled via the system prompt: if you prefer the final answer without the intermediate reasoning trace, the model can be configured to skip it.
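A minimal sketch of how the reasoning toggle might be wired into a chat request. The `/think` and `/no_think` system-prompt switches shown here are assumptions based on common Nemotron conventions; verify the exact control strings against NVIDIA's model card.

```python
def build_messages(user_prompt, reasoning=True):
    """Build a chat message list that toggles Nemotron's reasoning trace.

    NOTE: the `/think` / `/no_think` switches are assumed control strings;
    confirm them against the official model card before relying on them.
    """
    system = "/think" if reasoning else "/no_think"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

# With reasoning on, the model first emits a reasoning trace, then the answer;
# with the reasoning switch off, it should go straight to the final response.
msgs = build_messages("Summarize this contract.", reasoning=False)
```

The resulting list can be passed as the `messages` payload to any OpenAI-compatible chat endpoint serving the model.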
Context window
This model accepts up to 131K tokens (131,072) in one request, roughly 98K words of text.
What fits in one request
- Fits: short document (about 1,500 words of text)
- Fits: long document (about 37K words of text)
- Won't fit: small codebase (about 150K words of text)
- Won't fit: full novel (about 375K words of text)
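The fits/won't-fit estimates above follow from a rough words-per-token heuristic. The sketch below uses an assumed ratio of about 0.75 words per token for English prose (which matches the ~98K-word figure); a real tokenizer count will vary by content.

```python
CONTEXT_TOKENS = 131_072
WORDS_PER_TOKEN = 0.75  # rough English-prose heuristic, not a tokenizer measurement

def fits(word_count):
    """Rough check: does a document of `word_count` words fit in one request?"""
    return word_count / WORDS_PER_TOKEN <= CONTEXT_TOKENS

print(fits(37_000))    # long document -> True
print(fits(150_000))   # small codebase -> False
```

For anything near the boundary, count tokens with the model's actual tokenizer rather than relying on this heuristic.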
Specifications
Context size, pricing, and release info in one place.
- Context window: 131,072 tokens (131K)
- Speed tier: fast
- Provider: NVIDIA
- Release date: September 2025
Capabilities
See which features this model supports, such as vision, tools, and streaming.
- Tool use: can call external tools and APIs. Supported.
- Function calling: structured function call interface. Supported.
- Extended thinking: shows its chain-of-thought reasoning. Supported.
- Streaming: returns tokens as they are generated. Supported.
- Vision: accepts image inputs alongside text. Not supported.
- Web search: can browse the web during a request. Not supported.
- Batch API: process many requests asynchronously. Not supported.
- Prompt caching: reuse repeated prompt prefixes cheaply. Not supported.
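Since function calling is supported, here is a sketch of a tool definition in the OpenAI-compatible function-calling format that serving stacks such as vLLM commonly expose for models like this one. The `get_weather` tool is hypothetical, used only to illustrate the schema shape.

```python
# A minimal tool definition in the OpenAI-compatible function-calling format.
# The tool itself is hypothetical, for illustration only.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```

Passing `tools=[get_weather_tool]` with a chat-completion request lets the model return a structured function call (name plus JSON arguments) instead of prose when it decides the tool applies.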
Compare Nemotron Nano 9B V2
- Nemotron Nano 9B V2 vs Amazon Titan Text Express: 212% larger context window
- Nemotron Nano 9B V2 vs Amazon Titan Text Lite: 212% larger context window
- Nemotron Nano 9B V2 vs Amazon Titan Text Premier: 212% larger context window
- Nemotron Nano 9B V2 vs Claude Instant: 31% larger context window
- Nemotron Nano 9B V2 vs Anthropic Claude: 31% larger context window
- Nemotron Nano 9B V2 vs Codellama 34b Instruct: 3100% larger context window
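The percentages above compare context-window sizes as (a / b - 1) x 100. A quick sketch; the 4,096-token figure for the comparison model is an assumption implied by the listed 3100%, not a verified specification.

```python
def pct_larger(a_tokens, b_tokens):
    """Percentage by which context window `a_tokens` exceeds `b_tokens`."""
    return round((a_tokens / b_tokens - 1) * 100)

# 131,072 tokens vs an assumed 4,096-token window:
print(pct_larger(131_072, 4_096))  # 3100
```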