Hermes3 8b
Context window
This model accepts up to 131K tokens in a single request, roughly 98K words of English text.
What fits in one request
- Fits: Short document (about 1,500 words)
- Fits: Long document (about 37K words)
- Won't fit: Small codebase (about 150K words)
- Won't fit: Full novel (about 375K words)
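Token counts vary with the tokenizer, so word counts are only an estimate. Below is a minimal fit check, assuming an average of about 0.75 words per token (the same heuristic behind the ~98K-word figure above); real counts depend on the model's tokenizer.

```python
# Rough fit check against a 131,072-token context window.
# Assumption: ~0.75 words per token for average English text.

CONTEXT_WINDOW = 131_072
WORDS_PER_TOKEN = 0.75  # heuristic, not exact

def estimated_tokens(text: str) -> int:
    """Estimate token count from word count."""
    return round(len(text.split()) / WORDS_PER_TOKEN)

def fits(text: str, reserve_for_output: int = 4_096) -> bool:
    """True if the prompt likely fits, leaving room for the reply."""
    return estimated_tokens(text) <= CONTEXT_WINDOW - reserve_for_output

novel = "word " * 375_000   # ~375K words -> ~500K tokens
print(fits(novel))          # False: a full novel won't fit
```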
Specifications
Context size, pricing, and provider details in one place.
- Context window: 131,072 tokens (131K)
- Max output tokens: 131,072 tokens (131K)
- Speed tier: Fast
- Provider: Nous Research
- Input cost: $0.025 per 1M tokens
- Output cost: $0.040 per 1M tokens
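At these rates, a request costs input_tokens × $0.025 per 1M plus output_tokens × $0.040 per 1M. A quick sketch of the arithmetic:

```python
# Cost estimate at the listed rates:
# $0.025 per 1M input tokens, $0.040 per 1M output tokens.
INPUT_PER_M = 0.025
OUTPUT_PER_M = 0.040

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-million rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Filling the entire 131K window and generating 2K tokens
# still costs well under a cent:
print(f"${request_cost(131_072, 2_048):.6f}")  # $0.003359
```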
Capabilities
See which features this model supports, such as vision, tools, and streaming.
- Tool use (call external tools and APIs): Supported
- Function calling (structured function call interface): Supported
- Streaming (returns tokens as they are generated): Supported
- Vision (accepts image inputs alongside text): Not supported
- Extended thinking (shows chain-of-thought reasoning): Not supported
- Web search (browses the web during a request): Not supported
- Batch API (processes many requests asynchronously): Not supported
- Prompt caching (reuses repeated prompt prefixes cheaply): Not supported
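As a minimal sketch of how the supported features combine in practice, assuming the model is served behind an OpenAI-compatible endpoint: the base_url, API key, model id, and get_weather tool below are all placeholders, not confirmed values for any particular provider.

```python
from openai import OpenAI

# Placeholders: substitute your provider's actual endpoint and model id.
client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Streaming and tool use together: text (or tool-call deltas)
# arrives incrementally instead of in one final response.
stream = client.chat.completions.create(
    model="hermes-3-8b",  # placeholder model id
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
```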
Compare Hermes3 8b
Open a side-by-side comparison with one click.
- Hermes3 8b vs Amazon Titan Text Express
Hermes3 8b has 212% larger context window
- Hermes3 8b vs Amazon Titan Text Lite
Hermes3 8b has 212% larger context window
- Hermes3 8b vs Amazon Titan Text Premier
Hermes3 8b has 212% larger context window
- Hermes3 8b vs Claude Instant
Hermes3 8b has 31% larger context window
- Hermes3 8b vs Anthropic Claude
Hermes3 8b has 31% larger context window
- Hermes3 8b vs Codellama 34b Instruct
Hermes3 8b has 3100% larger context window
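Each figure above is the relative difference (this_window / other_window − 1) × 100. A quick check, assuming the catalog lists Claude Instant at 100,000 tokens and Codellama 34b Instruct at 4,096 tokens (those base values are inferred from the percentages, not stated on this page):

```python
def pct_larger(a: int, b: int) -> int:
    """How much larger context window a is than b, as a percent."""
    return round((a / b - 1) * 100)

print(pct_larger(131_072, 100_000))  # 31   (vs Claude Instant)
print(pct_larger(131_072, 4_096))    # 3100 (vs Codellama 34b Instruct)
```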