Trinity Mini (free)
Trinity Mini is a 26B-parameter (3B active) sparse mixture-of-experts language model featuring 128 experts with 8 active per token. It is engineered for efficient reasoning over long contexts (131K) with robust function calling.
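For orientation, here is a minimal sketch of querying the model through an OpenAI-compatible chat completions endpoint. The base URL and the model slug `arcee-ai/trinity-mini:free` are assumptions for illustration, not confirmed identifiers; substitute whatever your provider's catalog actually lists.

```python
# Minimal sketch: querying Trinity Mini through an OpenAI-compatible endpoint.
# ASSUMPTIONS: the base URL and model slug below are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed OpenAI-compatible host
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="arcee-ai/trinity-mini:free",  # hypothetical model slug
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of sparse MoE models."}
    ],
)
print(response.choices[0].message.content)
```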
Context window
This model accepts 131K tokens in one request (~98K words of text).
What fits in one request
- Fits: Short document (about 1,500 words)
- Fits: Long document (about 37K words)
- Won't fit: Small codebase (about 150K words)
- Won't fit: Full novel (about 375K words)
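The fits/won't-fit calls above follow a rule of thumb of roughly 0.75 English words per token, which is how 131,072 tokens becomes ~98K words. A minimal sketch of that arithmetic (the ratio is an assumption; real tokenizers vary with content):

```python
# Rough fit check: estimate tokens from a word count, compare to the window.
# ASSUMPTION: ~0.75 words per token is a common rule of thumb for English
# text; actual token counts depend on the tokenizer and the content.
CONTEXT_WINDOW = 131_072
WORDS_PER_TOKEN = 0.75

def fits(word_count: int) -> bool:
    estimated_tokens = word_count / WORDS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW

for label, words in [
    ("Short document", 1_500),    # ~2,000 tokens: fits
    ("Long document", 37_000),    # ~49,000 tokens: fits
    ("Small codebase", 150_000),  # ~200,000 tokens: won't fit
    ("Full novel", 375_000),      # ~500,000 tokens: won't fit
]:
    print(f"{label}: {'fits' if fits(words) else 'will not fit'}")
```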
Specifications
Context size, pricing, and release info in one place.
- Context window: 131,072 tokens (131K)
- Speed tier: Fast
- Provider: Arcee AI
- Release date: Dec 2025
Capabilities
See which features this model supports, such as vision, tools, and streaming. A function-calling sketch follows the list.
- Tool use: Supported (can call external tools and APIs)
- Function calling: Supported (structured function call interface)
- Extended thinking: Supported (shows its chain-of-thought reasoning)
- Streaming: Supported (returns tokens as they are generated)
- Vision: Not supported (image inputs alongside text)
- Web search: Not supported (browsing the web during a request)
- Batch API: Not supported (asynchronous processing of many requests)
- Prompt caching: Not supported (cheap reuse of repeated prompt prefixes)
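Since tool use and function calling are both listed as supported, here is a hedged sketch of passing a tool schema through the same assumed OpenAI-compatible interface. The `get_weather` tool, the base URL, and the model slug are all illustrative, not confirmed.

```python
# Sketch: function calling via an OpenAI-compatible tools array.
# ASSUMPTIONS: placeholder base URL and model slug; the get_weather tool
# is hypothetical and exists only for illustration.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="arcee-ai/trinity-mini:free",  # hypothetical slug
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

# If the model chose to call the tool, the structured call arrives here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

Streaming, also listed as supported, would work through the same call by passing `stream=True` and iterating over the returned chunks.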
Compare Trinity Mini (free)
Side-by-side context-window comparisons with other models; the arithmetic behind the percentage figures is sketched after the list.
- Trinity Mini (free) vs Amazon Titan Text Express
Trinity Mini (free) has a 212% larger context window
- Trinity Mini (free) vs Amazon Titan Text Lite
Trinity Mini (free) has a 212% larger context window
- Trinity Mini (free) vs Amazon Titan Text Premier
Trinity Mini (free) has a 212% larger context window
- Trinity Mini (free) vs Claude Instant
Trinity Mini (free) has a 31% larger context window
- Trinity Mini (free) vs Anthropic Claude
Trinity Mini (free) has a 31% larger context window
- Trinity Mini (free) vs Codellama 34b Instruct
Trinity Mini (free) has a 3100% larger context window
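Each percentage reads as relative window size, 100 × (131,072 / other − 1), so inverting a stated figure recovers the comparison window it implies. A minimal sketch of that arithmetic (it only inverts the numbers above and makes no claim about those models' actual documented limits):

```python
# Invert an "X% larger context window" claim back into the implied window.
# pct = 100 * (131_072 / other - 1)  =>  other = 131_072 / (1 + pct / 100)
TRINITY_WINDOW = 131_072

def implied_window(pct_larger: float) -> float:
    """Window size implied by a 'pct_larger % larger' claim."""
    return TRINITY_WINDOW / (1 + pct_larger / 100)

for name, pct in [
    ("Claude Instant", 31),            # implies ~100K tokens
    ("Codellama 34b Instruct", 3100),  # implies ~4,096 tokens
]:
    print(f"{name}: ~{implied_window(pct):,.0f} tokens")
```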