Solar Pro 3
Solar Pro 3 is Upstage's powerful Mixture-of-Experts (MoE) language model. With 102B total parameters and 12B active parameters per forward pass, it delivers exceptional performance while maintaining computational efficiency. It is optimized for Korean, with English and Japanese support.
Context window
This model accepts 128K tokens in one request (~96K words of text).
What fits in one request
- Fits: a short document (about 1,500 words)
- Fits: a long document (about 37K words)
- Won't fit: a small codebase (about 150K words)
- Won't fit: a full novel (about 375K words)
Specifications
Context size, speed tier, provider, and release info in one place.
- Context window: 128,000 tokens (128K)
- Speed tier: balanced
- Provider: Upstage
- Release date: January 2026
Capabilities
See which features this model supports, such as vision, tools, and streaming.
- Tool use — Supported. Can call external tools and APIs.
- Function calling — Supported. Structured function call interface.
- Extended thinking — Supported. Shows its chain-of-thought reasoning.
- Streaming — Supported. Returns tokens as they are generated.
- Prompt caching — Supported. Reuses repeated prompt prefixes cheaply.
- Vision — Not supported. Does not accept image inputs alongside text.
- Web search — Not supported. Cannot browse the web during a request.
- Batch API — Not supported. No asynchronous batch processing of requests.
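Tool use, function calling, and streaming are typically exercised through an OpenAI-style chat-completions request. A minimal sketch of such a payload, assuming Solar Pro 3 is served behind an OpenAI-compatible endpoint; the model identifier `solar-pro-3` and the `get_weather` tool are illustrative assumptions, not confirmed API details:

```python
import json

def build_tool_request(user_message: str) -> dict:
    """Build an OpenAI-style chat-completions payload with one tool.

    The model name and tool schema below are hypothetical examples.
    """
    return {
        "model": "solar-pro-3",   # assumed model identifier
        "stream": True,           # streaming: tokens returned as generated
        "messages": [{"role": "user", "content": user_message}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

payload = build_tool_request("What's the weather in Seoul?")
print(json.dumps(payload, indent=2))
```

When the model decides to call the tool, the response carries a structured function call (name plus JSON arguments) rather than plain text, which the caller executes and feeds back in a follow-up message.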
Compare Solar Pro 3
Side-by-side context-window comparisons with other models.
- Solar Pro 3 vs Writer Palmyra X4 — same context window size
- Solar Pro 3 vs Amazon Nova Micro — same context window size
- Solar Pro 3 vs Amazon Titan Text Express — Solar Pro 3 has a 204% larger context window
- Solar Pro 3 vs Amazon Titan Text Lite — Solar Pro 3 has a 204% larger context window
- Solar Pro 3 vs Amazon Titan Text Premier — Solar Pro 3 has a 204% larger context window
- Solar Pro 3 vs Claude Instant — Solar Pro 3 has a 28% larger context window
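The "% larger" figures above come from straightforward ratio arithmetic. A minimal sketch, using Claude Instant's published 100K-token window as the worked example (the 28% figure checks out: 128,000 / 100,000 − 1 = 0.28):

```python
def pct_larger(a_tokens: int, b_tokens: int) -> int:
    """How much larger (in percent) window A is than window B."""
    return round((a_tokens / b_tokens - 1) * 100)

# Solar Pro 3 (128K tokens) vs Claude Instant (100K tokens):
print(pct_larger(128_000, 100_000))  # 28
```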