Deepseek R1 0528 Tput vs Gemini 2.5 Pro
This page is context-first: it leads with how much text each model can take in one request. The Full specs table adds capabilities and limits; the pricing matrix below covers only dollar rates per million tokens from hosts that list these models.
| Model | Context window |
|---|---|
| Deepseek R1 0528 Tput | 128K (128,000 tokens · ~96K words) |
| Gemini 2.5 Pro | 1M (1,000,000 tokens · ~750K words) |
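As a rough feasibility check, here is a minimal Python sketch built on the ~0.75 words-per-token ratio these cards imply (≈96K words per 128K tokens). Real counts depend on each provider's tokenizer, so treat the output as an estimate:

```python
# Rough fit check using this page's ~0.75 words-per-token heuristic.
# Real token counts vary by tokenizer; use the provider's tokenizer for exact numbers.
WORDS_PER_TOKEN = 0.75

WINDOWS = {
    "Deepseek R1 0528 Tput": 128_000,
    "Gemini 2.5 Pro": 1_000_000,
}

def estimated_tokens(word_count: int) -> int:
    return round(word_count / WORDS_PER_TOKEN)

def fits(word_count: int) -> dict[str, bool]:
    tokens = estimated_tokens(word_count)
    return {model: tokens <= window for model, window in WINDOWS.items()}

# Example: a 300-page book at ~500 words/page is ~150K words -> ~200K tokens.
print(fits(150_000))
# {'Deepseek R1 0528 Tput': False, 'Gemini 2.5 Pro': True}
```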
Context window · side by side
Gemini 2.5 Pro has about 7.8× the context window of Deepseek R1 0528 Tput: 681% more capacity (1,000,000 vs 128,000 tokens). Deepseek R1 0528 Tput is 55% cheaper on input.
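Both headline figures fall out of the raw token counts; a two-line check reproduces the arithmetic:

```python
# Arithmetic behind the headline figures.
gemini, deepseek = 1_000_000, 128_000
print(f"{gemini / deepseek:.1f}x")                   # 7.8x
print(f"{(gemini - deepseek) / deepseek:.0%} more")  # 681% more
```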
Quick verdicts
Short takeaways — validate with your own workloads.
Long document processing
Use Gemini 2.5 Pro. Its 1M-token context fits entire documents without chunking (vs 128K).
RAG / high-volume retrieval
Use Deepseek R1 0528 Tput. Input tokens are 55% cheaper — critical when sending large retrieved contexts.
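To see what the chunking verdict means in practice, the sketch below estimates how many calls a long document requires under each window. The 8K-token reserve for the prompt and the answer is an illustrative assumption, not a figure from this page:

```python
import math

# Chunking estimate: reserve part of the window for the system prompt
# and the model's answer, then split the document across calls.
RESERVED_TOKENS = 8_000  # assumption: headroom for prompt + output

def chunks_needed(doc_tokens: int, context_window: int) -> int:
    usable = context_window - RESERVED_TOKENS
    return math.ceil(doc_tokens / usable)

doc = 400_000  # e.g., a large contract bundle or codebase
print(chunks_needed(doc, 128_000))    # 4 calls on Deepseek R1 0528 Tput
print(chunks_needed(doc, 1_000_000))  # 1 call on Gemini 2.5 Pro
```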
Full specs
Context window, output limits, capabilities, and release dates.
| Spec | Deepseek R1 0528 Tput | Gemini 2.5 Pro |
|---|---|---|
| Context window | 128,000 tokens (128K) | 1,000,000 tokens (1M) |
| Max output tokens | N/A | 1,000,000 tokens (1M) |
| Speed tier | Deep | Fast |
| Vision | No | Yes |
| Function calling | Yes | Yes |
| Extended thinking | No | Yes |
| Prompt caching | No | Yes |
| Batch API | No | No |
| Release date | N/A | Jun 2025 |
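If you pick models programmatically, the specs table maps naturally onto a small capability filter. This is an illustrative sketch of that idea, not an API from either provider:

```python
# Illustrative: encode the Full specs table and filter by requirements.
SPECS = {
    "Deepseek R1 0528 Tput": {"context": 128_000, "vision": False,
                              "function_calling": True, "prompt_caching": False},
    "Gemini 2.5 Pro": {"context": 1_000_000, "vision": True,
                       "function_calling": True, "prompt_caching": True},
}

def candidates(min_context: int = 0, **required_flags: bool) -> list[str]:
    return [name for name, spec in SPECS.items()
            if spec["context"] >= min_context
            and all(spec[flag] == want for flag, want in required_flags.items())]

print(candidates(vision=True))            # ['Gemini 2.5 Pro']
print(candidates(function_calling=True))  # both models qualify
```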
Pricing matrix
Dollar rates only: per 1M tokens, from hosts that list these models. For how much text fits, see the context section above, not this table.
| Provider | Deepseek R1 0528 Tput in | Deepseek R1 0528 Tput out | Gemini 2.5 Pro in | Gemini 2.5 Pro out |
|---|---|---|---|---|
| DeepInfra | — | — | $1.25/M | $10.00/M |
| Google Vertex | — | — | $1.25/M | $10.00/M |
| OpenRouter | — | — | $1.25/M | $10.00/M |
| Together AI | $0.550/M | $2.19/M | — | — |
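To turn the listed rates into per-request dollars, here is a hedged sketch assuming the flat list prices above, with no caching, batch, or tiered discounts applied:

```python
# Cost per request at the listed rates (USD per 1M tokens).
# Assumes flat list prices; ignores caching, batching, and tiered pricing.
RATES = {
    "Deepseek R1 0528 Tput (Together AI)": (0.550, 2.19),
    "Gemini 2.5 Pro": (1.25, 10.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# Example: a RAG call sending 50K retrieved tokens, getting 1K back.
for model in RATES:
    print(f"{model}: ${request_cost(model, 50_000, 1_000):.4f}")
# Deepseek R1 0528 Tput (Together AI): $0.0297
# Gemini 2.5 Pro: $0.0725
```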