
R1 Distill Llama 70B

DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). Distillation transfers R1's reasoning behavior into the smaller Llama base, and the model scores well across multiple benchmarks:

  • AIME 2024 pass@1: 70.0
  • MATH-500 pass@1: 94.5
  • CodeForces rating: 1633

Fine-tuning on DeepSeek R1's outputs gives the model competitive performance comparable to larger models.

131K context · ~98K words · 131K max output

Context window

This model accepts 131K tokens in one request (~98K words of text).


What fits in one request

  • Short document (~1,500 words): fits
  • Long document (~37K words): fits
  • Small codebase (~150K words): won't fit
  • Full novel (~375K words): won't fit
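
These word counts follow the same rough conversion as the header above (131,072 tokens ≈ 98K words, i.e. about 0.75 words per token). A minimal sketch of that estimate; the ratio is a heuristic for English prose, and actual token counts depend on the tokenizer and the content:

```python
# Rough capacity check using the ~0.75 words-per-token rule of thumb
# (131,072 tokens ≈ 98K words). Real tokenizers vary with content,
# so treat this as an estimate, not a guarantee.

WORDS_PER_TOKEN = 0.75
CONTEXT_TOKENS = 131_072

def fits_in_context(word_count: int) -> bool:
    estimated_tokens = word_count / WORDS_PER_TOKEN
    return estimated_tokens <= CONTEXT_TOKENS

for name, words in [
    ("Short document", 1_500),
    ("Long document", 37_000),
    ("Small codebase", 150_000),
    ("Full novel", 375_000),
]:
    verdict = "fits" if fits_in_context(words) else "won't fit"
    print(f"{name} (~{words:,} words): {verdict}")
```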

Specifications

Context size, pricing, and release info in one place.

Context window
131,072 tokens (131K)
Max output tokens
131,072 tokens (131K)
Speed tier
deep
Provider
Meta
Model family
DeepSeek R1
Release date
Jan 2025
Input cost
$0.20 / 1M tokens
Output cost
$0.60 / 1M tokens
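
At these rates, per-request cost is straightforward arithmetic. A small sketch using the prices from the table above; billing granularity and any provider-side minimums vary, so treat this as illustrative:

```python
# Cost arithmetic at the listed rates: $0.20 per 1M input tokens and
# $0.60 per 1M output tokens. Actual billing rules are set by the
# provider; this only shows the math.

INPUT_COST_PER_M = 0.20
OUTPUT_COST_PER_M = 0.60

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * INPUT_COST_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_COST_PER_M

# A full 131K-token prompt with a 4K-token reply:
print(f"${request_cost(131_072, 4_096):.4f}")  # ≈ $0.0287
```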

Capabilities

See which features this model supports, such as vision, tools, and streaming.

Supported (2)

  • Extended thinking
  • Streaming

Not supported (6)

  • Vision
  • Tool use
  • Function calling
  • Web search
  • Batch API
  • Prompt caching
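
Since streaming is supported but tool use and vision are not, a typical call is a plain chat completion. A minimal streaming sketch, assuming an OpenAI-compatible endpoint; the base URL and the model slug `deepseek/deepseek-r1-distill-llama-70b` are placeholders to replace with your provider's actual values:

```python
# Minimal streaming sketch, assuming an OpenAI-compatible endpoint.
# base_url and the model slug are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.com/v1", api_key="sk-...")

stream = client.chat.completions.create(
    model="deepseek/deepseek-r1-distill-llama-70b",  # assumed slug
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,  # supported, per the capabilities list above
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```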


Frequently asked questions

Short answers about context size and how this model behaves.

How big is R1 Distill Llama 70B's context window?

R1 Distill Llama 70B has a context window of 131K tokens (131,072 tokens). This covers most professional use cases, including large code files, lengthy reports, and long conversation histories.


Powered by Mem0

Use a smaller model. Get better results.

Mem0 gives your AI long-term memory so you stop re-sending context on every call. That means you can use a smaller, faster, cheaper model — and still get better answers.

Example: a multi-turn chat session

  • Without Mem0: ~128K tokens sent (full history, repeated info, old context)
  • With Mem0: ~20K tokens sent (key memories, current turn)

About 80% less to send; works with any model.
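
Conceptually, the savings come from swapping the full transcript for a handful of retrieved memories. The sketch below illustrates that retrieval pattern in plain Python; it is not Mem0's actual API, and the store, scoring function, and names are invented for illustration:

```python
# Illustrative memory-layer pattern (not Mem0's real API): store short
# facts, retrieve only those relevant to the current turn, and send
# them in place of the full conversation history.

def relevance(memory: str, query: str) -> int:
    # Toy scoring via shared words; a real system would use embeddings.
    return len(set(memory.lower().split()) & set(query.lower().split()))

memories = [
    "User's name is Dana; prefers concise answers.",
    "Project uses Python 3.11 and PostgreSQL.",
    "Deadline for the quarterly report is Friday.",
]

def build_prompt(query: str, top_k: int = 2) -> list[dict]:
    relevant = sorted(memories, key=lambda m: relevance(m, query), reverse=True)[:top_k]
    facts = "Known facts:\n" + "\n".join(f"- {m}" for m in relevant)
    return [
        {"role": "system", "content": facts},
        {"role": "user", "content": query},
    ]

print(build_prompt("Which database does my project use?"))
```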