
Palmyra X5

Palmyra X5 is Writer's most advanced model, purpose-built for building and scaling AI agents across the enterprise. It delivers industry-leading speed and efficiency on context windows of up to 1 million tokens, powered by a novel transformer architecture and hybrid attention mechanisms. The result is faster inference and expanded memory for processing large volumes of enterprise data, which is critical for running agents at enterprise scale.

1.0M context · ~780K words · 8K max output
Context window: 1.0M tokens
Max output: 8K tokens

Context window

This model accepts up to 1.0M tokens in one request (roughly 780K words of text).
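As a concrete illustration, here is a minimal sketch of sending one very long document in a single request. It assumes the writerai Python SDK (installed with pip install writer-sdk), a WRITER_API_KEY environment variable, and palmyra-x5 as the model id; check Writer's API reference for the exact method names and parameters, as they may differ from this sketch.

```python
import os

from writerai import Writer

# The SDK reads WRITER_API_KEY from the environment; passing it explicitly here
# just makes the dependency visible.
client = Writer(api_key=os.environ["WRITER_API_KEY"])

# Load a long document; up to ~780K words (~1.0M tokens) fits in one request.
with open("annual_report.txt", encoding="utf-8") as f:
    document = f.read()

response = client.chat.chat(
    model="palmyra-x5",  # assumed model id for Palmyra X5
    messages=[
        {
            "role": "user",
            "content": f"Summarize the key risks in this report:\n\n{document}",
        }
    ],
)

print(response.choices[0].message.content)
```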


What fits in one request

  • Short document (about 1,500 words): fits
  • Long document (about 37K words): fits
  • Small codebase (about 150K words): fits
  • Full novel (about 375K words): fits
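The word counts above follow from a rough heuristic of about 0.75 words per token (1,040,000 tokens ≈ 780K words). Here is a quick back-of-envelope fit check in Python, with that ratio as an assumption rather than an exact tokenizer:

```python
# Rough fit check. Assumes ~0.75 words per token for ordinary English text;
# use the provider's tokenizer when you need exact counts.
WORDS_PER_TOKEN = 0.75
CONTEXT_WINDOW_TOKENS = 1_040_000
MAX_OUTPUT_TOKENS = 8_192


def estimated_tokens(text: str) -> int:
    """Estimate the token count of a text from its word count."""
    return int(len(text.split()) / WORDS_PER_TOKEN)


def fits_in_context(text: str) -> bool:
    """Check whether the prompt plus reserved output fits in the window."""
    return estimated_tokens(text) + MAX_OUTPUT_TOKENS <= CONTEXT_WINDOW_TOKENS


novel = " ".join(["word"] * 375_000)   # ~375K words, about the size of a full novel
print(estimated_tokens(novel))         # ~500K tokens
print(fits_in_context(novel))          # True
```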

Specifications

Context size, pricing, and release info in one place.

Context window: 1,040,000 tokens (1.0M)
Max output tokens: 8,192 tokens (8K)
Speed tier: balanced
Provider: Writer
Release date: Jan 2026

Capabilities

See which features this model supports, such as vision, tools, and streaming.

Supported (1): Streaming

Not supported (7): Vision, Tool use, Function calling, Extended thinking, Web search, Batch API, Prompt caching
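Because streaming is the one supported capability above, responses can be consumed incrementally as they are generated. A minimal sketch, again assuming the writerai Python SDK and the palmyra-x5 model id; the exact shape of the streamed chunks may differ, so verify against the SDK's documentation.

```python
from writerai import Writer

client = Writer()  # reads WRITER_API_KEY from the environment

# stream=True returns an iterator of incremental chunks instead of a single response.
stream = client.chat.chat(
    model="palmyra-x5",
    messages=[{"role": "user", "content": "Draft a one-paragraph status update."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a small delta of generated text; print it as it arrives.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```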


Frequently asked questions

Short answers about context size and how this model behaves.

Palmyra X5 has a context window of 1,040,000 tokens (1.0M). This million-token window can process entire codebases, long legal documents, or book-length texts in a single pass.
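For instance, a small codebase can be concatenated into one prompt and reviewed in a single pass. This is a sketch under the same assumptions as the earlier examples (the writerai SDK, the palmyra-x5 model id, and the ~0.75 words-per-token estimate); the project path and file suffixes are purely illustrative.

```python
from pathlib import Path

from writerai import Writer


def load_codebase(root: str, suffixes=(".py", ".md", ".toml")) -> str:
    """Concatenate source files under root into one prompt, labeled by path."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            parts.append(f"# FILE: {path}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)


codebase = load_codebase("./my_project")   # illustrative path
estimated = int(len(codebase.split()) / 0.75)
print(f"~{estimated:,} estimated tokens")  # must stay under 1,040,000

client = Writer()
response = client.chat.chat(
    model="palmyra-x5",
    messages=[{
        "role": "user",
        "content": f"Review this codebase and list likely bugs:\n\n{codebase}",
    }],
)
print(response.choices[0].message.content)
```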

Powered by Mem0

Use a smaller model.
Get better results.

Mem0 gives your AI long-term memory so you stop re-sending context on every call. That means you can use a smaller, faster, cheaper model — and still get better answers.

Example: a multi-turn chat session

Without Mem0: ~128K tokens sent (full history, repeated info, old context)
With Mem0: ~20K tokens sent (key memories, current turn)

Roughly 80% less to send, and it works with any model.
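As a sketch of that pattern, the following uses the mem0 Python package (pip install mem0ai) to store key facts from earlier turns and retrieve only the ones relevant to the current turn, instead of resending the full history. The default Memory() configuration, the shape of search results, and the identifiers used here are assumptions; they vary between mem0 versions and deployments.

```python
from mem0 import Memory

# Default configuration; production setups typically configure a vector store
# and an embedding/LLM provider (which may require its own API key).
memory = Memory()

# Store key facts from earlier turns instead of keeping the full transcript.
memory.add("The customer is on the Enterprise plan and renews in March.", user_id="acct-42")
memory.add("They asked to migrate their workspace from EU to US hosting.", user_id="acct-42")

# On the current turn, retrieve only the memories relevant to the new question.
question = "What do I need to check before their renewal?"
results = memory.search(question, user_id="acct-42")

# Depending on the mem0 version, search returns a list or a dict with a "results" key.
hits = results["results"] if isinstance(results, dict) else results
context = "\n".join(
    hit["memory"] if isinstance(hit, dict) else str(hit) for hit in hits
)

# A compact prompt: a handful of memories plus the current turn,
# rather than the entire conversation history.
prompt = f"Relevant memories:\n{context}\n\nUser question: {question}"
print(prompt)
```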