
Sonar Pro Search

Available exclusively through the OpenRouter API, Sonar Pro Search is Perplexity's most advanced agentic search system, designed for deeper reasoning and analysis; it is the same model that powers Pro Search mode on the Perplexity platform. Pricing is token-based plus $18 per thousand requests. Sonar Pro Search adds autonomous, multi-step reasoning on top of Sonar Pro: instead of a single query-and-synthesis pass, it plans and executes entire research workflows using tools.
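
As a sketch of what a call looks like, the request below goes through OpenRouter's OpenAI-compatible chat completions endpoint. The model slug perplexity/sonar-pro-search and the OPENROUTER_API_KEY environment variable are assumptions; check the OpenRouter catalog for the exact identifier.

    # Minimal sketch: one Sonar Pro Search request via the OpenRouter API.
    # Assumption: the model slug is "perplexity/sonar-pro-search" -- verify
    # it against the OpenRouter catalog before use.
    import os

    import requests

    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "perplexity/sonar-pro-search",  # assumed slug
            "messages": [
                {
                    "role": "user",
                    "content": "Research recent EU AI Act enforcement actions and cite sources.",
                }
            ],
            "max_tokens": 8000,  # this model's maximum output
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

Note that the $18-per-thousand-requests fee works out to at least $0.018 per call on top of token charges, so one consolidated research request is usually cheaper than many small ones.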

200K context · ~150K words · 8K max output

Context window

This model accepts 200K tokens in one request (~150K words of text).

Context window size: 200K tokens

What fits in one request

  • Short document (~1,500 words of text): fits
  • Long document (~37K words of text): fits
  • Small codebase (~150K words of text): fits
  • Full novel (~375K words of text): won't fit
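
These verdicts follow the same ratio as the header stats (200K tokens ≈ 150K words, i.e. roughly 0.75 words per token). A back-of-the-envelope check, with that heuristic as its only assumption (real tokenizer counts vary with the content):

    # Rough fit check using the page's own ratio: 200K tokens ~ 150K words,
    # i.e. about 0.75 words per token. Actual tokenizer output will differ.
    CONTEXT_TOKENS = 200_000
    WORDS_PER_TOKEN = 0.75

    def fits_in_context(word_count: int) -> bool:
        """Estimate whether `word_count` words fit in a single request."""
        return word_count / WORDS_PER_TOKEN <= CONTEXT_TOKENS

    for label, words in [
        ("Short document", 1_500),
        ("Long document", 37_000),
        ("Small codebase", 150_000),
        ("Full novel", 375_000),
    ]:
        verdict = "fits" if fits_in_context(words) else "won't fit"
        print(f"{label} ({words:,} words): {verdict}")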

Specifications

Context size, speed tier, and release info in one place.

Context window: 200,000 tokens (200K)
Max output tokens: 8,000 tokens (8K)
Speed tier: balanced
Provider: Perplexity
Release date: Oct 2025

Capabilities

See which features this model supports, such as vision, tools, and streaming.

Supported (4): Vision, Extended thinking, Web search, Streaming
Not supported (4): Tool use, Function calling, Batch API, Prompt caching
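
Because streaming is supported, partial output can be consumed as the model generates it. Below is a sketch using the OpenAI SDK pointed at OpenRouter, with the same assumed model slug as above; note there is no tools parameter, since tool use and function calling are not supported on this model.

    # Streaming sketch via the OpenAI SDK configured for OpenRouter.
    # Assumption: model slug "perplexity/sonar-pro-search".
    import os

    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    stream = client.chat.completions.create(
        model="perplexity/sonar-pro-search",  # assumed slug
        messages=[{"role": "user", "content": "What changed in the Rust 2024 edition?"}],
        stream=True,
    )
    for chunk in stream:
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)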


Frequently asked questions

Short answers about context size and how this model behaves.

What is Sonar Pro Search's context window?
Sonar Pro Search has a context window of 200,000 tokens (200K). This large window is well-suited for long-document analysis, extensive codebases, and multi-session agent workflows.


Powered by Mem0

Use a smaller model. Get better results.

Mem0 gives your AI long-term memory so you stop re-sending context on every call. That means you can use a smaller, faster, cheaper model — and still get better answers.

Example: a multi-turn chat session

Without Mem0 (~128K tokens sent): full history, repeated info, old context
With Mem0 (~20K tokens sent): key memories, current turn

80% less to send — works with any model
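
A minimal sketch of that pattern with the mem0 Python package (pip install mem0ai); the user ID, stored text, and prompt assembly are illustrative, and the search return shape varies between library versions.

    # Illustrative sketch: store distilled memories, retrieve only what's
    # relevant, and send that instead of the full chat history.
    from mem0 import Memory

    memory = Memory()

    # Persist a takeaway from earlier turns rather than the raw transcript.
    memory.add(
        "User is migrating a Django 3 app to Django 5 on Postgres.",
        user_id="alice",  # hypothetical user ID
    )

    # On a later turn, pull back only the memories relevant to the new question.
    hits = memory.search(query="database migration question", user_id="alice")
    results = hits["results"] if isinstance(hits, dict) else hits  # shape varies by version

    context = "\n".join(r["memory"] for r in results)
    prompt = f"Known about this user:\n{context}\n\nNew question: How do I squash migrations?"
    # Send `prompt` plus the current turn instead of ~128K tokens of history.
    print(prompt)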