
GLM 4.6V

GLM-4.6V is a large multimodal model designed for high-fidelity visual understanding and long-context reasoning across images, documents, and mixed media. It supports up to 128K tokens, processes complex page layouts and charts directly as visual inputs, and integrates native multimodal function calling to connect perception with downstream tool execution. The model also enables interleaved image-text generation and UI reconstruction workflows, including screenshot-to-HTML synthesis and iterative refinement.
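A minimal sketch of sending a page scan for visual analysis is shown below. It assumes an OpenAI-compatible chat endpoint; the `base_url` and the `glm-4.6v` model ID are illustrative placeholders, not values confirmed on this page.

```python
# Hypothetical sketch: one document page sent to GLM-4.6V as a visual input.
# Assumes an OpenAI-compatible endpoint; base_url and model ID are placeholders.
import base64

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

# Encode a chart or page scan as a data URL for the image content part.
with open("report_page.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="glm-4.6v",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text",
             "text": "Summarize this page's chart and list its key figures."},
        ],
    }],
)
print(response.choices[0].message.content)
```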

131K context · ~98K words · 33K max output

Context window

This model accepts 131K tokens in one request (~98K words of text).


What fits in one request

  • Short document (about 1,500 words of text): Fits
  • Long document (about 37K words of text): Fits
  • Small codebase (about 150K words of text): Won't fit
  • Full novel (about 375K words of text): Won't fit

Specifications

Context size, pricing, and release info in one place.

Context window: 131,072 tokens (131K)
Max output tokens: 32,768 tokens (33K)
Speed tier: balanced
Provider: Z Ai
Release date: Dec 2025
Input cost: $0.30 per 1M tokens
Output cost: $0.90 per 1M tokens
Cached input: $0.055 per 1M tokens
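With those rates, the cost of a single request is simple arithmetic. In the sketch below the token counts are hypothetical, and cached tokens are assumed to be billed at the cached-input rate instead of the full input rate:

```python
# Worked cost example using the listed per-million-token rates.
INPUT_RATE = 0.30    # $ per 1M input tokens
OUTPUT_RATE = 0.90   # $ per 1M output tokens
CACHED_RATE = 0.055  # $ per 1M cached input tokens

def request_cost(input_tokens: int, output_tokens: int,
                 cached_tokens: int = 0) -> float:
    """Dollar cost of one request; cached tokens replace full-price input."""
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_RATE
            + cached_tokens * CACHED_RATE
            + output_tokens * OUTPUT_RATE) / 1_000_000

# Hypothetical near-full-context request: 100K input (60K cached), 10K output.
print(f"${request_cost(100_000, 10_000, cached_tokens=60_000):.4f}")  # $0.0243
```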

Capabilities

See which features this model supports, such as vision, tools, and streaming.

Supported (6)
  • Vision
  • Tool use
  • Function calling
  • Extended thinking
  • Streaming
  • Prompt caching

Not supported (2)
  • Web search
  • Batch API
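In practice, tool use and multimodal function calling look roughly like the sketch below. As above, this assumes an OpenAI-compatible endpoint; the `open_url` tool, `base_url`, and model ID are made-up illustrations rather than a documented API:

```python
# Hypothetical multimodal function-calling sketch for GLM-4.6V.
# Assumes an OpenAI-compatible endpoint; tool schema and IDs are illustrative.
import json

from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "open_url",  # hypothetical tool
        "description": "Open a URL extracted from a screenshot.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.6v",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": "https://example.com/screenshot.png"}},
            {"type": "text",
             "text": "Find the signup link in this screenshot and open it."},
        ],
    }],
    tools=tools,
)

# If the model chose to call the tool, its arguments arrive as a JSON string.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```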


Frequently asked questions

Short answers about context size and how this model behaves.

What is GLM 4.6V's context window?
GLM 4.6V has a context window of 131K tokens (131,072 tokens). This covers most professional use cases, including large code files, lengthy reports, and long conversation histories.


Powered by Mem0

Use a smaller model.
Get better results.

Mem0 gives your AI long-term memory so you stop re-sending context on every call. That means you can use a smaller, faster, cheaper model — and still get better answers.

Example: a multi-turn chat session

Without Mem0: ~128K tokens sent (full history, repeated info, old context)
With Mem0: ~20K tokens sent (key memories, current turn)

80% less to send — works with any model
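In code, the pattern looks roughly like this. The sketch assumes Mem0's open-source Python SDK (the `mem0` package) and its `Memory.add` / `Memory.search` interface; return shapes vary by SDK version, so treat it as a flow sketch rather than a drop-in snippet:

```python
# Flow sketch: store durable facts, then retrieve only what's relevant,
# instead of resending the full conversation history every turn.
# Assumes the open-source `mem0` SDK; result shapes vary by version.
from mem0 import Memory

memory = Memory()

# Store facts from earlier turns as long-term memories.
memory.add("User prefers concise answers and is building a Django app.",
           user_id="alice")

# On a new turn, pull back only the memories relevant to the question.
hits = memory.search("What framework is the user working with?",
                     user_id="alice")
relevant = "\n".join(h["memory"] for h in hits.get("results", []))

# Prompt = a few key memories + the current turn, not ~128K of history.
prompt = f"Known context:\n{relevant}\n\nUser: How do I add auth?"
```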