GLM 4.7 Flash
GLM-4.7-Flash is a 30B-class model that offers a new balance of performance and efficiency. It is further optimized for agentic coding use cases, strengthening coding capability, long-horizon task planning, and tool collaboration, and it achieves leading performance among open-source models of its size on several public benchmark leaderboards.
Context window
This model accepts 200K tokens in one request (~150K words of text).
What fits in one request
- Fits: Short document (about 1,500 words of text)
- Fits: Long document (about 37K words of text)
- Fits: Small codebase (about 150K words of text)
- Won't fit: Full novel (about 375K words of text)
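For planning purposes, you can estimate whether a document fits by converting its word count into an approximate token count. The sketch below uses the rough 0.75-words-per-token ratio implied above (200K tokens ≈ 150K words); actual counts depend on the tokenizer, so treat this as an estimate only.

```python
# Rough feasibility check: will a document fit in GLM 4.7 Flash's 200K-token window?
# Assumes ~0.75 words per token (200K tokens ~= 150K words), as stated above.
# Real token counts depend on the tokenizer and content; treat this as an estimate.

CONTEXT_WINDOW_TOKENS = 200_000
WORDS_PER_TOKEN = 0.75  # approximate ratio implied by "200K tokens (~150K words)"

def estimated_tokens(word_count: int) -> int:
    """Convert a word count into an approximate token count."""
    return round(word_count / WORDS_PER_TOKEN)

def fits_in_context(word_count: int) -> bool:
    """True if the estimated token count fits in the 200K-token window.
    In practice you may also want to reserve room for the model's output."""
    return estimated_tokens(word_count) <= CONTEXT_WINDOW_TOKENS

examples = [("Short document", 1_500), ("Long document", 37_000),
            ("Small codebase", 150_000), ("Full novel", 375_000)]
for label, words in examples:
    print(f"{label}: ~{estimated_tokens(words):,} tokens, fits={fits_in_context(words)}")
```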
Specifications
Context size, pricing, and release info in one place.
- Context window: 200,000 tokens (200K)
- Max output tokens: 32,000 tokens (32K)
- Speed tier: fast
- Provider: Z Ai
- Release date: Jan 2026
- Input cost: $0.070 / 1M tokens
- Output cost: $0.400 / 1M tokens
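To see what those rates mean for a single request, the sketch below estimates cost from input and output token counts. The per-million-token prices come from the specifications above; the example request size is illustrative.

```python
# Estimate the cost of one GLM 4.7 Flash request from the listed per-million-token rates.
# Prices come from the specifications above; the example request size is made up.

INPUT_COST_PER_M = 0.07    # USD per 1M input tokens
OUTPUT_COST_PER_M = 0.40   # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_COST_PER_M + output_tokens * OUTPUT_COST_PER_M) / 1_000_000

# Example: a full 200K-token prompt with a 32K-token response (the model's maximums)
# costs roughly $0.014 + $0.0128 = ~$0.027.
print(f"${request_cost(200_000, 32_000):.4f}")
```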
Capabilities
See which features this model supports, such as vision, tools, and streaming.
- Vision (accepts image inputs alongside text): Supported
- Tool use (can call external tools and APIs): Supported
- Function calling (structured function call interface): Supported
- Extended thinking (shows its chain-of-thought reasoning): Supported
- Streaming (returns tokens as they are generated): Supported
- Prompt caching (reuse repeated prompt prefixes cheaply): Supported
- Web search (can browse the web during a request): Not supported
- Batch API (process many requests asynchronously): Not supported
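As a hedged illustration of how tool use and streaming might be exercised together, the sketch below uses the OpenAI-compatible chat-completions pattern that many providers expose. The base URL, the model identifier "glm-4.7-flash", and the get_weather tool are placeholders for illustration, not documented values; check Z Ai's own API reference for the real endpoint and model name.

```python
# Hedged sketch: tool use + streaming through an OpenAI-compatible client.
# ASSUMPTIONS (not from the page above): the base_url, the model id
# "glm-4.7-flash", and the get_weather tool are placeholders for illustration.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",                 # placeholder
    base_url="https://example.invalid/v1",  # placeholder; use the provider's real endpoint
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, defined only for this example
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

stream = client.chat.completions.create(
    model="glm-4.7-flash",  # placeholder id; confirm the real name with the provider
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
    stream=True,  # tokens arrive as they are generated
)

for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.content:                 # ordinary streamed text
        print(delta.content, end="", flush=True)
    if delta.tool_calls:              # the model decided to call a tool instead
        print(delta.tool_calls)
```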
Compare GLM 4.7 Flash
Context window comparisons with other models in the catalog.
- GLM 4.7 Flash vs Amazon Titan Text Express
GLM 4.7 Flash has 376% larger context window
- GLM 4.7 Flash vs Amazon Titan Text Lite
GLM 4.7 Flash has 376% larger context window
- GLM 4.7 Flash vs Amazon Titan Text Premier
GLM 4.7 Flash has 376% larger context window
- GLM 4.7 Flash vs Claude Instant
GLM 4.7 Flash has 100% larger context window
- GLM 4.7 Flash vs Anthropic Claude
GLM 4.7 Flash has 100% larger context window
- GLM 4.7 Flash vs Codellama 34b Instruct
GLM 4.7 Flash has 4782% larger context window
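The "% larger" figures above are straight ratios of the two context windows. A quick sketch of the arithmetic, using the Claude Instant row as the worked example (its 100% figure implies a 100K-token window for that model):

```python
# How the "% larger" figures are computed: (bigger - smaller) / smaller * 100.
# Worked example using the Claude Instant row; its 100% figure implies a
# 100K-token window for that model.

def percent_larger(this_model: int, other_model: int) -> float:
    """Percentage by which this_model's context window exceeds other_model's."""
    return (this_model - other_model) / other_model * 100

GLM_47_FLASH = 200_000
CLAUDE_INSTANT = 100_000  # implied by the 100% figure above

print(f"{percent_larger(GLM_47_FLASH, CLAUDE_INSTANT):.0f}% larger")  # -> 100% larger
```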