LFM2.5-1.2B-Thinking (free)
LFM2.5-1.2B-Thinking is a lightweight reasoning-focused model optimized for agentic tasks, data extraction, and RAG, while still running comfortably on edge devices. It supports long context (up to 32K tokens) and is designed to produce higher-quality "thinking" responses from a small 1.2B-parameter model.
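For a concrete starting point, here is a minimal sketch of calling the model through an OpenAI-compatible chat-completions endpoint. The base URL, API key, and model ID below are placeholders, not confirmed values; substitute whatever your provider documents.

```python
# Minimal sketch: one chat-completions call to the model.
# The endpoint, key, and model ID are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                      # placeholder key
)

response = client.chat.completions.create(
    model="lfm2.5-1.2b-thinking",  # placeholder model ID
    messages=[
        {"role": "user", "content": "List three fields to extract from an invoice."},
    ],
)
print(response.choices[0].message.content)
```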
Context window
This model accepts up to 32,768 tokens (33K) in one request, roughly 25K words of text; the sketch after the list below shows the arithmetic.
What fits in one request
- Fits: Short document (about 1,500 words of text)
- Won't fit: Long document (about 37K words of text)
- Won't fit: Small codebase (about 150K words of text)
- Won't fit: Full novel (about 375K words of text)
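These cutoffs follow from the common heuristic that one token covers about 0.75 English words (32,768 × 0.75 ≈ 25K words). The heuristic, not the model's actual tokenizer, is the only assumption in this sketch:

```python
# Rough fit check using the common ~0.75 words-per-token heuristic.
# This approximates the tokenizer; real token counts vary by text.
CONTEXT_TOKENS = 32_768
WORDS_PER_TOKEN = 0.75  # rough average for English prose

def fits(word_count: int) -> bool:
    """True if a document of word_count words should fit in one request."""
    return word_count / WORDS_PER_TOKEN <= CONTEXT_TOKENS

for label, words in [
    ("Short document", 1_500),
    ("Long document", 37_000),
    ("Small codebase", 150_000),
    ("Full novel", 375_000),
]:
    print(f"{label}: {'fits' if fits(words) else 'will not fit'}")
```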
Specifications
Context size, pricing, and release info in one place.
- Context window: 32,768 tokens (33K)
- Speed tier: deep
- Provider: Liquid
- Release date: Jan 2026
Capabilities
See which features this model supports, such as vision, tools, and streaming.
- Extended thinking (shows its chain-of-thought reasoning): Supported
- Streaming (returns tokens as they are generated): Supported; sketched after this list
- Vision (accepts image inputs alongside text): Not supported
- Tool use (can call external tools and APIs): Not supported
- Function calling (structured function call interface): Not supported
- Web search (can browse the web during a request): Not supported
- Batch API (process many requests asynchronously): Not supported
- Prompt caching (reuse repeated prompt prefixes cheaply): Not supported
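Because streaming is supported, tokens can be printed as they arrive rather than after the full completion. A minimal sketch, reusing the placeholder OpenAI-compatible client and model ID from the first example (still assumptions, not confirmed values):

```python
# Streaming sketch: print each token delta as it arrives.
# Reuses the placeholder `client` from the first example.
stream = client.chat.completions.create(
    model="lfm2.5-1.2b-thinking",  # placeholder model ID
    messages=[{"role": "user", "content": "Summarize RAG in two sentences."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

For a reasoning model on the "deep" speed tier, streaming the visible tokens helps keep perceived latency down while the model works.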
Compare LFM2.5-1.2B-Thinking (free)
Open a side-by-side comparison with one click.
- LFM2.5-1.2B-Thinking (free) vs Amazon Titan Text Express: Amazon Titan Text Express has a 28% larger context window
- LFM2.5-1.2B-Thinking (free) vs Amazon Titan Text Lite: Amazon Titan Text Lite has a 28% larger context window
- LFM2.5-1.2B-Thinking (free) vs Amazon Titan Text Premier: Amazon Titan Text Premier has a 28% larger context window
- LFM2.5-1.2B-Thinking (free) vs Claude Instant: Claude Instant has a 205% larger context window
- LFM2.5-1.2B-Thinking (free) vs Anthropic Claude: Anthropic Claude has a 205% larger context window
- LFM2.5-1.2B-Thinking (free) vs Codellama 34b Instruct: LFM2.5-1.2B-Thinking (free) has a 700% larger context window