LLM Comparison
MiniMax M2.5 vs LFM2.5-1.2B-Thinking
Side-by-side specs, pricing & capabilities · Updated April 2026
| | MiniMax M2.5 | LFM2.5-1.2B-Thinking |
|---|---|---|
| Organization | MiniMax | Liquid AI |
| Family | MiniMax | LFM2.5-1.2B-Thinking |
| Status | Current | Current |
| Release Date | Feb 2026 | Jan 2026 |
| Context Window | 197K tokens | 33K tokens |
| Input Price | — | — |
| Output Price | — | — |
| Pricing Notes | Free tier available on OpenRouter with rate limits | Free tier available on OpenRouter with rate limits |
| Capabilities | text, code | text, code, extended thinking |
| Max Output | 8K tokens | — |
| API Identifier | `minimax/minimax-m2.5:free` | `liquid/lfm-2.5-1.2b-thinking:free` |
MiniMax M2.5
MiniMax M2.5 is a large language model from MiniMax. It supports a context window of up to 196,608 tokens and is available free, with rate limits, on OpenRouter.
LFM2.5-1.2B-Thinking
LFM2.5-1.2B-Thinking is a large language model from Liquid AI. It supports a context window of up to 32,768 tokens and is available free, with rate limits, on OpenRouter.
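Both models are addressed through the API identifiers listed above. As a minimal sketch, the snippet below builds a request payload for OpenRouter's OpenAI-compatible chat-completions endpoint; the `build_request` helper and its defaults are our own illustration, not part of either model's SDK, and you would still need to attach your own `Authorization: Bearer <key>` header before sending.

```python
import json

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(model_id: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build a chat-completions payload for a free-tier OpenRouter model.

    model_id is the API identifier from the table above, e.g.
    "minimax/minimax-m2.5:free" or "liquid/lfm-2.5-1.2b-thinking:free".
    """
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
        # Keep max_tokens within the model's output limit
        # (MiniMax M2.5's listed max output is 8K tokens).
        "max_tokens": max_tokens,
    }


payload = build_request("minimax/minimax-m2.5:free", "Summarize this changelog.")
print(json.dumps(payload, indent=2))
```

Swapping in `liquid/lfm-2.5-1.2b-thinking:free` targets the Liquid AI model instead; both free-tier identifiers are rate limited, so production use should handle HTTP 429 responses.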