LLM Comparison
MiniMax M2.5 vs Devstral 2 2512
Side-by-side specs, pricing & capabilities · Updated April 2026
| | MiniMax M2.5 | Devstral 2 2512 |
|---|---|---|
| Organization | MiniMax | Mistral AI |
| Family | MiniMax | Devstral |
| Status | Current | Current |
| Release Date | Feb 2026 | Dec 2025 |
| Context Window | 197K tokens | 262K tokens |
| Input Price | $0.12/M tokens | $0.40/M tokens |
| Output Price | $0.99/M tokens | $2.00/M tokens |
| Pricing Notes | Cache read: $0.059/M tokens | Cache read: $0.040/M tokens |
| Capabilities | text, code | text, code, tool-use |
| Max Output | 66K tokens | — |
| API Identifier | minimax/minimax-m2.5 | mistralai/devstral-2512 |
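Cache-read pricing matters when requests reuse the same prompt prefix: cached input tokens are billed at the lower cache-read rate instead of the full input rate. A minimal sketch of the blended per-million input price, using the prices from the table above and an illustrative (not vendor-stated) 60% cache hit rate:

```python
def effective_input_price(base: float, cache_read: float, hit_rate: float) -> float:
    """Blended $/M input price when `hit_rate` of input tokens are cache reads."""
    return hit_rate * cache_read + (1 - hit_rate) * base

# Prices from the comparison table; 60% hit rate is an assumed example.
m2_5 = effective_input_price(0.12, 0.059, 0.60)      # MiniMax M2.5
devstral = effective_input_price(0.40, 0.040, 0.60)  # Devstral 2 2512
print(f"M2.5: ${m2_5:.4f}/M   Devstral 2: ${devstral:.4f}/M")
```

Note that Devstral 2's cheaper cache-read rate narrows the gap on cached traffic but does not close it, since its uncached input rate is more than 3× higher.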
Cost Calculator
Example monthly costs below assume 1M input tokens and 0.5M output tokens per month; adjust the usage figures for your own workload.
| Model | Input | Output | Total / mo | vs Best |
|---|---|---|---|---|
| MiniMax M2.5 (cheapest) | $0.12 | $0.50 | $0.61 | — |
| Devstral 2 2512 | $0.40 | $1.00 | $1.40 | +128% |
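The totals above follow directly from the per-million-token prices. A short sketch of the arithmetic, with the 1M-input / 0.5M-output monthly usage treated as the assumed example that reproduces the calculator's figures:

```python
def monthly_cost(in_price: float, out_price: float,
                 in_tokens_m: float, out_tokens_m: float) -> float:
    """Monthly bill from $/M-token prices and token counts in millions."""
    return in_price * in_tokens_m + out_price * out_tokens_m

# Prices from the comparison table; usage (1M in, 0.5M out) is assumed.
m2_5 = monthly_cost(0.12, 0.99, 1.0, 0.5)      # $0.615
devstral = monthly_cost(0.40, 2.00, 1.0, 0.5)  # $1.40
premium = (devstral / m2_5 - 1) * 100          # Devstral premium vs. M2.5
```

At this usage mix, Devstral 2 2512 costs about 128% more per month than MiniMax M2.5; the ratio shifts with the input/output split, since the output-price gap (roughly 2×) is smaller than the input-price gap (roughly 3.3×).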
MiniMax M2.5
MiniMax M2.5 is a large language model from MiniMax. It supports a context window of up to 196,608 tokens and is priced from $0.12/M input tokens.
Devstral 2 2512
Devstral 2 2512 is a large language model from Mistral AI. It supports a context window of up to 262,144 tokens and is priced from $0.40/M input tokens.