MiniMax: MiniMax M2
Model Type
Open-Weight Model
API access and downloadable weights
Recommended Use Cases
Text Generation
MiniMax M2 is an open-weight model optimized for agents and code, released in October 2025, offering top-tier coding and agentic capabilities at a fraction of the cost of comparable models.
> "A model born for Agents and code. At only 8% of the price of Claude Sonnet and twice the speed."
>
> — MiniMax
Overview
MiniMax M2 is designed to break the "impossible triangle" of performance, price, and inference speed for AI agents. It delivers strong capabilities in programming, tool use, and logical reasoning while maintaining fast inference and low deployment costs. The model is available both via API and as open weights on Hugging Face.
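For API access, MiniMax exposes an OpenAI-compatible chat-completions interface. The sketch below builds a request payload in that format without sending it; the endpoint URL and model identifier are assumptions, so check MiniMax's API documentation for the exact values.

```python
import json

BASE_URL = "https://api.minimax.io/v1/chat/completions"  # assumed endpoint
MODEL_ID = "MiniMax-M2"                                   # assumed model name

def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Construct an OpenAI-style chat-completion payload (not sent here)."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
```

In practice you would POST this payload to `BASE_URL` with your API key in the `Authorization` header, or point an OpenAI SDK client at the compatible base URL.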
Key Features
- Agent-first design: Built for end-to-end agentic workflows with stable long-chain tool calling
- Top-tier coding: Excels in Claude Code, Cursor, Cline, Kilo Code, Droid, and other coding tools
- Tool orchestration: Coordinates Shell, Browser, Python interpreter, and MCP tools
- 200K context window: Handles large codebases and extended conversations
- 128K output tokens: Supports long-form generation including chain-of-thought reasoning
- Open weights: Full model weights available on Hugging Face
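The tool-orchestration features above are typically driven from the client side: the client declares the available tools, the model emits tool calls, and the client executes them and feeds results back. A minimal sketch, using the OpenAI function-calling schema that many agent frameworks pair with open-weight models (MiniMax's exact tool-call format may differ, and the handlers here are stubs):

```python
import json

# Illustrative tool declarations for a Shell tool and a Python interpreter,
# in the OpenAI function-calling schema. Names and descriptions are
# assumptions for the sketch, not MiniMax-defined identifiers.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "run_shell",
            "description": "Execute a shell command and return stdout.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "run_python",
            "description": "Evaluate a Python snippet in a sandbox.",
            "parameters": {
                "type": "object",
                "properties": {"code": {"type": "string"}},
                "required": ["code"],
            },
        },
    },
]

def dispatch_tool_call(name: str, arguments: str) -> str:
    """Route a model-emitted tool call (name + JSON arguments) to a handler."""
    args = json.loads(arguments)
    handlers = {
        "run_shell": lambda a: f"(would run shell: {a['command']})",
        "run_python": lambda a: f"(would eval: {a['code']})",
    }
    return handlers[name](args)
```

In a real agent loop, `TOOLS` is sent with each request, and each tool result is appended to the conversation as a tool message before the next model call.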
Technical Specifications
| Specification | Value |
|---|---|
| Total Parameters | 230B |
| Active Parameters | 10B per token |
| Context Length | 200K tokens |
| Max Output | 128K tokens |
| Architecture | Mixture-of-Experts (MoE) |
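The 200K context window and 128K output cap imply a simple budgeting rule for agent loops: generation is limited both by the output cap and by whatever room the prompt leaves in the window. A minimal sketch, assuming generated tokens share the context window with the prompt (as is typical for decoder models):

```python
CONTEXT_WINDOW = 200_000  # total tokens the model can attend to
MAX_OUTPUT = 128_000      # upper bound on generated tokens per request

def max_generation_budget(prompt_tokens: int) -> int:
    """Tokens available for generation given a prompt of this size.

    Output is capped both by the model's output limit and by the
    headroom remaining in the context window.
    """
    remaining = CONTEXT_WINDOW - prompt_tokens
    return max(0, min(MAX_OUTPUT, remaining))

print(max_generation_budget(50_000))   # → 128000 (full output cap available)
print(max_generation_budget(180_000))  # → 20000 (window headroom is the limit)
```

This kind of check is useful before long agent turns, where accumulated tool results can silently eat the output budget.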
Role in Series
MiniMax text models serve different use cases:
- MiniMax M2: Agentic workhorse, open weights, best cost-efficiency (this model)
- MiniMax M2.1: Enhanced multi-language coding, higher capability
- MiniMax M2.1-lightning: Same as M2.1 with faster inference
- MiniMax M2-her: Role-play and conversation specialist
- MiniMax M1: Reasoning model with extended thinking