On October 27, Chinese AI unicorn MiniMax unveiled and open-sourced its next-generation large language model, MiniMax-M2. The model ranked in the global top five and first among open-source models on the widely cited Artificial Analysis (AA) benchmark, placing it alongside models from Silicon Valley giants such as OpenAI, Anthropic, and Google.
Notably, M2 offers a striking cost advantage. Its API is priced at $0.3 (¥2.1) per million input tokens and $1.2 (¥8.4) per million output tokens, while delivering inference at roughly 100 TPS (tokens per second), a figure that is still improving rapidly. That pricing is just 8% of Claude Sonnet 4.5's, with inference speeds nearly twice as fast, making large-scale deployments far more economical.
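At the quoted rates, per-workload spend is simple arithmetic. The sketch below uses the article's per-million-token prices; the workload size is an assumed, illustrative figure, not from the source:

```python
# Rates quoted in the article for MiniMax-M2; workload numbers below are
# hypothetical, chosen only to illustrate the arithmetic.
INPUT_RATE = 0.3   # USD per 1M input tokens
OUTPUT_RATE = 1.2  # USD per 1M output tokens

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one workload at the quoted per-token rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE \
         + (output_tokens / 1_000_000) * OUTPUT_RATE

# Example: an agent pipeline consuming 50M input and 10M output tokens per day.
daily = api_cost(50_000_000, 10_000_000)
print(f"${daily:.2f} per day")  # 50*0.3 + 10*1.2 = $27.00
```

At 8% of Claude Sonnet 4.5's price, the same illustrative workload would cost roughly $337 per day on the pricier model, which is the gap the article highlights for large-scale deployments.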
As global AI competition intensifies and commercialization becomes the key battleground, MiniMax-M2 breaks through a critical barrier: the high cost of AI compute. The breakthrough signals Chinese AI firms' aggressive push into the global market with a "high-intelligence, low-cost" strategy.
Post-launch, M2 drew widespread praise from international AI developers. The benchmarking platform LMArena promptly recommended testing M2 on social media, and a Reddit tech influencer noted, "It scored 58.3% in benchmarks, a solid performance." CoreView CTO Ivan Fioravanti remarked, "MiniMax-M2 performs exceptionally well! Even surpassing Claude 4.1 Opus in real-world use." Independent developers also tested the API extensively, sharing practical case studies in tech communities.