Shanghai-based AI firm MiniMax has released its latest model, MiniMax-M1, which it describes as the world's first open-source, large-scale hybrid-architecture reasoning model.
The company claims M1 outperforms domestic closed-source models and approaches the capabilities of leading global systems in complex productivity tasks, while offering what it calls the best value in the industry.
M1 supports a context window of up to 1 million input tokens, matching Google's Gemini 2.5 Pro and eight times that of DeepSeek R1, and can generate outputs of up to 80,000 tokens, the longest among current open-source models, according to the company.