MEITUAN-W officially released its efficient reasoning model LongCat-Flash-Thinking on the afternoon of September 22nd. The new model maintains the characteristic speed of the LongCat series while achieving state-of-the-art (SOTA) performance among global open-source models on reasoning tasks across multiple domains, including logic, mathematics, coding, and intelligent agents. On some tasks its performance approaches that of the closed-source GPT-5-Thinking model.
Additionally, LongCat-Flash-Thinking strengthens autonomous tool invocation for intelligent agents and expands formal theorem-proving abilities, making it the first large language model in China to combine both "deep thinking + tool calling" and "informal + formal" reasoning capabilities. The development team noted that the new model shows particularly significant advantages on high-complexity work such as mathematics, coding, and agent tasks.
Currently, LongCat-Flash-Thinking has been fully open-sourced on Hugging Face and GitHub, and is available for testing on the official website.
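For readers who want to try the open-source release, the sketch below shows one common way to load a Hugging Face checkpoint with the `transformers` library. The repo id, generation settings, and hardware assumptions are illustrative only, not the project's official quick start; consult the model's Hugging Face and GitHub pages for the exact instructions, as a model of this scale will generally require multi-GPU or hosted serving.

```python
# Minimal sketch of loading the open-source weights from Hugging Face.
# The repo id "meituan-longcat/LongCat-Flash-Thinking" is an assumption;
# check the official Hugging Face / GitHub pages for the published name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meituan-longcat/LongCat-Flash-Thinking"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # take the precision stored in the checkpoint
    device_map="auto",       # shard the model across available GPUs
    trust_remote_code=True,  # custom architectures often require this
)

# Build a chat prompt and let the model generate its reasoning and answer.
messages = [
    {"role": "user", "content": "Prove that the sum of two even integers is even."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```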