Meituan Releases High-Efficiency Reasoning Model with Performance Approaching GPT-5 in Some Tasks

Deep News
Sep 22

MEITUAN-W officially released LongCat-Flash-Thinking, its high-efficiency reasoning model, on the afternoon of September 22. The new model retains the characteristic speed of the LongCat series while achieving state-of-the-art (SOTA) performance among global open-source models on reasoning tasks across multiple domains, including logic, mathematics, coding, and intelligent agents. On some tasks its performance approaches that of the closed-source GPT-5-Thinking model.

Additionally, LongCat-Flash-Thinking strengthens autonomous tool invocation for intelligent agents and extends formal theorem-proving abilities, making it the first large language model in China to combine "deep thinking + tool calling" with "informal + formal" reasoning capabilities. The development team noted that the new model shows particularly strong advantages on high-complexity tasks such as mathematics, coding, and intelligent agent tasks.

Currently, LongCat-Flash-Thinking is fully open-sourced on HuggingFace and GitHub, and is available for testing on the official website.
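
For readers who want to try the open-source checkpoint, below is a minimal loading sketch using the Hugging Face transformers library. The repository id "meituan-longcat/LongCat-Flash-Thinking" is an assumption based on the release announcement, and the prompt is purely illustrative; consult the model card for the officially documented usage.

```python
# Minimal sketch: load the checkpoint with Hugging Face transformers.
# NOTE: the repo id below is an assumption, not confirmed by this article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meituan-longcat/LongCat-Flash-Thinking"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # let transformers pick the checkpoint dtype
    device_map="auto",       # spread weights across available devices
    trust_remote_code=True,  # custom architectures often ship their own code
)

prompt = "Prove that the sum of two even integers is even."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Print only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```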

