Meituan Releases High-Efficiency Reasoning Model with Performance Approaching GPT-5 in Some Tasks

Deep News
09/22

MEITUAN-W officially released its high-efficiency reasoning model LongCat-Flash-Thinking on the afternoon of September 22nd. The new model retains the characteristic speed of the LongCat series while achieving state-of-the-art (SOTA) performance among global open-source models on reasoning tasks across multiple domains, including logic, mathematics, coding, and intelligent agents. On some tasks its performance approaches that of the closed-source GPT-5-Thinking model.

Additionally, LongCat-Flash-Thinking strengthens autonomous tool invocation for intelligent agents and expands formal theorem-proving abilities, making it the first large language model in China to combine both "deep thinking + tool calling" and "informal + formal" reasoning capabilities. The development team noted that the new model shows particular advantages on high-complexity tasks such as mathematics, coding, and agent tasks.

Currently, LongCat-Flash-Thinking has been fully open-sourced on Hugging Face and GitHub, and is available for testing on the official website.
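
For readers who want to try the open-sourced weights, the following is a minimal sketch of how one might load them with the Hugging Face transformers library. The repository ID "meituan-longcat/LongCat-Flash-Thinking", the use of trust_remote_code, and the generation settings are assumptions for illustration and are not confirmed by the article; the model is large, so multi-GPU sharding or a hosted endpoint would likely be needed in practice.

```python
# Hypothetical sketch: loading LongCat-Flash-Thinking from Hugging Face.
# The repo ID and settings below are assumptions, not taken from the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meituan-longcat/LongCat-Flash-Thinking"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # use the dtype stored in the checkpoint
    device_map="auto",       # shard across available GPUs; the model is very large
    trust_remote_code=True,  # custom modeling code may be required
)

# Chat-style prompt; the chat template is whatever ships with the tokenizer.
messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```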

