Meituan Unveils LongCat-2.0-Preview for Testing with Trillion-Plus Parameters

Deep News
04/24

Meituan's new-generation foundational large model, LongCat-2.0-Preview, has been opened for testing. The model's total parameter scale has surpassed one trillion, placing it among the world's top-tier large models. According to the official website, during the testing period, LongCat-2.0-Preview will provide users with a daily quota of 10 million free tokens.

Reportedly, both the total and activated parameter counts of Meituan's LongCat-2.0-Preview are largely in line with those of the new-generation DeepSeek V4 model released the same day. A model's total parameter count sets the upper bound of its knowledge capacity as well as its storage cost. Both models support a 1-million-token context window, allowing them to process input equivalent to millions of characters in a single inference pass, on par with the processing scale of the newly released GPT-5.5. The new LongCat model has also been deeply optimized for Agent application scenarios, adapting well to production workloads such as code generation, complex task planning, and enterprise automation.
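To put the storage point in concrete terms, here is a rough sketch of the raw weight footprint of a trillion-parameter model at common numeric precisions. The ~1 trillion parameter count comes from the article; the precision choices are illustrative assumptions, not disclosed details of LongCat-2.0-Preview:

```python
# Back-of-envelope storage cost for a trillion-parameter model.
# The 1e12 parameter figure is from the article; the precisions below
# are illustrative assumptions, not the model's actual format.

TOTAL_PARAMS = 1.0e12  # ~1 trillion total parameters

BYTES_PER_PARAM = {
    "fp32": 4,  # full precision
    "bf16": 2,  # common training/serving precision
    "fp8": 1,   # aggressive quantized serving
}

def storage_tb(params: float, precision: str) -> float:
    """Raw weight storage in terabytes (1 TB = 1e12 bytes)."""
    return params * BYTES_PER_PARAM[precision] / 1e12

for p in ("fp32", "bf16", "fp8"):
    print(f"{p}: {storage_tb(TOTAL_PARAMS, p):.1f} TB")
```

Even in bf16, the weights of a 1-trillion-parameter model alone occupy about 2 TB, which is why total parameter count directly drives storage cost, independent of how few parameters an MoE model activates per token.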

Beyond parameter scale, the more significant breakthrough of Meituan's new-generation foundation model is that its entire training and inference pipeline runs on domestic computing clusters. The training phase reportedly used between 50,000 and 60,000 accelerator cards, making this the largest large-model training run completed on domestic computing resources to date.
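As a rough illustration of what a cluster of that size implies, the sketch below applies the widely used C ≈ 6·N·D training-compute heuristic (C: training FLOPs, N: activated parameters per token, D: training tokens). Apart from the 50,000-60,000 card range reported above, every figure here (activated parameter count, token count, sustained per-card throughput) is an assumed placeholder, not a disclosed number:

```python
# Ballpark wall-clock estimate for a large MoE training run using the
# common C ~= 6*N*D heuristic. Only the card count is grounded in the
# article (50k-60k range); all other numbers are illustrative assumptions.

def training_days(active_params: float, tokens: float,
                  cards: int, flops_per_card: float) -> float:
    """Days of wall-clock training, assuming sustained per-card throughput."""
    total_flops = 6 * active_params * tokens       # C ~= 6*N*D
    cluster_flops_per_s = cards * flops_per_card   # aggregate throughput
    return total_flops / cluster_flops_per_s / 86_400

days = training_days(
    active_params=30e9,    # assumed MoE activated params per token
    tokens=15e12,          # assumed training-corpus size in tokens
    cards=55_000,          # midpoint of the reported 50k-60k card range
    flops_per_card=60e12,  # assumed sustained FLOP/s per domestic card
)
print(f"~{days:.0f} days of training")
```

Under these assumptions the run takes on the order of ten days; the real figures depend heavily on the actual token budget and the sustained efficiency of the domestic accelerators, neither of which has been disclosed.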

Disclaimer: Investing carries risk, and this article is not investment advice. The above content should not be regarded as an offer, recommendation, or solicitation to buy or sell any financial product, nor should any related discussion, comments, or posts by the author or other users be treated as such. This article is for general reference only and does not take into account your personal investment objectives, financial situation, or needs. TTM assumes no responsibility or guarantee for the accuracy or completeness of the information; investors should conduct their own research and seek professional advice before investing.
