ModelHub XC Surpasses 1,000 Model Adaptations in Two Months, Strengthening Domestic AI Ecosystem

Stock News
Nov 27, 2025

Today (November 27), Fanxi Intelligence announced that its "ModelHub XC" platform has reached a significant milestone: more than 1,000 models successfully adapted and certified within just two months of launch, four months ahead of the original schedule. This marks a major leap in the breadth of China's domestic AI ecosystem and provides a robust foundation for industrial intelligent transformation.

ModelHub XC now supports a diverse range of models, from general-purpose large language models (e.g., DeepSeek V3.1) and vertical-domain specialized models (e.g., wind tunnel computational models) to cutting-edge innovations (e.g., GPT-OSS-20B, MiniMax-M2). Its expanding functionality and ecosystem are driving synergy between domestic AI software and hardware.

Key milestones highlight the platform’s rapid progress:

**September 22, 2025: Launch & Tackling an Industry Challenge** As AI adoption grew, a critical bottleneck emerged: hardware-software incompatibility and a fragmented model ecosystem. Fanxi Intelligence launched ModelHub XC, together with a dedicated community and adaptation services, to bridge the gaps between users, developers, and computing resources. Founder Dai Wenyuan and Chief Scientist Chen Yuqiang collaborated with leading domestic chipmakers, including Huawei Ascend, Biren Technology, and Moore Threads, to kickstart the initiative.

**October 17, 2025: Breakthrough in Vertical Models** The platform achieved full adaptation and optimization of a complex wind tunnel computational model on the domestic XiWang S2 chip, matching international GPU performance (1.5 seconds per image) and setting a benchmark for commercial readiness.

**November 1, 2025: Cutting-Edge OCR Model Adaptation** DeepSeek-OCR, an innovative vision-text model, was successfully adapted for Ascend and MetaX chips. It demonstrated output quality equivalent to NVIDIA platforms with a performance variance of no more than 30%, leveraging EngineX's Transformer architecture support for efficient inference.

**November 17, 2025: High-Efficiency Agent Model Deployment** MiniMax-M2, a top-tier open-source Agent model (230B MoE architecture), achieved plug-and-play deployment on Ascend 910B4 via EngineX optimizations, enabling robust enterprise AI applications.

**November 25, 2025: Batch Adaptation Milestone** A single-day adaptation of 108 models on Moore Threads GPUs showcased the platform's scalability, covering text generation, visual understanding, and multimodal tasks while optimizing memory use and speed through hardware quantization.
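Quantization in general trades numeric precision for memory and speed by storing weights as low-bit integers. The article does not describe ModelHub XC's actual scheme, so the following is only a generic sketch of symmetric int8 weight quantization; all names are illustrative:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    scale = float(np.max(np.abs(weights))) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor: any scale reproduces it exactly
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

# int8 storage needs 1 byte per weight instead of 4 for float32
w = np.array([0.5, -1.2, 0.03, 2.4], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
```

Rounding error per weight is bounded by half the scale step, which is why 8-bit storage typically preserves model quality while cutting memory use to a quarter.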

**Core Capabilities**

- **EngineX-Driven Efficiency**: Enables "plug-and-play" model deployment across domestic chips, slashing compatibility barriers.
- **Diverse Ecosystem**: Supports 1,000+ models across major domestic chips (e.g., Huawei, Cambricon, Moore Threads).
- **Transparent Labeling**: Clear hardware compatibility tags streamline model-chip matching.
- **End-to-End Support**: A team of 100+ engineers ensures seamless adaptation via value-added services.
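Compatibility labeling of this kind amounts to tagging each model with the chips it has been verified on and filtering by tag. A minimal hypothetical sketch (the field names are invented for illustration, not the platform's real schema; the GPT-OSS-20B chip tags below are assumed, while the DeepSeek-OCR and MiniMax-M2 pairings follow the milestones above):

```python
# Hypothetical registry entries with hardware-compatibility tags.
models = [
    {"name": "DeepSeek-OCR", "chips": {"Ascend", "MetaX"}},
    {"name": "MiniMax-M2", "chips": {"Ascend 910B4"}},
    {"name": "GPT-OSS-20B", "chips": {"Ascend", "Moore Threads"}},  # assumed tags
]

def models_for_chip(chip: str) -> list[str]:
    """Return the names of models tagged as compatible with a given chip."""
    return [m["name"] for m in models if chip in m["chips"]]

print(models_for_chip("Ascend"))  # → ['DeepSeek-OCR', 'GPT-OSS-20B']
```

Exposing tags this way lets users match a model to their installed hardware before committing to an adaptation effort.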

**Future Roadmap** With the "1,000-model" target achieved ahead of schedule, ModelHub XC aims to scale to 10,000 models within a year, accelerating China’s self-sufficient AI infrastructure through continuous iteration and broader hardware support.

