China Mobile Research Institute Leads Development of Technical Requirements for Smart Computing Center Computing Power Pooling to Promote Heterogeneous Computing Integration

Deep News
Sep 12

China Mobile Research Institute recently led the compilation of the "National Integrated Computing Network - Smart Computing Center Computing Power Pooling Technical Requirements" in collaboration with 28 organizations. The standard has been officially published by the National Data Standardization Committee, providing strong support for the construction of the national integrated computing network in China.

Data has become a new production factor and the core engine driving high-quality development of the digital economy. The construction of national integrated computing networks, data infrastructure, and trusted data spaces forms the essential foundation supporting China's digital strategy. China Mobile Research Institute has actively responded to national policy by participating deeply in the development and release of multiple key technical standards, comprehensively supporting the construction of an autonomous, controllable, secure, and efficient national data foundation in areas spanning computing security, data infrastructure, and smart computing center computing power pooling.

The national integrated computing network serves as core infrastructure for the digital economy, aiming to achieve efficient resource scheduling, green and low-carbon facilities, and flexible computing supply through integrating diverse heterogeneous computing resources. Currently, it faces two major challenges:

First, computing resources are severely fragmented and overall utilization is low. Differences in task types, scales, and priorities make it difficult to match task requirements precisely with hardware configurations, so large amounts of computing power sit idle because hardware is either over-provisioned for small tasks or insufficient for large ones, and overall utilization urgently needs to improve.

Second, cross-architecture application deployment is difficult, which limits the use of diverse computing resources. Hardware vendors build software stacks tailored to their own intelligent computing hardware, and applications developed for this heterogeneous hardware rely on non-unified interfaces, restricting their flexible deployment across diverse intelligent computing resources.

These challenges severely constrain both the large-scale adoption of domestic smart computing hardware and the innovative development of the industrial ecosystem. There is therefore an urgent need to aggregate the computing resources of different hardware into unified pools, resolving current pain points in smart computing center development and unleashing the collaborative value of computing power.

The smart computing center computing power pooling technical requirements focus on the challenge of heterogeneous computing integration. By abstracting, virtualizing, and scheduling cross-architecture computing resources on a task basis within smart computing centers, they form unified, transparent "intelligent computing pools" that make intelligent computing power flexible and efficient to use.

The standard establishes unified abstraction models for smart computing center hardware resources, covering devices, compute, and memory. It maps different chips onto abstract "XPUs" with standard structures and capabilities, shielding the differences between heterogeneous hardware and providing a standardized view of intelligent computing resources for upper-layer scheduling.
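
The standard's actual abstraction model is not reproduced in this article; the following is a minimal sketch, assuming a hypothetical XPUDevice description, of how vendor-specific accelerators could be normalized into a common view of compute and memory capacity of the kind an upper-layer scheduler would consume.

```python
from dataclasses import dataclass
from enum import Enum


class XPUClass(Enum):
    """Hypothetical capability classes for abstracted accelerators ("XPUs")."""
    TRAINING = "training"
    INFERENCE = "inference"


@dataclass
class XPUDevice:
    """Vendor-neutral view of one accelerator: only standardized attributes
    (compute capacity, memory, capability class) are exposed; vendor and
    driver details stay below the abstraction layer."""
    device_id: str
    vendor: str                 # kept for operations, hidden from schedulers
    xpu_class: XPUClass
    compute_tflops: float       # normalized "standard computing power"
    memory_gb: float

    def standard_view(self) -> dict:
        """Return the standardized resource view for upper-layer scheduling,
        with vendor-specific fields stripped out."""
        return {
            "device_id": self.device_id,
            "class": self.xpu_class.value,
            "tflops": self.compute_tflops,
            "memory_gb": self.memory_gb,
        }


# Example: two different vendors' chips become interchangeable "XPUs".
pool = [
    XPUDevice("node1-acc0", "vendor-a", XPUClass.TRAINING, 312.0, 64.0),
    XPUDevice("node2-acc0", "vendor-b", XPUClass.TRAINING, 280.0, 64.0),
]
views = [d.standard_view() for d in pool]
```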

To address severe resource fragmentation, the technical document specifies requirements for task-based scheduling and business orchestration. Through fine-grained resource partitioning, dynamic elastic scaling, and intelligent matching of tasks to resources, it achieves precise adaptation between task specifications and hardware configurations, reducing over- and under-provisioning, improving computing utilization, and releasing the value of idle computing power.
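
As an illustration only (the standard's actual scheduling requirements are not quoted here), a simple best-fit matcher conveys the idea of matching task specifications to hardware: each task states how much standard computing power and memory it needs, and the scheduler picks the smallest free XPU that satisfies it, so small tasks do not occupy oversized hardware.

```python
from typing import Optional


def best_fit_schedule(task: dict, free_devices: list[dict]) -> Optional[dict]:
    """Pick the smallest free XPU view that still satisfies the task's demand
    for standard computing power and memory (best fit), so small tasks do not
    occupy oversized hardware and large tasks are not placed on hardware that
    cannot hold them."""
    candidates = [
        d for d in free_devices
        if d["tflops"] >= task["tflops"] and d["memory_gb"] >= task["memory_gb"]
    ]
    if not candidates:
        return None  # no single device fits; a real pool might partition or queue
    return min(candidates, key=lambda d: (d["tflops"], d["memory_gb"]))


# Example: a small inference task lands on the smaller of two free devices.
free = [
    {"device_id": "node1-acc0", "tflops": 312.0, "memory_gb": 64.0},
    {"device_id": "node3-acc1", "tflops": 120.0, "memory_gb": 32.0},
]
task = {"name": "small-inference", "tflops": 80.0, "memory_gb": 16.0}
print(best_fit_schedule(task, free))  # -> the 120 TFLOPS device
```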

For the pain point of difficult cross-architecture deployment, unified interface specifications and resource management mechanisms decouple applications from the underlying smart computing hardware. Developers no longer need to care which intelligent computing chip sits underneath: they simply request the "XPU standard computing power" they need, and the application can run on any hardware in the pool that provides equivalent computing power, allowing applications to migrate freely.
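
The interface specification itself is not published in this article; the sketch below, built around an entirely hypothetical ComputePool.request() call, only illustrates the decoupling described above: the application asks for a quantity of "XPU standard computing power," and the pool decides which vendor's hardware actually backs the allocation.

```python
class ComputePool:
    """Hypothetical client for a pooled intelligent computing center: callers
    request standard computing power, never a specific chip type."""

    def __init__(self, devices: list[dict]):
        self._free = list(devices)

    def request(self, tflops: float, memory_gb: float) -> dict:
        """Allocate any free device that can supply the requested standard
        computing power; the caller never sees the vendor or chip model."""
        for i, dev in enumerate(self._free):
            if dev["tflops"] >= tflops and dev["memory_gb"] >= memory_gb:
                return self._free.pop(i)
        raise RuntimeError("no XPU with sufficient standard computing power")


# The same application code runs whether vendor A's or vendor B's hardware
# happens to back the allocation.
pool = ComputePool([
    {"device_id": "node1-acc0", "vendor": "vendor-a", "tflops": 312.0, "memory_gb": 64.0},
    {"device_id": "node2-acc0", "vendor": "vendor-b", "tflops": 280.0, "memory_gb": 64.0},
])
allocation = pool.request(tflops=200.0, memory_gb=48.0)
```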

The published technical document uses unified standards to break down barriers between heterogeneous computing resources, activates the aggregated value of distributed computing power through the pooling concept, and promotes the shift from siloed domestic smart computing "small ecosystems" to open, collaborative "large ecosystems." It provides a clear "technical blueprint" for the standardized construction of smart computing center computing power pooling, accelerating the build-out of the national integrated computing network and leading the intelligent computing industry toward a new stage of high-quality development.

