According to a research report, Hygon Information's opening of its CPU interconnect bus is expected to address the current fragmentation of the domestic computing chip ecosystem and could accelerate the unification of the intelligent computing ecosystem. The Shuguang AI super cluster system, built on an open architecture, breaks down "technology and ecosystem barriers" and is positioned to fully leverage cluster performance advantages and accelerate the adoption of domestic computing chips in training scenarios.
Recommended stocks include: Cambricon-U (688256.SH), Hygon Information (688041.SH), SMIC (688981.SH), GigaDevice (603986.SH), and Centec Communications-U (688702.SH). Related stocks include: VeriSilicon (688521.SH).
**Hygon Information Opens CPU Interconnect Bus, Accelerating Intelligent Computing Ecosystem Unification**
According to industry sources, on September 13, 2025, Hygon Information opened its CPU interconnect bus, including direct-connect IP, communication protocols, and customized instruction sets, to full-stack industry partners. This opening is expected to resolve the current problems of inconsistent technical routes and fragmented system ecosystems among domestic computing chips, improve compute scheduling between CPUs and accelerator cards, and fully unleash chip computing performance. The move also standardizes interfaces across the upstream and downstream supply chain, accelerating the construction of domestic computing clusters.
**Super Nodes Break Through Single-Card Computing Bottlenecks; Shuguang AI Super Cluster System Breaks Down "Technology and Ecosystem Barriers"**
According to official sources, on September 5, 2025, Shuguang released the Shuguang AI Super Cluster System, China's first super node built on an open AI computing architecture. The system delivers high performance: a single cabinet houses 96 GPU cards, reaches computing power in the hundreds of PFlops, comparable to NVIDIA's NVL72 (180 PFlops), and supports expansion to ultra-large clusters of up to a million cards.
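As a rough back-of-the-envelope illustration of the cabinet-level figures above (assuming the quoted ~180 PFlops applies to a full 96-card cabinet; the per-card number below is inferred, not a disclosed specification), the implied per-card compute is:

```latex
% Back-of-the-envelope estimate, assuming roughly 180 PFlops per 96-card cabinet;
% the resulting per-card figure is an inference, not an official specification.
\[
\frac{180~\text{PFlops}}{96~\text{cards}} \approx 1.9~\text{PFlops per card}
\]
```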
The system also demonstrates high computational efficiency: large-model training and inference performance on a thousand-card cluster reaches 2.3 times the industry mainstream level, and coordinated computing, storage, and data transmission improve GPU computational efficiency by 55%. Reliability has been validated through cluster testing with more than 30 days of continuous stable operation, along with automatic fault analysis across millions of components and second-level fault isolation.
The system is fully open: its hardware supports AI acceleration chips from multiple vendors, its software is compatible with mainstream AI computing ecosystems, and multiple technical capabilities are opened up and shared.
The analysis suggests that domestic computing super nodes based on open architectures are positioned to unify the domestic computing chip ecosystem, fully leverage cluster performance advantages, and accelerate the adoption of domestic computing chips in training scenarios.
**Risk Warning:** Progress in adapting Hygon interconnect bus protocols may fall short of expectations.