Since 2025, the generative AI race has gradually shifted from competing on technology to competing on applications. Yet more than half a year in, although numerous application scenarios have emerged across industries, the "iPhone moment" for enterprise-level AI applications still seems distant. The reasons are twofold: enterprises lack sufficient internal data for training and fine-tuning large models, and the industry still has a long way to go in computing power utilization and in integrating heterogeneous, diverse computing power.
**Rise of the Intelligent Computing Industry**
Since the advent of ChatGPT, generative AI has become a global focus of the tech industry, and intelligent computing power, the underlying support for generative AI, has entered an era of rapid growth. According to data from the Ministry of Industry and Information Technology, as of Q1 2025 China's intelligent computing power had reached 748 EFLOPS, accounting for 35% of overall computing power and making intelligent computing the core driver of computing power growth.
IDC is equally optimistic about the development of China's intelligent computing industry. IDC data shows that China's intelligent computing power scale will reach 1,037.3 EFLOPS in 2025 and is expected to reach 2,781.9 EFLOPS by 2028, while general computing power will reach 85.8 EFLOPS in 2025 and is expected to reach 140.1 EFLOPS by 2028. IDC China Vice President Zhou Zhengang has noted that, judging from the growth trend, China's intelligent computing power is expected to post a five-year compound annual growth rate of 46.2% over 2023-2028, versus 18.8% for general computing power. By 2028, China's intelligent computing power scale is expected to reach 2,781.9 EFLOPS, nearly four times the 2024 level.
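For readers who want to sanity-check figures like these, the sketch below applies the standard CAGR formula to the quoted endpoints. It is illustrative only: IDC's 46.2% figure covers 2023-2028 from a 2023 baseline not cited in this article, so the snippet instead computes the growth rate implied by the 2025 and 2028 projections.

```python
# Minimal sketch, for illustration only: the standard compound annual growth
# rate (CAGR) formula applied to the IDC projections quoted above. IDC's 46.2%
# figure spans 2023-2028 from a 2023 baseline not cited here, so this snippet
# uses the 2025 and 2028 intelligent computing projections instead.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` annual periods."""
    return (end_value / start_value) ** (1 / years) - 1

# Growth implied by the 2025 -> 2028 projections (three annual periods).
implied = cagr(1037.3, 2781.9, 3)
print(f"Implied 2025-2028 CAGR: {implied:.1%}")  # roughly 39% per year
```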
The data make clear that the vast majority of incremental computing power demand will come from intelligent computing. Server shipments tell the same story: intelligent computing has become the mainstream direction of the computing power industry. According to IDC, the global artificial intelligence server market reached $125.1 billion in 2024, is expected to grow to $158.7 billion in 2025, and could reach $222.7 billion by 2028, with generative AI servers' share rising from 29.6% in 2025 to 37.7% in 2028.
This massive demand primarily stems from the explosion of generative AI technology and applications. In the first half of 2024 alone, China's GenAI IaaS market grew 203.6% year-over-year, reaching a market size of 5.2 billion yuan, accounting for 35.6% of the overall intelligent computing services market.
As generative AI technology matures, the industry is gradually shifting from competing on model technology to competing on AI applications. Under this trend, the evolution of large model technology has likewise shifted from pre-training toward post-training and inference.
Huang Shan, Strategic Management Director of Lenovo Group China's Infrastructure Services Group, stated that the industry's computing power demand has shifted from being dominated by pre-training to being dominated by post-training and inference. At the application level, AI has begun to intelligently transform enterprise processes. Especially since DeepSeek's emergence, vertical applications in fields such as healthcare, life sciences, and transportation have continued to emerge, raising application-side requirements for post-training and inference.
"For the computing power industry, significant changes have occurred from the application layer to the computing power layer," Huang Shan further pointed out. "During this transformation, the interactions between application layer, model layer, and computing power layer software are substantial."
**Super-Intelligent Convergence is Key**
From a macro perspective, super-intelligent convergence not only fully leverages the advantages of various computing powers and achieves efficient resource utilization, but also provides strong support for solving complex scientific problems and driving industrial upgrading.
The biggest difference between supercomputing, intelligent computing, and traditional data centers lies in their application scenarios. Supercomputing is mainly applied in large-scale scientific computing, engineering simulation, weather forecasting, bioinformatics, and other fields. These applications need to process massive amounts of data and high-complexity calculations, requiring extremely high computational performance. Intelligent computing is mainly applied in artificial intelligence, machine learning, image processing, voice recognition, and other fields. These applications require rapid iteration and model optimization, with higher requirements for computational efficiency. Compared to supercomputing and intelligent computing, traditional data centers have broader applications, including cloud computing, big data analysis, and enterprise-level applications.
The convergence of supercomputing, intelligent computing, and general computing has become a key development direction for the entire computing power industry for a considerable period ahead.
Specifically, in terms of computing power, super-intelligent convergence can improve computing efficiency and reduce energy costs. Through super-intelligent convergence, computing resources can be flexibly scheduled according to the requirements of each task, avoiding waste. For example, in weather forecasting, supercomputing provides high-performance support for complex model simulation, intelligent computing handles data analysis and result optimization, and general computing processes day-to-day business data, with the three working together to raise efficiency.
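A minimal sketch of the routing idea behind such scheduling is shown below. The pool names and job fields are hypothetical assumptions, and real convergence platforms use far richer policies (queues, priorities, topology awareness, preemption); the point is only that each job is dispatched to the pool suited to its workload.

```python
# Illustrative sketch only: a toy dispatcher that routes jobs to a supercomputing,
# intelligent computing, or general computing pool based on their declared
# workload. Pool names and job fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    workload: str   # "simulation", "training", "inference", or "business"

POOLS = {
    "simulation": "hpc_pool",        # FP64-heavy numerical simulation
    "training": "ai_pool",           # accelerator-based model training
    "inference": "ai_pool",          # accelerator-based inference
    "business": "general_pool",      # routine business data processing
}

def dispatch(job: Job) -> str:
    """Pick a resource pool for a job; fall back to the general pool."""
    return POOLS.get(job.workload, "general_pool")

if __name__ == "__main__":
    for job in [Job("weather-model", "simulation"),
                Job("forecast-postprocess", "inference"),
                Job("daily-report", "business")]:
        print(job.name, "->", dispatch(job))
```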
In applications, super-intelligent convergence can promote scientific research and industrial innovation, providing powerful tools for solving complex scientific problems. For example, the National Supercomputing Center in Wuxi's super-intelligent convergence computing platform system provides full-stack intelligent application computing solutions for new drug development and enterprise-level intelligent enhancement systems.
Huang Shan also introduced some of Lenovo Group's experience and cases in super-intelligent convergence. He stated that in Lenovo Group's collaboration with Peking University, the two parties jointly built a high-performance computing platform. The platform is built on Lenovo Group's self-developed DeepTeng X8810 system, configured with the Intel Xeon Platinum processors used in Lenovo Group data centers, and supports large-scale data processing and large-scale scientific computing across disciplines.
"Taking life science research scenarios as an example, nuclear magnetic resonance imaging results that previously required 15 minutes to complete can now be completed in about 20 seconds with this platform's support," Huang Shan introduced.
Additionally, in collaboration with Geely Automotive, Lenovo Group and Intel jointly created an HPC cluster solution for Geely. This solution upgraded the "Xingrui Intelligent Computing HPC Cluster" through Lenovo Group's Wanquan Heterogeneous Intelligent Computing Platform HPC version and Intel's fifth-generation Xeon Scalable processors, improving R&D efficiency and shortening product iteration cycles.
"In simulation clusters with up to 5,000 CPUs, through the heterogeneous computing platform, integrating HPC and intelligent computing capabilities," Huang Shan further pointed out, "currently, this solution has implemented over 19 simulation applications."
**Heterogeneous Computing Integration Still Faces Challenges**
The convergence of supercomputing and intelligent computing first manifests as a deep reconstruction of hardware architecture. Traditional supercomputing centers on CPUs and focuses on double-precision floating-point (FP64) operations, while intelligent computing relies on GPU/TPU accelerators and focuses on half-precision (FP16) and integer (INT8) operations. The two differ fundamentally in hardware architecture and computing paradigm.
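The precision gap is easy to see numerically. The snippet below is a minimal NumPy illustration with made-up values: FP64 preserves a small increment that FP16 rounds away, which is why simulation workloads insist on double precision while many AI workloads tolerate half precision or INT8.

```python
# Minimal numeric illustration of the FP64-vs-FP16 gap described above.
# Values are illustrative only.

import numpy as np

base = np.float64(1.0)
tiny = np.float64(1e-4)

fp64_sum = base + tiny                            # kept in double precision
fp16_sum = np.float16(base) + np.float16(tiny)    # rounded in half precision

print(f"FP64: 1.0 + 1e-4 = {fp64_sum:.10f}")            # 1.0001000000
print(f"FP16: 1.0 + 1e-4 = {float(fp16_sum):.10f}")     # 1.0, increment lost
```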
This architectural difference creates bottlenecks in AI for Science scenarios. For example, protein structure prediction requires simultaneous processing of high-precision molecular simulation and data-driven model optimization. Additionally, supercomputing's "time complexity" is difficult to reconcile with intelligent computing's "space complexity." Convergence requires software-hardware collaborative innovation from chip design and storage networks to algorithm levels, such as solving computing power scheduling problems through dynamic heterogeneous resource pooling technology.
Therefore, super-intelligent convergence currently remains a relatively challenging problem.
Huang Shan pointed out that the convergence of supercomputing and intelligent computing currently faces difficulties, mainly because the two computing modes use different computing power scheduling mechanisms. A converged scheduling mechanism is therefore the first problem that must be solved to achieve super-intelligent convergence.
Additionally, Huang Shan indicated that algorithms are also a major constraint in achieving convergence development of supercomputing and intelligent computing. "Because supercomputing and intelligent computing algorithms are completely different, during computation, how to combine the calculation results from both sides and then perform convergence calculations is also one of the difficulties in achieving super-intelligent convergence," Huang Shan emphasized.
Facing the convergence challenges of supercomputing, intelligent computing, and general computing, platform-based solutions seem to be the optimal solution under current conditions. Taking Lenovo Group as an example, it has constructed the Wanquan Heterogeneous Intelligent Computing Platform covering general, scientific, and AI computing power through its "one horizontal, five vertical" strategic framework.
Huang Shan stated that the Wanquan Heterogeneous Intelligent Computing Platform is the overall anchor of Lenovo Group's AI solutions. Drawing on years of accumulation in intelligent computing and HPC hardware, plus industry know-how in software-hardware collaboration, it improves computing power utilization and optimizes software-hardware compatibility, supporting model applications above and integrating heterogeneous computing power below.
At the software level, Lenovo Group achieves seamless connection between HPC and AI frameworks through Wanquan Heterogeneous Intelligent Computing Platform 3.0. Its AI compilation optimizer adopts operator fusion and path optimization, reducing both training and inference costs by over 15%. Its expert-parallel communication algorithms target the communication bottlenecks of MoE architectures, reducing inference latency by a factor of three through computation-communication collaborative optimization, with network bandwidth utilization reaching 90%.
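Operator fusion itself is a general compiler technique, and the sketch below is a generic illustration rather than Lenovo's actual optimizer: running a chain of elementwise operations as separate passes materializes intermediate arrays, while the fused version reuses one buffer, which is the memory-traffic saving that fusion delivers when a compiler emits a single kernel for the whole chain.

```python
# Generic illustration of operator fusion, not Lenovo's actual optimizer:
# scale, bias, and ReLU as separate passes materialize intermediate arrays,
# while the "fused" version reuses a single working buffer with in-place ops.

import numpy as np

def unfused(x, w, b):
    t1 = x * w                    # pass 1: intermediate array
    t2 = t1 + b                   # pass 2: another intermediate
    return np.maximum(t2, 0.0)    # pass 3: ReLU

def fused(x, w, b):
    out = np.multiply(x, w)           # single working buffer
    np.add(out, b, out=out)           # bias added in place
    np.maximum(out, 0.0, out=out)     # ReLU applied in place
    return out

x = np.random.randn(1_000_000).astype(np.float32)
assert np.allclose(unfused(x, 2.0, 0.5), fused(x, 2.0, 0.5))
```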
At the hardware level, Lenovo Group continues to expand long-term cooperation with well-known domestic and international partners through the Wanquan Heterogeneous Intelligent Computing Platform. Data shows that Wanquan Heterogeneous Intelligent Computing Platform 3.0 is already compatible with most domestic chips and, once adapted, delivers a 15x speedup over traditional CPU clusters.
At the model level, the Wanquan Heterogeneous Intelligent Computing Platform has integrated mainstream open-source large models currently available in the market, including DeepSeek, and can provide corresponding adaptation and optimization for different architectural models such as MoE and LLM.
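To make the MoE adaptation point concrete, the toy sketch below shows the routing pattern at the heart of MoE models: a gate scores each token and only the top-k experts run, which is also why expert-parallel communication becomes the bottleneck referenced above. Shapes and names here are illustrative assumptions, not any particular model's or platform's implementation.

```python
# Toy sketch of Mixture-of-Experts (MoE) top-k routing. Illustrative only.

import numpy as np

def moe_layer(tokens, gate_w, experts, top_k=2):
    """tokens: (n, d); gate_w: (d, n_experts); experts: list of (d, d) matrices."""
    scores = tokens @ gate_w                          # (n, n_experts) gate logits
    top = np.argsort(scores, axis=1)[:, -top_k:]      # top-k expert indices per token
    out = np.zeros_like(tokens)
    for i, token in enumerate(tokens):
        picked = scores[i, top[i]]
        weights = np.exp(picked) / np.exp(picked).sum()   # softmax over chosen experts
        for w, e in zip(weights, top[i]):
            out[i] += w * (token @ experts[e])            # weighted sum of expert outputs
    return out

rng = np.random.default_rng(0)
d, n_experts = 8, 4
tokens = rng.standard_normal((3, d))
gate_w = rng.standard_normal((d, n_experts))
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
print(moe_layer(tokens, gate_w, experts).shape)   # (3, 8)
```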
As Huang Shan noted above, the Wanquan Heterogeneous Intelligent Computing Platform has become Lenovo Group's core anchor for enterprise-level AI implementation.
The push into heterogeneous computing platforms and integrated large-model capability platforms is not unique to Lenovo Group. Judging from the moves of mainstream software and hardware providers, combining platform-based software products with hardware is currently the most practical way to improve hardware usability. Along these lines, Digital China has launched its Shenzhou Wenxue platform, Inspur Information its EPAI platform, and QingCloud its AI intelligent computing platform.
Taken together, promoting the convergence of supercomputing, intelligent computing, and general computing through platform-based products and ecosystem construction has become an unstoppable trend. This convergence is not merely one technical path among many; it is an inevitable path of the AI era.
Through technical breakthroughs in hardware heterogeneity, software collaboration, and network ubiquity, as well as deep applications in research, industrial, medical, and other fields, super-intelligent convergence is reconstructing computing paradigms and driving productivity leaps. In the future, as endogenous intelligent computing systems mature, the boundaries between supercomputing and intelligent computing will completely blur, forming a new form of "super intelligent computing" and opening a new era of human cognition and innovation.