System Assembly Emerges as New Driver for AI Server Upgrades: Focus on Foxconn Industrial Internet, Lenovo and Four Key Beneficiary Stocks

Stock News
09/30

A recent research report noted that while process-node upgrades remain a driver of chip performance, advanced packaging has become a second lever for performance gains. With process-node progress slowing, enlarging an individual die is one way to add transistors and thus computing power. Die size, however, is bounded by lithography-tool design and mask area: the reticle limit is approximately 800-900 mm², and the single die of NVIDIA's H100 already sits near that limit.

NVIDIA first adopted dual-die co-packaging in the B200, integrating 208 billion transistors within a single package, more than double the H100's 80 billion. According to NVIDIA's technology roadmap, the Rubin Ultra will integrate 4 dies per package, delivering 100 PFLOPS of FP4 compute per GPU.

System assembly is set to become a new driver of AI server performance. While process-node upgrades and advanced packaging are sufficient for the performance needs of personal computers and smartphones, they may still lag the growth in AI computing demand and the pace at which AI server performance must advance. We believe system assembly is emerging as the next driver of performance improvement.

GPU count in AI servers has risen from the conventional 8 GPUs per server to 72 GPUs per cabinet, and is set to reach 144 GPUs in the 2027 Rubin Ultra NVL576 cabinet (each GPU package contains 4 GPU dies, for 576 GPU dies in total). The larger GPU count sharply raises cooling requirements and substantially increases system assembly difficulty; the capacity ramp of the GB200 NVL72, for example, has been constrained by system assembly complexity.
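The die counts behind the cabinet names can be sanity-checked with simple arithmetic. The sketch below uses only the figures cited in this article (GPU counts and dies per package); the configuration labels are illustrative, not official NVIDIA specifications.

```python
# Back-of-envelope check of the GPU and die counts cited in the article.
# Figures are the article's, not authoritative NVIDIA specs.

configs = {
    "Conventional 8-GPU server": {"gpus": 8,   "dies_per_package": 1},
    "GB200 NVL72 cabinet":       {"gpus": 72,  "dies_per_package": 2},
    "Rubin Ultra NVL576 cabinet": {"gpus": 144, "dies_per_package": 4},
}

for name, cfg in configs.items():
    total_dies = cfg["gpus"] * cfg["dies_per_package"]
    print(f"{name}: {cfg['gpus']} GPU packages x "
          f"{cfg['dies_per_package']} dies = {total_dies} GPU dies")
```

The last line reproduces the "NVL576" naming: 144 four-die packages yield 576 GPU dies per cabinet.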

Industry-leading companies hold technological advantages in system assembly and stand to benefit as it raises industry barriers and improves the competitive landscape.

On investment targets, coverage is maintained on the following AI server system assembly-related stocks:

**Foxconn Industrial Internet Co.,Ltd.**: In GB200 series product testing, Q2 showed marked optimization over Q1: system-level cabinet debugging time shortened significantly, and the introduction of automated assembly processes improved the overall production and delivery cadence. The company has expanded capacity at multiple facilities worldwide and deployed fully automated assembly lines, and it expects GB200 Q3 shipments to sustain strong growth. Orders come mainly from large North American cloud service providers, with sovereign-customer and brand-customer projects advancing in parallel; quarter-on-quarter improvement is expected to continue this year. Per-unit profit on the GB300 has the potential to exceed the GB200's, positioning it as a key profit driver for the company's AI server business next year.

**Hygon Information Technology Co.,Ltd.**: Through its merger with Sugon, the company is expected to build vertically integrated capabilities spanning CPUs, DCUs, and system assembly.

**LENOVO GROUP**: NVIDIA previously indicated that partners including Lenovo are expected to launch various servers based on Blackwell Ultra starting from the second half of 2025.

**Huaqin Technology Co.,Ltd.**: A core ODM supplier of AI servers for leading domestic internet companies, with full-stack shipments of switches, AI servers, and general-purpose servers, benefiting from downstream cloud vendors' expanding capital expenditure.

Disclaimer: Investing involves risk, and this article is not investment advice. Nothing above should be construed as an offer, recommendation, or invitation to buy or sell any financial product, nor should any related discussion, comment, or post by the author or other users. This article is for general reference only and does not take into account your personal investment objectives, financial situation, or needs. TTM assumes no responsibility for, and makes no guarantee of, the accuracy or completeness of the information; investors should conduct their own research and seek professional advice before investing.
