OCP Conference Focus: Manufacturing and Packaging Expanded Significantly; AI Chip Bottlenecks Shift to Downstream, Including Memory, Racks, Power Supply, and More

Deep News
2025/10/21

In 2026, the AI semiconductor industry is expected to experience another robust year of growth; however, the investment logic in the AI hardware sector is undergoing a profound shift. On October 20, a Morgan Stanley research report pointed out that for the past two years, the market focus has been primarily on upstream capacity bottlenecks, such as TSMC’s CoWoS packaging and advanced process technologies.

However, recent statements from NVIDIA and TSMC, along with signals released during the 2025 OCP Conference, indicate that this situation has changed. The manufacturing and packaging stages of chip production have undergone large-scale expansions and are no longer the core constraints on AI development. Morgan Stanley emphasizes that the true bottleneck is shifting downstream, focusing on supporting infrastructure such as data center space, power supply, liquid cooling, high-bandwidth memory (HBM), server racks, and optical modules.

The report suggests that this shift in bottlenecks means investment opportunities are spreading from upstream wafer foundries and packaging to a broader downstream supply chain. In the future, data centers unable to secure sufficient power and physical space may fall behind in the competition for AI computing power.

Previously, there were doubts about whether AI chips could be supplied in sufficient volume, with TSMC's CoWoS advanced packaging capacity viewed as the critical bottleneck. The latest industry developments, however, show marked improvement on this front. TSMC disclosed in its recent earnings call that "the demand for AI is even stronger than we imagined three months ago," and that the company is working to "close the supply-demand gap." Notably, TSMC also stated that the lead time for expanding CoWoS capacity is only six months, giving the supply side considerable flexibility.

Although the front-end production capacity for its advanced nodes, such as 4nm and 3nm, remains tight, AI semiconductors clearly have a higher priority than cryptocurrency ASICs or Android smartphone SoCs. NVIDIA CEO Jensen Huang also recently clarified that semiconductor capacity is no longer the limiting factor it once was. After a surge in demand a few years ago, the manufacturing and packaging stages of the entire supply chain have "significantly expanded," and the company is confident in meeting customer demand.

Overall, total demand continues to grow rapidly: the report forecasts global CoWoS demand of 1.154 million wafers in 2026, up 70% year-on-year. Even so, the supply side's ability to respond has improved markedly.

As the chip supply is no longer the biggest challenge, bottlenecks naturally shift downstream. NVIDIA pointed out that current constraints primarily come from the availability of data center space, power, and supporting infrastructure, which have much longer construction cycles than chip manufacturing.

Presentations at the OCP Conference also confirm this trend. As AI clusters approach the 100,000-GPU scale, the design philosophy of the entire data center is being reshaped:

Power and Cooling: Deploying large-scale GPU clusters poses substantial power-consumption and cooling challenges. At the OCP Conference, liquid cooling was treated as the default configuration for new AI racks, and demand for power-delivery solutions such as 800V HVDC (high-voltage direct current) is rising.

This has benefited companies like Aspeed, whose BMC (Baseboard Management Controller) chips are no longer confined to servers and have expanded into other equipment, including cooling systems.

Storage and Memory: AI workloads place extreme demands on data storage capacity and access speed. Meta stated that its data centers will prioritize QLC NAND flash for cost reasons, while Seagate noted that HDDs (hard disk drives) will keep 95% of capacity online to serve large, remote data centers. More critically, demand for HBM (high-bandwidth memory) is surging: the report forecasts global HBM consumption of 26 billion GB in 2026, of which NVIDIA alone will account for 54%. This highly concentrated demand makes HBM supply a key variable for AI server shipments.

Racks and Networks: To facilitate large-scale deployment, the OCP has introduced standardized blueprints such as "AI Open Data Center" and "AI Open Cluster Design," covering racks, liquid cooling, power interfaces, and more. In networking, Alibaba stated that pluggable optics remain the preferred choice on total cost of ownership and flexibility, while technologies like LPO (Linear Drive Pluggable Optics) are also gaining attention. CPO/NPO (Co-Packaged/Near-Packaged Optics) is expected to materialize around 2028 as manufacturing processes mature.

Demand forecasts confirm that downstream infrastructure is set for explosive growth. Morgan Stanley analysts expect global cloud capital spending to rise 31% year-on-year in 2026 to $582 billion, far above the market consensus of 16% growth. If the share of that spending going to AI servers also rises, as the report assumes, capital spending on AI servers alone could grow about 70% year-on-year in 2026.
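The arithmetic behind that implied ~70% figure can be sketched as follows. The report states only the 2026 total ($582 billion) and the 31% total growth rate; the AI-server share values below are hypothetical placeholders chosen to illustrate how a rising share compounds with total growth.

```python
# Back out the implied AI-server capex growth from total cloud capex growth
# plus an assumed increase in the AI-server share of spending.
# Share figures are illustrative assumptions, not from the report.
TOTAL_CAPEX_2026 = 582.0        # $bn, Morgan Stanley forecast for 2026
TOTAL_GROWTH = 0.31             # 31% year-on-year total capex growth

total_capex_2025 = TOTAL_CAPEX_2026 / (1 + TOTAL_GROWTH)  # ~$444bn implied

ai_share_2025 = 0.50            # hypothetical AI-server share of 2025 capex
ai_share_2026 = 0.65            # hypothetical (higher) share in 2026

ai_capex_2025 = total_capex_2025 * ai_share_2025
ai_capex_2026 = TOTAL_CAPEX_2026 * ai_share_2026
ai_growth = ai_capex_2026 / ai_capex_2025 - 1

print(f"Implied AI-server capex growth: {ai_growth:.0%}")  # ~70%
```

The point of the sketch is that a modest shift in mix (here, 50% to 65%) layered on 31% total growth is enough to produce roughly 70% growth in the AI-server slice.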

From the demand side, major AI giants continue to stockpile aggressively. The report breaks down the AI chip demand for 2026:

CoWoS Capacity Consumption: NVIDIA is expected to take 59% of the market, followed by Broadcom (18%), AMD (9%), and AWS (6%).

AI Computing Wafer Consumption: NVIDIA leads with a 55% share, followed by Google (22%), AMD (6%), and AWS (6%).

In summary, the signals sent from the OCP Conference, combined with industry data, clearly indicate a new direction for AI hardware investments. With the capacity bottlenecks in chip manufacturing and packaging gradually alleviated, the market focus will inevitably shift to the infrastructures that support large-scale AI computing. The report suggests that for investors, this means broadening their perspective from individual chip companies to the entire data center ecosystem, seeking "picks and shovels" with core competencies in downstream areas such as power, cooling, storage, memory, and networking.

Disclaimer: Investing involves risk, and this article does not constitute investment advice. Nothing above should be taken as an offer, recommendation, or solicitation to buy or sell any financial product, nor should any related discussion, comments, or posts by the author or other users. This article is for general reference only and does not take into account your personal investment objectives, financial situation, or needs. TTM assumes no responsibility or guarantee for the accuracy or completeness of the information; investors should conduct their own research and seek professional advice before investing.
