"AI Loop" Goes Viral During the Holidays! A Comprehensive Guide to the North American Data Center Supply Chain

Deep News
2025/10/08

The artificial intelligence race is, at its core, a competition over physical infrastructure. Behind every seamless AI interaction on screen lie tens of thousands of servers operating at high speed within data centers, all supported by a trillion-dollar physical industry expanding at an astounding pace—the data center sector.

According to Bank of America (BofA) estimates, global data center capital expenditure surpassed $400 billion in 2024 and is projected to reach $506 billion in 2025, comprising $418 billion in IT equipment spending and $88 billion in infrastructure expenditure. Driven by AI demand, this market is expected to expand at a remarkable 23% compound annual growth rate (CAGR) from 2024 to 2028, ultimately forming a massive market exceeding $900 billion by 2028.
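As a quick sanity check on these figures (a sketch, not BofA's actual model): compounding the 2024 base at the stated 23% CAGR does land just above $900 billion by 2028.

```python
# Sanity-check the BofA trajectory: $400B of 2024 capex compounded at a
# 23% CAGR for four years should exceed $900B by 2028.
base_2024 = 400e9  # global data center capex, 2024 (USD)
cagr = 0.23

projection = {year: base_2024 * (1 + cagr) ** (year - 2024)
              for year in range(2024, 2029)}
for year, capex in projection.items():
    print(f"{year}: ${capex / 1e9:,.0f}B")
```

Note that pure compounding gives about $492B for 2025, slightly below BofA's $506B estimate, so the 23% figure is evidently an average over the period rather than a uniform year-by-year rate.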

So where does the real value chain distribution lie in this unprecedented construction boom? Who will emerge as the biggest beneficiaries?

This article provides an in-depth analysis of the complete data center market landscape ignited by AI, reveals the core logic of the technological transformation, and systematically dissects the complex supply chain—presenting a comprehensive panorama of the North American data center supply chain and identifying the true "shovel sellers" in this gold rush.

**I. $500 Billion Market Overview**

Data center market growth is no longer driven by traditional enterprise self-built facilities. Since 2017, when cloud service providers and colocation companies first surpassed enterprise-owned data centers in total capacity, virtually all new capacity has come from two types of players: "Hyperscale" cloud service providers represented by Amazon AWS and Microsoft Azure, and "Colocation" companies that provide leasing services to them or other clients.

From a global capacity distribution perspective, the Americas region accounts for over half of global power capacity. Among these, Northern Virginia on the US East Coast, with nearly 15% of global hyperscale data center capacity, has become the undisputed largest single concentration globally. Following closely is Beijing, China, accounting for approximately 7%.

What drives continuous capital influx is the clear and attractive return model of data centers as high-value infrastructure assets. Taking a typical wholesale colocation new construction project as an example, the unit investment economics are as follows:

**Initial Investment:** Building a 1-megawatt (MW) capacity data center requires approximately $2 million in upfront costs for land and power access, while the "powered shell" including buildings, mechanical and electrical systems, and cooling facilities costs about $11 million. The total investment per megawatt is approximately $13 million.

**Revenue and Profitability:** Each megawatt can generate $2-3 million in annual rental income. After deducting operating costs including electricity (US industrial electricity averages about $0.08 per kWh), labor (approximately 2 full-time employees per megawatt), and property taxes (about 1% of property value), EBITDA margins typically reach a robust 40-50%.

**Investment Returns:** In a typical 20-year holding period model combined with project financing (assuming 46% loan-to-value ratio, 6% debt rate, 10% equity cost), the project's internal rate of return (IRR) can reach 11.0%. This is extremely attractive for infrastructure investors seeking long-term stable cash flows.
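The unit economics above can be sketched in a few lines. This is a simplified, unlevered model using midpoints I have assumed ($2.5M annual rent, 45% EBITDA margin); the article's 11.0% IRR additionally reflects the 46% LTV debt at 6% and the full project-finance structure, which this sketch omits.

```python
# Simplified 1 MW wholesale colocation model (unlevered, no terminal value).
capex = 13.0e6        # ~$2M land/power + ~$11M powered shell, per MW
annual_rent = 2.5e6   # assumed midpoint of the $2-3M/MW range
ebitda_margin = 0.45  # assumed midpoint of the 40-50% range
ebitda = annual_rent * ebitda_margin

def irr(cashflows, lo=-0.5, hi=1.0, tol=1e-7):
    """Internal rate of return via bisection on NPV(r) = 0."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # NPV still positive: the discount rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-capex] + [ebitda] * 20  # 20-year hold
print(f"EBITDA ${ebitda/1e6:.2f}M/yr, unlevered 20-yr IRR {irr(flows):.1%}")
```

The unlevered result lands near 6%; borrowing at a debt cost below the project return is what lifts the equity IRR toward the quoted 11%.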

This high-certainty business model forms the financial foundation for the entire data center industry expansion.

**II. Technological Singularity: The "Density Revolution" from Chips to Racks**

All infrastructure transformations within data centers originate from AI chips. The core evolution logic can be precisely summarized as a "density revolution."

The revolution's source is the exponential surge in single-chip power consumption. From NVIDIA's first-generation Volta architecture to today's Blackwell architecture, single GPU power consumption has increased fourfold in just a few years. The underlying physical law is simple yet brutal: integrating more transistors on chips and running them at higher clock frequencies inevitably leads to linear power consumption growth.

The direct chain reaction is the sharp rise in server rack power density. In AI training clusters, network latency is a critical performance bottleneck. To minimize data transmission delays between GPUs, engineers' solution is to integrate as many GPUs as possible within the same server rack, communicating through high-speed internal interconnects (such as NVLink). This architectural optimization inevitably results in explosive growth in rack power density. In 2021, average data center rack density was less than 10 kilowatts (kW); today, a standard NVIDIA Hopper (H200) rack consumes 35kW, while the latest Blackwell (B200) rack reaches 120kW. According to NVIDIA's published roadmap, its Rubin Ultra platform planned for late 2027 will achieve unprecedented single-rack power consumption of 600kW. AMD's MI350 and future MI400, along with Intel's Gaudi series, follow the same trajectory.
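To make the footprint implications concrete, here is a small illustration of how many racks a fixed IT load requires at the densities cited above. The 60 MW cluster size is my own assumption for scale, not a figure from the article.

```python
import math

# Racks needed to host a hypothetical 60 MW IT load at the cited densities.
densities_kw = {
    "2021 average": 10,
    "Hopper (H200)": 35,
    "Blackwell (B200)": 120,
    "Rubin Ultra (planned)": 600,
}
it_load_kw = 60_000  # assumed 60 MW cluster, for illustration only

for platform, kw_per_rack in densities_kw.items():
    racks = math.ceil(it_load_kw / kw_per_rack)
    print(f"{platform:>22}: {kw_per_rack:>4} kW/rack -> {racks:>5,} racks")
```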

Meanwhile, global existing data center infrastructure construction severely lags behind. According to Uptime Institute's 2024 survey, only 5% of existing data centers worldwide have average rack densities exceeding 30kW. This means 95% of data centers cannot even support NVIDIA's previous-generation Hopper chips, let alone higher-power Blackwell. Therefore, AI computing deployment must rely on massive upgrades to existing data centers and extensive new construction.

Notably, this isn't exclusive to GPUs. Cloud giants including Google (TPU), Microsoft (Maia 100), and Amazon (Trainium) have all announced that their latest-generation custom ASIC chips, despite higher efficiency for specific tasks, must adopt liquid cooling to pursue ultimate performance. This confirms from another angle that cooling challenges from high-density computing may be an irreversible common trend across the entire AI hardware industry.

**III. Infrastructure Transformation: A Revolution of "Water and Power"**

The "density revolution" ignited by chips is launching a bottom-up assault on data center infrastructure, with core battlegrounds concentrated in two areas: cooling systems (water) and power systems (electricity).

**(1) First Battlefield: Cooling—Migration from Air to Liquid**

Traditional data centers have long relied on air cooling. However, even the most optimized air cooling systems top out at roughly 60-70kW per rack. Facing AI racks that consume hundreds of kilowatts, air cooling is powerless. Liquid cooling, a technology that first appeared in the mainframe era, is returning to the mainstream with irresistible momentum.

Among the various liquid cooling technology routes, the current industry mainstream is "Direct-to-Chip" (D2C). This approach mounts a metal "cold plate" containing microchannels directly onto GPUs, CPUs, and other major heat-generating chips; coolant (typically a water and ethylene glycol mixture) flowing through the plate carries the heat away. The system's core equipment is the "Coolant Distribution Unit" (CDU), which drives coolant circulation and exchanges heat between the servers' "secondary loop" and the data center's "primary loop."
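A rough sense of the flow rates a CDU must drive follows from the heat-balance relation Q = ṁ·c_p·ΔT. The coolant properties and the 10 K temperature rise below are generic engineering assumptions, not vendor specifications.

```python
# Back-of-envelope D2C coolant flow for one high-density rack.
rack_heat_kw = 120   # Blackwell-class rack, per the section above
c_p = 3.6            # kJ/(kg*K), assumed for a water/ethylene-glycol mix
delta_t = 10.0       # K, assumed coolant temperature rise across the rack
density = 1.04       # kg/L, assumed mixture density

m_dot = rack_heat_kw / (c_p * delta_t)  # mass flow, kg/s  (Q = m*c_p*dT)
lpm = m_dot / density * 60              # volumetric flow, liters per minute
print(f"~{m_dot:.2f} kg/s, i.e. ~{lpm:.0f} L/min for a single 120 kW rack")
```

Around 190 L/min for a single rack illustrates why CDU pumping capacity, and not just cold-plate design, is a sizing constraint.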

The CDU market, though only about $1.2 billion in 2024, is experiencing explosive growth, and over 30 suppliers currently compete in it. However, data center operators' extreme pursuit of "uptime" makes them extremely conservative, preferring suppliers with mature technology and reliable service. This gives established manufacturers with mature product lines and global service networks natural moats. Vertiv (strengthened through its 2023 CoolTera acquisition), Schneider Electric (positioned through its February 2025 Motivair acquisition), Delta Electronics, and nVent are considered the first-tier leaders in this field.

**(2) Second Battlefield: Power—Architectural Revolution from AC to High-Voltage DC**

Traditional data center power chains are lengthy and lossy: medium/high-voltage AC from the grid is stepped down by transformers, distributed through switchgear, and passed through uninterruptible power supplies (UPS) whose "AC-DC-AC" double conversion provides backup; it then travels via power distribution units (PDUs) or busways to the racks, where power supply units (PSUs) inside the servers perform a final "AC-DC" conversion.

As AI rack total power approaches 100kW and beyond, traditional low-voltage AC power architecture drawbacks become apparent: massive current requires extremely thick copper cables, which are not only costly but also occupy precious rack space and affect cooling. Consequently, an architectural revolution toward "High Voltage DC" has begun.

**400V DC solution:** first proposed by Microsoft, Meta, and others within the Open Compute Project (OCP).

**800V DC solution:** announced by NVIDIA to support future megawatt-scale server racks, with deployment planned for 2027.

High-voltage DC's core advantage lies in physics (power = voltage × current): when transmitting the same power, increasing voltage by one order of magnitude reduces current by one order of magnitude. This means using smaller diameter, lower-cost cables for power transmission, dramatically reducing expensive and bulky copper usage within racks. According to Schneider Electric calculations, 400V systems can reduce copper wire weight by 52% compared to traditional 208V AC systems.
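The current reduction is easy to quantify. Treating each bus as ideal DC for comparison (in practice 208V is three-phase AC, so real figures differ), a 100 kW rack draws:

```python
# I = P / V: current needed to deliver ~100 kW at each bus voltage
# (idealized DC comparison; power factor and conversion losses ignored).
rack_power_w = 100_000
currents = {v: rack_power_w / v for v in (208, 400, 800)}

for bus_v, amps in currents.items():
    print(f"{bus_v:>3} V bus: {amps:,.0f} A")
# Resistive cable loss scales as I^2 * R, so halving current cuts loss 4x.
```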

This transformation will profoundly reshape power systems:

**UPS Simplification:** In DC architecture, UPS no longer needs "inverter" components to convert battery DC to AC, theoretically reducing costs by 10-20% (though initially offset by high-voltage safety equipment costs).

**Power Outside Servers:** PSUs that previously sat inside servers and occupied significant space will move outside the rack into independent "power sidecar" cabinets, freeing more space for computing units.

NVIDIA has clearly stated its 800V DC architecture will be developed in collaboration with industry leaders like Vertiv and Eaton, again confirming incumbents' central position in industry standard transformations.

**IV. Core Supply Chain Component Analysis: Who Are the "Shovel Sellers" in the Gold Rush?**

AI's rise is significantly increasing data center unit construction costs. A traditional data center's total all-in cost is approximately $39 million per megawatt, while a next-generation AI architecture data center (assuming chip-level liquid cooling and high-voltage DC) will see costs jump 33% to $52 million per megawatt. The increase mainly comes from more expensive AI servers, but infrastructure upgrades also contribute significant cost increments.
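The step-up is simple arithmetic:

```python
# Per-MW all-in cost: traditional vs. next-generation AI architecture.
traditional = 39e6   # USD per MW, traditional data center
ai_next_gen = 52e6   # USD per MW, liquid-cooled / high-voltage DC design
increase = ai_next_gen / traditional - 1
print(f"${traditional/1e6:.0f}M -> ${ai_next_gen/1e6:.0f}M: +{increase:.0%}")
```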

Along this massive and sophisticated supply chain, various segments' "shovel sellers" are sharing in the era's dividends:

**Thermal Systems:** This is approximately a $10 billion market (2024), with Vertiv recognized as the market share leader. Core products include chillers, cooling towers, and computer room air handlers (CRAH). Traditional HVAC giants Johnson Controls (through Silent-Aire acquisition), Carrier, and Trane are also important participants.

**Electrical Systems:** This is approximately an $18 billion market (2024), with Schneider Electric holding the leading position. Its product line covers uninterruptible power supplies (UPS, new-installation market about $7 billion), switchgear (about $5.0-5.5 billion), and busway and other distribution equipment (about $4.2-4.7 billion). Eaton, ABB, and Siemens are also core players in this field.

**Backup Power:** Diesel generators are the last line of defense for data centers' highest reliability tiers. The generator equipment market alone reached approximately $7.2 billion in 2024, with Cummins as the global leader.

**IT Equipment:** This represents the largest share of data center investment. In 2024, the global server market was approximately $280 billion, with AI servers already accounting for half by value. The network equipment market is approximately $36 billion, mainly dominated by Cisco and Arista.

**Construction & Services:** Transforming data centers from blueprints to reality requires professional engineering design and construction. Engineering design (4.5-6.5% of infrastructure costs) represents about a $4 billion market, with major players including Jacobs Solutions and Fluor. Construction markets are even larger (approximately $65-80 billion, including substantial material and equipment pass-through costs), with participants including international construction giants like Balfour Beatty and Skanska.

**Conclusion**

The "AI loop" and amazing generative capabilities we see on screens represent not only AI technological breakthroughs but also a physical world infrastructure competition involving concrete, copper cables, and coolant.

A striking fact: within the coming months, global spending on data center construction is expected, for the first time in history, to exceed total spending on the construction of all general office buildings.

In this unprecedented gold rush ignited by AI, those supply chain giants mastering core cooling and power technologies, capable of "cooling down" and "feeding power" to massive computing capacity, will undoubtedly become the silent but true winners of this era.

Disclaimer: Investment carries risk; this article is not investment advice. The content above should not be regarded as an offer, recommendation, or invitation to buy or sell any financial product, nor should any related discussions, comments, or posts by the author or other users. This article is for general reference only and does not take into account your personal investment objectives, financial situation, or needs. TTM assumes no responsibility or guarantee for the accuracy or completeness of the information; investors should conduct their own research and seek professional advice before investing.
