Elon Musk recently stated that Taiwan Semiconductor Manufacturing Company's (TSMC's) concerns about a chip oversupply are "correct." He predicted that the limiting factor for the AI industry will shift from chip manufacturing to "getting the chips to run," with the core bottlenecks being power supply, transformer provisioning, and the deployment of cooling systems.
On January 6, Musk engaged in an in-depth discussion about the future of AI with Singularity University Executive Chairman Peter Diamandis and Link Ventures founder David Blundin. He pointed out that while chip production capacity is growing exponentially, the supporting energy infrastructure required to operate them is only expanding linearly. When these two curves intersect, a vast number of high-performance AI chips will be unable to be utilized due to a lack of accompanying power conversion equipment and cooling systems.
This assessment highlights how severely the demands of AI infrastructure buildout are currently being underestimated. For investors, it signals that the focus of the AI computing race is shifting from chip procurement to the capacity to build out energy infrastructure.
In the conversation, Musk further detailed the specific nature of the power bottleneck in AI infrastructure. He emphasized that deploying AI chips is far from as simple as "delivering GPUs to a power plant"; it requires simultaneously solving three core problems: gigawatt-level power supply, high-voltage electricity conversion, and efficient heat dissipation systems.
Musk specifically noted that the entire data center industry is undergoing a critical transition from air cooling to liquid cooling, and warned that this process carries significant risks.
Musk used the example of xAI's "Giga Pod 2" project in Memphis to illustrate the practical challenges: although the site is located adjacent to multiple 300-kilovolt high-voltage lines, completing the connection will still take approximately a year. To meet the deadline for bringing a 1-gigawatt training cluster online by mid-January 2026, the team had to temporarily assemble multiple gas turbines ranging from 10 to 50 megawatts as an interim power source, relying heavily on Megapack battery arrays for power leveling.
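The scale of such an interim setup can be sketched with back-of-the-envelope arithmetic. The figures below are illustrative assumptions only, not xAI's actual configuration: the reserve margin, load-swing size, and per-unit battery rating are all hypothetical placeholders.

```python
import math

# Back-of-the-envelope sketch of an interim power setup for a ~1 GW cluster.
# All figures are illustrative assumptions, not xAI's actual configuration.

TARGET_LOAD_MW = 1000            # ~1 GW training cluster
TURBINE_SIZES_MW = [10, 25, 50]  # assumed mix of mobile gas turbine ratings

def turbines_needed(target_mw: float, unit_mw: float, reserve: float = 0.2) -> int:
    """Units required to cover the load plus an assumed reserve margin
    for outages and maintenance."""
    return math.ceil(target_mw * (1 + reserve) / unit_mw)

for size in TURBINE_SIZES_MW:
    print(f"{size} MW units: {turbines_needed(TARGET_LOAD_MW, size)} needed")

# Battery arrays smooth the gap between slow-ramping turbines and the sharp
# load swings of synchronized GPU training steps. The per-unit power rating
# below is an assumed round number, not an official spec.
SWING_MW = 300          # assumed instantaneous load swing
BATTERY_POWER_MW = 1.9  # assumed discharge rating per battery unit
battery_units = math.ceil(SWING_MW / BATTERY_POWER_MW)
print(f"Battery units to buffer a {SWING_MW} MW swing: {battery_units}")
```

Even with generous unit sizes, covering a gigawatt requires dozens of turbines and a large battery fleet, which is why Musk frames "getting the chips to run" as a logistics problem rather than a procurement problem.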
When asked if he agreed with TSMC's concerns about oversupply, Musk affirmed: "I'm not sure of their reasoning, but the conclusion is correct."
He pointed out that the key is to identify the "limiting factor at each point in time" and predicted that by the third quarter of 2026 (approximately 9-12 months from now), the core bottleneck will shift from chip manufacturing to the ability to "get the chips running."
This judgment stems from a mismatch between two growth trajectories: AI chip production capacity is expanding at an exponential rate, while the power infrastructure needed to run those chips can only grow linearly. Musk stressed: "If chip output is growing exponentially and power supply is only increasing slowly and linearly, the two curves will inevitably cross." This means chips could be manufactured far faster than they can actually be deployed, connected to power, and operated.
In response, David Blundin offered a differing perspective, suggesting that even if TSMC increased GPU production from 20 million to 40 million units, the market would find a way to solve the power supply issue. However, Musk maintained that any deficiency in power conversion or cooling systems would prevent chips from being truly activated, thereby fundamentally curbing actual demand and purchasing behavior.