Nvidia (NVDA) doesn't disclose its exact customer list. However, the company is the leading supplier of data center chips for artificial intelligence (AI) workloads, and we know Amazon (AMZN), Microsoft (MSFT), Alphabet (GOOG) (GOOGL), Meta Platforms, and Oracle are some of the biggest buyers of that hardware based on their public filings.
Those data center chips are called graphics processing units (GPUs), and the amount of money Nvidia's customers are spending on them is eye-popping. Read on.
There are two primary AI workloads: training, which involves feeding truckloads of data into AI models to make them "smarter," and inference, which is the process by which trained models apply what they've learned to formulate responses or make predictions. Both workloads run in enormous data centers filled with thousands of GPUs, facilities that cost billions of dollars to build.
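For readers who want a concrete picture of the distinction, here's a minimal toy sketch in Python (a deliberately tiny one-parameter model, nothing Nvidia-specific): training repeatedly adjusts a model's parameters against data, while inference reuses the frozen parameters to make predictions.

```python
# Toy illustration of the two AI workloads on a one-parameter model.
# (Hypothetical example; real models have billions of parameters
# and run across thousands of GPUs.)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs, y = 2x

# Training: repeatedly nudge the weight to reduce prediction error.
w = 0.0
for _ in range(100):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # gradient-descent step

# Inference: the learned weight is frozen; we only compute predictions.
def predict(x: float) -> float:
    return w * x

print(f"learned weight: {w:.3f}")               # ~2.0
print(f"prediction for input 5: {predict(5.0):.3f}")  # ~10.0
```

Training is the expensive, one-time phase; inference is cheaper per query but runs every time a user asks a question, which is why both workloads demand so much hardware at scale.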
Companies like Amazon, Microsoft, Alphabet, and Meta have the financial resources to build AI infrastructure to develop their own AI models. Amazon, Microsoft, Alphabet, and Oracle also build AI data centers and rent the computing power to smaller developers, which has become a very lucrative business because demand for capacity continues to far exceed supply.
Here's how much money the top AI companies are spending on AI data centers overall at the moment, based on their public filings:
That list doesn't include companies like OpenAI, Anthropic, and Elon Musk's xAI, which are privately held, so they don't openly disclose as much information.
Not all of that money will flow to Nvidia, but each of the above companies has publicly acknowledged its commercial relationship with the chip giant. Nvidia's GPUs and networking equipment account for a large chunk of AI data center spending, but other major costs include land acquisition, construction, power infrastructure, cooling systems, and staffing.
Competition is also a factor -- Oracle, Meta, and Microsoft have purchased GPUs from Advanced Micro Devices as well, and some hyperscalers like Alphabet are working with Broadcom to design their own chips.
Nvidia's Hopper GPU architecture formed the foundation of the most powerful AI chips in 2023 and for most of 2024, including the popular H100. But in the two years since H100 sales ramped up, Nvidia has launched two new architectures: Blackwell and Blackwell Ultra. The latter platform can deliver a staggering 50 times more performance than Hopper in certain GPU configurations.
Blackwell Ultra GPUs like the GB300 are designed for the current crop of AI "reasoning" models, which spend more time thinking in the background before generating responses compared to traditional large language models (LLMs), leading to more accurate outputs. Nvidia CEO Jensen Huang says some reasoning models require between 100 times and 1,000 times more computing power than their predecessors, which should support growing demand for GPUs and other hardware for years to come.
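For a rough sense of where multiples like that can come from, here's a back-of-the-envelope sketch. It assumes the common rule of thumb that inference compute scales with roughly 2 × parameters × tokens generated; the model size and token counts below are hypothetical, chosen only to illustrate the mechanism:

```python
# Rough rule of thumb: inference FLOPs ~ 2 * parameters * tokens generated.
# A reasoning model that "thinks" through many more intermediate tokens
# before answering needs proportionally more compute per query.
# (Hypothetical numbers for illustration only.)

params = 70e9                  # assumed 70B-parameter model
flops_per_token = 2 * params

standard_tokens = 500          # direct answer from a traditional LLM
reasoning_tokens = 50_000      # assumed long hidden chain of thought

standard = flops_per_token * standard_tokens
reasoning = flops_per_token * reasoning_tokens
print(f"compute multiple: {reasoning / standard:.0f}x")  # 100x per query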
Nvidia plans to launch another new architecture called Rubin next year, which could be 3.3 times faster than Blackwell Ultra. Compounded with Blackwell Ultra's gains over Hopper, that would translate to a performance improvement of 165 times over Hopper.
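That 165-times figure is simply the two quoted multiples compounded, as this quick sanity check shows (the inputs are Nvidia's own configuration-specific figures, not independent benchmarks):

```python
# Compounding the article's quoted generational performance multiples.
blackwell_ultra_vs_hopper = 50    # Blackwell Ultra vs. Hopper, per Nvidia
rubin_vs_blackwell_ultra = 3.3    # projected Rubin vs. Blackwell Ultra

rubin_vs_hopper = blackwell_ultra_vs_hopper * rubin_vs_blackwell_ultra
print(f"Rubin vs. Hopper: {rubin_vs_hopper:.0f}x")  # 165x
```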
The constant hunt for more computing power is the reason Huang predicts AI data center spending will top $1 trillion by calendar year 2028, which should support continued growth for Nvidia's business. As a result, Nvidia stock might be a great buy right now, even though it's trading near a record high.