Nvidia (NVDA 4.26%) doesn't disclose its exact customer list. However, the company is the leading supplier of data center chips for artificial intelligence (AI) workloads, and we know Amazon (AMZN -0.43%), Microsoft (MSFT 0.44%), Alphabet (GOOG 2.42%) (GOOGL 2.40%), Meta Platforms, and Oracle are some of the biggest buyers of that hardware based on their public filings.
Those data center chips are called graphics processing units (GPUs), and the amounts of money Nvidia's customers are spending on them are eye-popping. Read on.
There are two primary AI workloads: training, which involves feeding truckloads of data into AI models to make them "smarter," and inference, the process by which AI models use what they learned to formulate responses or make predictions. Both workloads are processed in enormous data centers that are filled with thousands of GPUs and cost billions of dollars to build.
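To make that distinction concrete, here's a minimal sketch in Python using a hypothetical toy linear model (nothing to do with Nvidia's actual software stack): training repeatedly adjusts a model's weights based on data, while inference is a single forward pass through weights that are already trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training: feed data through the model and nudge its weights ---
X = rng.normal(size=(1000, 3))                     # example inputs
true_w = np.array([2.0, -1.0, 0.5])                # pattern the model must learn
y = X @ true_w + rng.normal(scale=0.1, size=1000)  # noisy targets

w = np.zeros(3)   # model weights, learned from the data
lr = 0.1          # learning rate
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w -= lr * grad                         # update step: the model gets "smarter"

# --- Inference: apply the frozen, trained model to new data ---
x_new = np.array([1.0, 2.0, -1.0])
print(x_new @ w)  # a single forward pass, no more weight updates
```

Real models run these same two loops at a vastly larger scale, which is why both workloads get spread across thousands of GPUs.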
Companies like Amazon, Microsoft, Alphabet, and Meta have the financial resources to build AI infrastructure and develop their own AI models. Amazon, Microsoft, Alphabet, and Oracle also build AI data centers and rent the computing power to smaller developers, which has become a very lucrative business because demand for capacity continues to far exceed supply.
Here's how much the top AI companies are currently spending on AI data centers, based on their public filings:
That list doesn't include companies like OpenAI, Anthropic, and Elon Musk's xAI, which are privately held, so they don't openly disclose as much information.
Not all of that money will flow to Nvidia, but each of the above companies has publicly acknowledged its commercial relationship with the chip giant. Nvidia's GPUs and networking equipment account for a large chunk of AI data center spending, but other major costs include land acquisition, construction, power infrastructure, cooling systems, and staffing.
Competition is also a factor -- Oracle, Meta, and Microsoft have purchased GPUs from Advanced Micro Devices as well, and some hyperscalers like Alphabet are working with Broadcom to design their own chips.
Nvidia's Hopper GPU architecture was the foundation of the most powerful AI chips in 2023 and for most of 2024, including the popular H100. But in the two years since H100 sales ramped up, Nvidia has launched two new architectures, Blackwell and Blackwell Ultra. The latter platform can deliver a staggering 50 times more performance than Hopper in certain GPU configurations.
Blackwell Ultra GPUs like the GB300 are designed for the current crop of AI "reasoning" models, which spend more time thinking in the background before generating responses compared to traditional large language models (LLMs), leading to more accurate outputs. Nvidia CEO Jensen Huang says some reasoning models require between 100 times and 1,000 times more computing power than their predecessors, which should support growing demand for GPUs and other hardware for years to come.
Nvidia plans to launch another new architecture called Rubin next year, which could be 3.3 times faster than Blackwell Ultra, translating to a staggering performance improvement of 165 times over Hopper.
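That 165-times figure is simply the two quoted speedups compounded, assuming they multiply together directly:

\[
3.3 \times 50 = 165
\]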
The constant hunt for more computing power is the reason Huang predicts AI data center spending will top $1 trillion by calendar year 2028, which should support continued growth for Nvidia's business. As a result, Nvidia stock might be a great buy right now, even though it's trading near a record high.