Why Is Nvidia the King of AI Chips, and Can It Last?

Bloomberg
05-20

After a blistering share rally that for a time made Nvidia Corp. the world’s most valuable company, investors have grown wary of pouring more money into the chipmaker, aware that the adoption of artificial intelligence computing won’t follow a straight path and won’t rely solely on Nvidia technology.

For now, Nvidia remains the preeminent picks-and-shovels seller in an AI gold rush. Revenue is still soaring, and the order book for the company’s Hopper chip lineup and its successor — the Blackwell series — is bulging.

Its continued success hinges on whether Microsoft Corp., Google and other tech giants will find enough commercial uses for AI to turn a profit on their massive investments in Nvidia chips. Even if they do, it’s not clear how many of the company’s most powerful and profitable chips will be required: In January, Chinese startup DeepSeek released an AI model that it said performs as well as those made by large US companies but required far fewer resources to develop.

After DeepSeek published a paper detailing the capabilities of the new model and how it was created, Nvidia’s market capitalization slumped by $589 billion, the largest single-day loss of market value on record. By mid-May, the stock had recovered most of the lost ground.

Here’s a look at what’s been driving Nvidia’s spectacular growth and the challenges ahead.

What are Nvidia’s most popular AI chips?

The current moneymaker is the Hopper H100, the name of which is a nod to computer science pioneer Grace Hopper. It’s a beefier version of a graphics processing unit that originated in personal computers used by video gamers. Hopper is being replaced at the top of the lineup with the Blackwell range, named for mathematician David Blackwell.

Both Hopper and Blackwell include technology that turns clusters of computers that use Nvidia chips into single units that can process vast volumes of data and make computations at high speeds. That makes them a perfect fit for the power-intensive task of training the neural networks that underpin the latest generation of AI products.
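
To make that "single unit" idea concrete in software terms, here is a minimal sketch that uses NCCL, Nvidia's collective-communications library, to sum a gradient buffer across every GPU in one machine so that each chip ends up holding the same result. This is an illustrative example, not Nvidia's internal code: the buffer size and contents are placeholders, and it assumes a machine with one or more Nvidia GPUs plus the CUDA and NCCL toolkits installed.

```cuda
// Sketch: sum a gradient buffer across all GPUs in one machine with NCCL.
// After the all-reduce, every GPU holds the identical summed result -- the
// software view of many chips behaving as a single unit. Placeholder data;
// error handling omitted for brevity.
#include <cuda_runtime.h>
#include <nccl.h>

#define MAX_GPUS 8

int main(void) {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev > MAX_GPUS) ndev = MAX_GPUS;

    int devs[MAX_GPUS];
    for (int i = 0; i < ndev; i++) devs[i] = i;

    ncclComm_t comms[MAX_GPUS];
    ncclCommInitAll(comms, ndev, devs);        // one communicator per GPU

    const size_t n = 1 << 20;                  // placeholder gradient size
    float *grads[MAX_GPUS];
    cudaStream_t streams[MAX_GPUS];
    for (int i = 0; i < ndev; i++) {
        cudaSetDevice(i);
        cudaMalloc(&grads[i], n * sizeof(float));
        cudaMemset(grads[i], 0, n * sizeof(float));  // placeholder contents
        cudaStreamCreate(&streams[i]);
    }

    // One collective call: every GPU contributes its buffer and receives
    // the element-wise sum of all buffers.
    ncclGroupStart();
    for (int i = 0; i < ndev; i++)
        ncclAllReduce(grads[i], grads[i], n, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < ndev; i++) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
        cudaFree(grads[i]);
        cudaStreamDestroy(streams[i]);
        ncclCommDestroy(comms[i]);
    }
    return 0;
}
```

In production systems, the bandwidth behind a call like this comes from NVLink and the data center network; the library hides that topology from the programmer, which is what lets a rack of GPUs be treated as one large accelerator.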

Founded in 1993, Nvidia pioneered this market with investments dating back more than a decade, when it bet that the ability to do work in parallel would one day make its chips valuable in applications outside of gaming.

The Santa Clara, California-based company will sell the Blackwell products in a variety of options, including as part of the GB200 superchip, which combines two Blackwell GPUs with one Grace CPU, a general-purpose central processing unit. (The Grace CPU is also named for Grace Hopper.)

Why are Nvidia’s AI chips special?

So-called generative AI platforms learn tasks such as translating text, summarizing reports and synthesizing images by ingesting vast quantities of preexisting material — the more they see, the better they perform. Such platforms develop through trial and error, making billions of attempts to achieve proficiency and sucking up huge amounts of computing power along the way.
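
As a rough illustration of what one of those billions of attempts looks like at the lowest level, the sketch below implements a single stochastic-gradient-descent update in CUDA, with one GPU thread nudging each model weight against its error gradient. It is a simplification under stated assumptions: real training spends most of its time in the matrix multiplications that compute the gradients, and the parameter count and learning rate here are placeholders.

```cuda
// Sketch: one gradient-descent update step, one GPU thread per weight.
// Training repeats updates like this (plus far costlier matrix math)
// billions of times, which is what consumes so much GPU capacity.
#include <cuda_runtime.h>

__global__ void sgd_step(float *weights, const float *grads,
                         float lr, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        weights[i] -= lr * grads[i];   // move each weight against its gradient
}

int main(void) {
    const int n = 1 << 24;             // ~16.8M parameters: tiny by LLM standards
    float *w, *g;
    cudaMalloc(&w, n * sizeof(float));
    cudaMalloc(&g, n * sizeof(float));
    cudaMemset(w, 0, n * sizeof(float));   // placeholder weights and gradients
    cudaMemset(g, 0, n * sizeof(float));

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    sgd_step<<<blocks, threads>>>(w, g, 0.01f, n);  // one of billions of steps
    cudaDeviceSynchronize();

    cudaFree(w);
    cudaFree(g);
    return 0;
}
```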

Blackwell delivers 2.5 times Hopper’s performance in training AI, according to Nvidia. The new design has so many transistors — the tiny switches that give semiconductors their ability to process information — that it can’t be produced conventionally as a single unit. It’s actually two chips married to each other through a connection that ensures they act seamlessly as one, the company said.

For customers racing to train their AI platforms to perform new tasks, the performance edge offered by the Hopper and Blackwell chips is critical. The components are seen as so key to developing AI that the US government has restricted their sale to China.

How did Nvidia become a leader in AI?

Nvidia was already the king of graphics chips, the components that generate the images you see on a computer screen. The most powerful of those are built with thousands of processing cores that perform multiple simultaneous threads of computation. This allows them to produce complex 3D rendering effects, such as shadows and reflections, that are a feature of today’s video games.

Nvidia’s engineers realized in the early 2000s that they could retool these graphics accelerators for other applications. AI researchers, meanwhile, discovered that their work could finally be made practical by using this type of chip.

What are Nvidia’s competitors doing?

Nvidia controls about 90% of the market for data center GPUs, according to the market research firm IDC. Dominant cloud computing providers and major Nvidia customers such as Amazon.com Inc.’s AWS, Alphabet Inc.’s Google Cloud and Microsoft’s Azure are trying to develop their own chips, as are Nvidia rivals Advanced Micro Devices Inc. and Intel Corp.

At the Computex trade show in Taiwan in May, Nvidia signaled a willingness to accommodate the moves by some customers to produce their own key components. Chief Executive Officer Jensen Huang announced that Nvidia’s NVLink server backbone, a set of components that act as a high-speed link between the main chips in a computer, will be opened up to products from other companies. Previously that technology had been reserved solely for Nvidia’s own processors and accelerator chips.

However, the alternative chip development efforts have done little to erode Nvidia’s dominance for the time being.

How does Nvidia stay ahead of its competitors?

Nvidia has updated its offerings, including software to support the hardware, at a pace that no other firm has yet been able to match. The company has also devised cluster systems that help its customers buy H100s in bulk and deploy them quickly. Chips like Intel’s Xeon processors are capable of complex data crunching, but they have fewer cores and are slower at working through the mountains of information typically used to train AI software. Intel, the once-dominant provider of data center components, has struggled so far to offer accelerators that customers are prepared to choose over Nvidia equipment.

How is AI chip demand holding up?

Huang and his team have said repeatedly that the company has more orders than it can fill, even for older models.

Microsoft, Amazon, Meta Platforms Inc. and Google have announced plans to spend hundreds of billions of dollars collectively on AI and the data centers to support it. More recently, there’s been speculation that the AI data center boom is already losing steam. Microsoft has pulled back on data center projects around the world, raising broader concerns over whether it’s securing more AI computing capacity than it needs in the long term.

Why did Chinese startup DeepSeek cause so much concern?

The release of DeepSeek’s new R1 open-source AI model left competitors scrambling to find out how it achieved results on a par with US rivals while using a fraction of their resources.

DeepSeek fine-tunes its AI model with real-world inputs, an approach known as inference that’s less time-consuming and data-intensive than the artificial training methods used by other companies. Nvidia, arguably the company with the most to lose, credited DeepSeek’s model as an “excellent AI advancement,” one that was achieved without flouting US technology export controls.

Those restrictions forbid the export of Nvidia’s most advanced GPUs to China, so its statement appeared to allay suspicions among some industry analysts that the Chinese startup couldn’t have made the breakthrough it claimed without access to banned chips.

Still, Nvidia said its chips will have a major role to play even if there’s a shift in the way AI models are built. “Inference requires significant numbers of Nvidia GPUs and high-performance networking,” the company said.

How do AMD and Intel compare with Nvidia in AI chips?

AMD, the second-largest maker of computer graphics chips, unveiled a version of its Instinct line in 2023 aimed at the market that Nvidia’s products dominate. A new, ramped-up version, the MI350, will be shipped to customers around the middle of the year. Chief Executive Officer Lisa Su said it’ll perform 35 times better than its predecessor.

While AMD has widely been credited as having the best chance of making a dent in Nvidia’s lead, there’s been speculation that it’s struggling to build momentum. The company has forecast that revenue in the first six months of 2025 will be about the same as in the preceding six months. AMD generates more than $5 billion in annual revenue from the accelerator chips that help develop AI models. Nvidia’s sales in this category exceed $100 billion a year.

Intel’s management has told analysts and investors that the company isn’t “participating in the cloud-based AI data center market in a meaningful way.” In January, the company decided not to bring a new AI chip codenamed Falcon Shores to market after it failed to get favorable feedback from prospective customers, and decided to use it for internal testing only.

None of Nvidia’s rivals has yet accounted for the leap forward that the company says Blackwell will deliver. And Nvidia’s advantage isn’t just in the performance of its hardware. The company invented CUDA, a programming platform and language for its graphics chips that lets them be programmed for the type of work that underpins AI applications. Widespread use of that software toolkit has helped keep the industry tied to Nvidia’s hardware.
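
To see the lock-in in practice, consider that much AI software never writes raw GPU kernels at all; it calls Nvidia-supplied CUDA libraries such as cuBLAS for the matrix multiplications at the heart of neural networks. The sketch below is illustrative only, with placeholder matrix sizes and contents, not anyone's production code.

```cuda
// Sketch: a single-precision matrix multiply C = A * B on the GPU via
// cuBLAS, the CUDA linear-algebra library. Matrix math like this is the
// core workload of AI training and inference. Sizes are placeholders;
// error handling omitted for brevity.
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void) {
    const int n = 1024;                    // square matrices for simplicity
    const size_t bytes = (size_t)n * n * sizeof(float);

    float *A, *B, *C;
    cudaMalloc(&A, bytes);
    cudaMalloc(&B, bytes);
    cudaMalloc(&C, bytes);
    cudaMemset(A, 0, bytes);               // placeholder contents
    cudaMemset(B, 0, bytes);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C (cuBLAS uses column-major storage)
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

Code written against CUDA and its libraries runs across Nvidia’s GPU generations, from gaming cards to Blackwell, but not natively on rivals’ accelerators, so switching vendors means porting software as well as swapping hardware.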
