Nvidia CEO Huang Says Next Generation of Chips Is in Full Production

Reuters
01/06
  • Nvidia's Vera Rubin platform to debut with 72 graphics processing units, 36 central processors

  • Nvidia faces competition from AMD and Google in AI chip market

  • Nvidia's new chips use proprietary data for performance boost

NVIDIA CEO Jensen Huang said on Monday that the company’s next generation of chips is in “full production,” adding that they can deliver five times the artificial-intelligence computing of the company’s previous chips when serving up chatbots and other AI apps.

In a speech at the Consumer Electronics Show in Las Vegas, the leader of the world's most valuable company revealed new details about its chips, which will arrive later this year and which Nvidia executives told Reuters are already in the company's labs being tested by AI firms. The announcements come as Nvidia faces increasing competition from rivals as well as its own customers.

The Vera Rubin platform, made up of six separate Nvidia chips, is expected to debut later this year, with the flagship device containing 72 of the company’s graphics processing units and 36 of its new central processors. Huang showed how they can be strung together into "pods" with more than 1,000 Rubin chips.

To get the new performance results, however, Huang said the Rubin chips use a proprietary kind of data that the company hopes the wider industry will adopt.

"This is how we were able to deliver such a gigantic step up in performance, even though we only have 1.6 times the number of transistors," Huang said.

While Nvidia still dominates the market for training AI models, it faces far more competition - from traditional rivals such as Advanced Micro Devices as well as customers like Alphabet's Google - in delivering the fruits of those models to hundreds of millions of users of chatbots and other technologies.

Much of Huang’s speech focused on how well the new chips would work for that task, including adding a new layer of storage technology called “context memory storage” aimed at helping chatbots provide snappier responses to long questions and conversations when being used by millions of users at once.

Nvidia also touted a new generation of networking switches with a new kind of connection called co-packaged optics. The technology, which is key to linking together thousands of machines into one, competes with offerings from Broadcom and Cisco Systems.

In other announcements, Huang highlighted new software that can help self-driving cars make decisions about which path to take - and leave a paper trail for engineers to use afterward. Nvidia showed research about software, called Alpamayo, late last year, with Huang saying on Monday it would be released more widely, along with the data used to train it so that automakers can make evaluations.

"Not only do we open-source the models, we also open-source the data that we use to train those models, because only in that way can you truly trust how the models came to be," Huang said from a stage in Las Vegas.

Last month, Nvidia scooped up talent and chip technology from startup Groq, including executives who were instrumental in helping Alphabet's Google design its own AI chips. While Google is a major Nvidia customer, its own chips have emerged as one of Nvidia's biggest threats as Google works closely with Meta Platforms and others to chip away at Nvidia's AI stronghold.

At the same time, Nvidia is eager to show that its latest products can outperform older chips like the H200, which U.S. President Donald Trump has allowed to flow to China. Reuters has reported that the chip, which was the predecessor to Nvidia's current flagship "Blackwell" chip, is in high demand in China, which has alarmed China hawks across the U.S. political spectrum.
