Meta Platforms is expanding its business with Nvidia in a chips-and-networking deal likely to be worth tens of billions of dollars. It's a reminder that the chip maker remains the leading player in artificial-intelligence processors.
Meta will use millions of Nvidia's Blackwell and Rubin graphics-processing units. It is also planning a large-scale deployment of Nvidia's central-processing units.
The deal is particularly significant because Meta had reportedly been one of the major companies considering Google's Tensor Processing Units, designed in collaboration with Broadcom, for its future AI needs. The competitive threat from Google's TPUs has been one of the reasons Nvidia's stock has flatlined in recent months.
"We're excited to expand our partnership with NVIDIA to build leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world," said Meta CEO Mark Zuckerberg, in a statement late Tuesday.
The agreement should ease concerns about Broadcom and other rivals eating into Nvidia's market share as AI chip workloads shift from training toward inference, the process of running trained AI models. Nvidia said the deployment of its CPUs will improve performance per watt in Meta's data centers, a key efficiency metric for inference.
The companies didn't disclose financial details of the expanded partnership, and Nvidia doesn't give individual pricing for its chips. However, HSBC estimates that a GB300 NVL72 rack containing 72 of Nvidia's GPUs costs $3 million, and Susquehanna analyst Christopher Rolland estimated this week that the coming Rubin hardware will sell at an average price roughly 40% higher. Meta said it would deploy GB300-based systems.
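If both estimates hold, and assuming the 40% uplift applies at the rack level rather than per chip, a rough back-of-the-envelope figure for a comparable Rubin-generation rack would be:

\[
\$3\ \text{million} \times 1.4 \approx \$4.2\ \text{million per rack}
\]

That is only an illustrative calculation from the analysts' numbers above; neither company has confirmed rack-level pricing.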
Meta is also tying itself more closely to Nvidia by integrating the chip maker's Spectrum-X Ethernet switches. Shares of Arista Networks, which counts Meta as one of its major clients, fell 3% in premarket trading on Wednesday.