Nvidia and Broadcom are being drawn into increasingly direct competition as artificial-intelligence companies look for the most cost-efficient way to train and run AI models. Analysts at UBS say demand for Broadcom's processors as an alternative to Nvidia's is growing quickly.
Nvidia has enjoyed several years as the dominant provider of AI chips due to its strength in graphics-processing units (GPUs). However, the growth of Google's Tensor Processing Units (TPUs) -- which Broadcom helped design -- has brought the biggest competitive challenge yet.
The major change in the market this year is the potential ramp-up of TPU sales to external customers. AI start-up Anthropic has placed two major orders for the chips, totaling $21 billion, and social-media company Meta Platforms is also in talks to use the processors, according to The Wall Street Journal.
"Many have turned to TPU as an intermediate alternative to GPU and we believe demand is accelerating significantly," wrote UBS analyst Timothy Arcuri in a research note this week.
Arcuri forecasts that Broadcom will ship around 3.7 million TPUs this year, rising to more than five million in 2027. That would help lift Broadcom's AI revenue to around $60 billion in 2026 and to $106 billion in 2027.
By comparison, Nvidia is expected to generate around $300 billion in data-center sales in its fiscal 2027, which ends in January next year, largely on the strength of GPU sales, according to FactSet.
The average selling price of Google and Broadcom's TPUs is expected to be between $10,500 and $15,000, rising toward around $20,000 over the next few years, according to the UBS analyst. Nvidia doesn't disclose individual chip prices, but analysts generally put the cost of its latest Blackwell chips at between $40,000 and $50,000 per unit.
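Those figures hang together arithmetically. Below is a back-of-envelope sketch; the arithmetic is ours, not UBS's, and it assumes AI revenue is roughly units times average selling price, with networking and other AI products making up the remainder of Broadcom's totals.

```python
# Back-of-envelope check on UBS's figures: implied TPU revenue from the
# shipment and price forecasts cited above. The units-x-ASP formula is an
# assumption for illustration, not UBS's published methodology.

tpu_units = {"this year": 3.7e6, "2027": 5.0e6}  # UBS shipment forecasts
current_asp = (10_500, 15_000)                   # current ASP range, USD
future_asp = 20_000                              # ASP "next few years", USD

low = tpu_units["this year"] * current_asp[0] / 1e9
high = tpu_units["this year"] * current_asp[1] / 1e9
print(f"Implied TPU revenue this year: ~${low:.0f}B-${high:.0f}B")  # ~$39B-$56B

# At the roughly $20,000 ASP UBS sees in coming years:
implied_2027 = tpu_units["2027"] * future_asp / 1e9
print(f"Implied TPU revenue in 2027: ~${implied_2027:.0f}B")        # ~$100B
```

The implied chip revenue lands in the same ballpark as UBS's AI-revenue forecasts of roughly $60 billion in 2026 and $106 billion in 2027, with the gap plausibly covered by Broadcom's other AI products.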
That can make TPUs more attractive for inference -- the process of generating answers or results from AI -- although Nvidia continues to hold an advantage when it comes to training AI models.
"According to benchmarks, the latest Ironwood TPU performance is comparable to [Nvidia's] GB300 for inference, but is 1/2 of that in training. Anecdotally, a model that could be trained in 35-50 days on latest NVDA GPUs would take 3 months of training on TPUs," wrote Arcuri.
Analysts at Mizuho estimate that between 20% and 40% of AI workloads are currently dedicated to inference, a share they expect to grow to between 60% and 80% over the next five years.
However, Nvidia could strike back in the inference market with technology from Groq, an AI hardware start-up that specializes in inference hardware. Nvidia recently agreed to purchase a nonexclusive license to the privately held company's technology.
Nvidia paid $20 billion for Groq's technology, a sum that included compensation packages for many of Groq's employees who joined Nvidia, The Wall Street Journal reported.