Despite some momentum in recent days, Nvidia NVDA is still battling negative market conditions. Halfway through this week, the artificial intelligence (AI) leader is back to struggling and isn’t showing signs of a rebound.
Nvidia now faces fresh complications, as environmental curbs from the Chinese government threaten its sales in a booming AI chip market. And even after the company unveiled multiple new innovations at Nvidia GTC (GPU Technology Conference) 2025 last week, shares remain in the red for the month.
Granted, the market is highly volatile right now, as high economic uncertainty, spurred by recent tariffs, continues to fuel talk of a bear market.
Wall Street optimism towards Nvidia remains generally high. However, one expert predicts that things are about to get worse.
Even in a period of high volatility, it’s typically hard to find too many experts who aren’t optimistic about Nvidia’s future. After all, the company has ridden the AI boom to unprecedented heights, helping usher in a new era for the tech sector.
In addition to its broad share of the AI chip market, Nvidia is expanding into quantum computing at a time when the technology is making notable strides. IonQ chairman Peter Chapman recently stated that he believes Nvidia’s quantum exposure is a reason not to bet against it, given the potential for a profitable intersection of quantum and AI.
Another tech leader isn’t so convinced, though. Tory Green is CEO of the GPU (graphics processing unit) power aggregator io.net, and he has some strong concerns about Nvidia’s future, which he illustrates with an unflattering analogy.
Green shared his contrarian take on Nvidia with TheStreet, noting that while Nvidia’s “flashy” performance at last week’s conference might have reassured some investors, the company still faces much bigger challenges and is likely to become the IBM of this market cycle, a label with strongly negative connotations in the tech world.
“Its $30,000 GPUs are not the future of AI - they are a luxury solution for a small slice of workloads,” he states. “Over the long term, if decentralization continues to gain momentum, Nvidia risks becoming the IBM of this cycle - dominant in the early stages, but outpaced by more flexible architecture. The upside lies with those who can unlock and route IO computability and not just sell it.”
At first glance, this analogy may be confusing, as IBM stock currently trades at a higher price than Nvidia. It has also outperformed Nvidia over the past six months, rising 13% while Nvidia’s shares have fallen 7%.
From Green’s perspective, though, becoming the next IBM is something tech companies should strive to avoid. An early giant in the computing industry, IBM rose to the top of its field but failed to keep pace with newer companies such as Apple AAPL and Microsoft MSFT, which outmaneuvered it to dominate the changing tech market.
Now, Green sees Nvidia in danger of falling into the same trap, despite its reputation as the seemingly invincible AI leader.
“It's not a critique of IBM's performance, it's a warning of inertia,” he says of his thesis. “In this analogy, this would mean that while NVIDIA today is dominant in centralized AI infrastructure, there's a risk that it becomes too tied to one model of distribution—data centers, hyperscalers, long-term contracts—while the world shifts to more distributed, permissionless infrastructure.”
There’s no doubt that Nvidia is facing a complicated industry landscape, even as the AI market boom continues. Demand for AI chips is rising, but so is competition from other companies. The rise of Chinese AI startup DeepSeek’s R1 model in January 2025 led to speculation that AI models didn’t need to be trained on Nvidia’s most recent, high-priced GPUs.
NVDA still hasn’t fully recovered from the selloff that DeepSeek’s release triggered, and economic uncertainty has only increased since then. Green sees the high cost of Nvidia’s chips working against it in the near term, which may compromise its growth prospects.
“As former Intel INTC CEO Pat Gelsinger highlights, we are also overusing high-end GPUs for tasks that do not need them,” he notes. “Most AI inference workloads don’t require H100s, for example – they can run on far cheaper and more available hardware. And this highlights a huge inefficiency in the current AI stack. It’s massive overkill on AI hardware for lightweight jobs.”
As Green sees it, the future of AI will come down to cost-effective solutions that will enable more companies to scale their operations. “If we want scalable, affordable AI, we can’t run it on these $30,000 GPUs. We have to find cheaper and more efficient alternatives,” he summarizes.