Jensen Huang Wants To Help Nvidia's Rivals Succeed. Here's Why That Isn't As Crazy As It Seems.

Dow Jones
05-24

'I'd love for everyone to buy everything from me, but I want to make sure everyone buys something from me.' - Jensen Huang, Nvidia CEO

Computex, a trade show traditionally focused on consumer technology and new laptops, typically wouldn't be the place to announce big shifts in artificial-intelligence infrastructure and data centers. But this year, Nvidia $(NVDA)$ used the moment to introduce another way for the company to maintain its leadership.

In a somewhat surprising turn at Computex, Nvidia unveiled NVLink Fusion, a program that opens up some of its most powerful system-level technology to outside companies. This move unlocks one of Nvidia's biggest competitive advantages: the tightly integrated hardware ecosystem that has helped the company dominate AI compute.

For years, Nvidia kept its magic mostly in-house. If you wanted the best performance, you needed to buy not just the graphics processing unit, but also the Nvidia-designed central processing unit, networking and software stack. Now, with NVLink Fusion, companies such as Qualcomm $(QCOM)$, MediaTek (TW:2454), Fujitsu (JP:6702) and others can build their own chips that connect directly into Nvidia's platform.
Why make this change? As Nvidia CEO Jensen Huang put it during his keynote at Computex, "I'd love for everyone to buy everything from me, but I want to make sure everyone buys something from me." That's about as candid as it gets. The subtext here is clear: Nvidia would rather share the stage (on its terms) than risk customers building entirely new platforms without any Nvidia technology.

By acknowledging the competitive landscape with a wink and a grin, he's once again showing that Nvidia is thinking far ahead. Rather than fight every battle on every front, the company is choosing to shape the battlefield itself, ensuring that even in a more open and collaborative future, Nvidia's core technologies remain indispensable.

And there's a lot to gain for those companies on the outside looking in. Qualcomm, for example, has long been a leader in mobile and edge AI with its low-power processors and Snapdragon product line. But breaking into the data center has been tough. With NVLink Fusion, Qualcomm can integrate its Oryon-based CPUs and AI accelerators directly with Nvidia's GPUs. That could allow the company to offer an alternative to Nvidia's own Grace CPU in some configurations.

The same goes for MediaTek. Known for smartphones, smart TVs and an ongoing partnership with Nvidia on the client side, MediaTek could step into the data-center space by developing custom silicon that works with NVLink. It's a bold move, and NVLink Fusion gives it a head start it wouldn't otherwise have.

Fujitsu is also joining the effort, planning to use its new Monaka processor, based on 2-nanometer Arm $(ARM)$ technology, in sovereign AI infrastructure for Japan. Once again, the draw here is plugging into Nvidia's GPU ecosystem without having to build the rest of the system from scratch.
Why is this all happening now? The answer might lie halfway around the world. Around the same time as the NVLink Fusion reveal, a Saudi Arabian company called Humain announced partnerships with Nvidia, AMD $(AMD)$ and Qualcomm. Humain is planning massive AI data centers, including one 500-megawatt facility that will use 18,000 Nvidia Grace Blackwell chips.

At first glance, this looks like a huge win for Nvidia. And it is. But it's also significant that Humain is spreading its investments. AMD signed a $10 billion, five-year deal to supply its own CPUs and GPUs. Qualcomm is also involved, bringing its Snapdragon and Dragonwing processors into AI inference centers. The message is clear: These deployments are looking for variety and flexibility.

This broader strategy reflects a new reality where the demand for AI infrastructure is booming, but building it is more complicated than ever. Between geopolitics, export controls, and a growing set of use cases like inference and sovereign AI, organizations want to work with a mix of technologies. They might choose Nvidia for GPUs, but opt for Qualcomm CPUs or AMD accelerators depending on the task.

That's why NVLink Fusion matters. It's not just a technical upgrade for the industry. It's Nvidia recognizing that it likely can't keep the ecosystem completely locked down. If it wants to remain the top central player in the AI world, it has to give other companies a path in, while keeping a firm grip on the core technologies it values most.

Of course, there are still plenty of questions. How much control will Nvidia keep over its software and networking stack? Will Intel $(INTC)$ or AMD respond with similar strategies, or join in? How many of these partnerships will actually turn into real deployments?

Right now, Nvidia is positioning itself like Intel did in the early PC era, as the company whose platform guides the direction of the industry. But unlike that time, the AI hardware space is far more fragmented. There are many players, each with their own accelerators, custom silicon and national strategies.

By opening NVLink Fusion, Nvidia is not giving up control. It's finding a way to extend its influence. The company is making sure that even if someone else builds the CPU or accelerator, Nvidia still owns the interconnect, the GPU and the software stack.

For Qualcomm and Fujitsu, this is the clearest opening yet. It lets them leap into the AI infrastructure game without having to re-create the entire system architecture from scratch.

Nvidia's moat is still wide, but now the company has lowered a drawbridge and is inviting others to cross - on its terms.

