Chip Market Share Over 80% Still Not Enough? Nvidia Spends $26 Billion to Seize AI Model Influence

TradingKey
Yesterday

TradingKey - On March 12 local time, NVIDIA (NVDA) submitted financial filings to the U.S. SEC that sent shockwaves through the global tech community. The industry giant, which commands over 80% of the global AI chip market, announced that it will invest a cumulative $26 billion over the next five years in the research and development of open-source large AI models—a sum more than eight times the cost of training OpenAI's GPT-4.

With this unprecedented investment, NVIDIA has officially embarked on a strategic transformation from a "chipmaker" to a "top-tier full-stack AI laboratory," directly challenging leading players in the model space such as OpenAI and DeepSeek.

Unlike R&D investment in a single model, NVIDIA’s $26 billion will cover the entire open-source large AI model industry chain. The funds will be deployed incrementally over the next 18 to 24 months, with the first batch of self-developed open-source AI models expected to hit the market as early as late 2026 or early 2027.

NVIDIA has already completed technical validation, and pre-training of a massive 550-billion-parameter model has been quietly finished, accumulating critical technical experience for subsequent open-source model development. According to the plan, NVIDIA will focus on cutting-edge multimodal models spanning language, code, scientific computing, and autonomous agents, building a comprehensive model matrix.

In terms of its technical roadmap, NVIDIA has avoided both OpenAI's fully closed-source approach and the Meta Llama series' fully open-source approach, opting instead for a middle path of "open weights."

By making key model parameters (weights) public, NVIDIA allows enterprises and developers to download them for free and run or fine-tune them on their own devices or private clouds, addressing core corporate needs for data privacy, customization, and cost control; the models' training data and underlying code, however, may remain non-public.

This approach targets current market pain points. Leading U.S. companies like OpenAI and Anthropic keep their core models closed-source, providing only cloud-based access; Meta has hinted at potentially tightening its open-source strategy in the future; meanwhile, Chinese companies such as DeepSeek and Alibaba have attracted large numbers of global developers through free open-source strategies.

NVIDIA's "open weights" models balance the privacy needs of enterprise users with an open ecosystem that binds developer communities to its platform.

Release of Nemotron 3 Super

Coinciding with the massive investment plan, NVIDIA recently introduced its next-generation open-source large language model, Nemotron 3 Super. Designed specifically for enterprise-grade multi-agent systems, this model features 120 billion total parameters and utilizes an efficient Mixture of Experts (MoE) architecture. It natively supports a massive 1-million-token context window, capable of processing an entire novel or thousands of pages of financial reports at once, effectively solving industry challenges such as "context explosion" and "goal drift" in multi-agent workflows.

In performance testing, Nemotron 3 Super delivered an impressive showing, scoring 37 on the Artificial Intelligence Index composite rating and surpassing the 33 scored by OpenAI's open-source model GPT-OSS. It also ranked first on PinchBench, a test that specifically evaluates OpenClaw control capabilities.

Bryan Catanzaro, Vice President of Applied Deep Learning Research at NVIDIA, revealed that the company is making significant progress in the field of open-source model development and recently completed the pre-training of a 550-billion-parameter model.

From Defining Hardware to Defining AI Standards

For a long time, NVIDIA has held absolute dominance in the AI chip sector, but influence over the AI model layer has rested with vendors such as OpenAI and Meta (META).

NVIDIA’s move to develop top-tier open-source models internally is a core strategic play to define the technical roadmap of AI models from the ground up, ensuring its own hardware architecture and software stack become the de facto standards for the entire AI industry.

Kari Briski, Vice President of Enterprise Generative AI Software at NVIDIA, stated that developing cutting-edge models is not just a test of computing power but also an extreme stress test of storage, networking, and supercomputer-class data centers, guiding the roadmap for next-generation hardware architectures. This "hardware-model" dual-wheel strategy will further solidify NVIDIA's core position in the AI ecosystem.

Financial analysts are generally optimistic about NVIDIA's strategic transformation, believing it will open a new growth curve for the company. If NVIDIA can secure a 10% share of the foundation model market while consolidating its dominance in hardware, analysts estimate it could contribute an additional $50 billion in annual revenue within three years.

For NVIDIA, this $26 billion gamble is both a testament to its confidence in its technical prowess and a precise assessment of future trends in the AI industry. As the chip giant begins to define AI model standards, the landscape of the global AI industry may be poised for a new round of restructuring.


Disclaimer: Investing involves risk. This article is not investment advice, and the content above should not be regarded as an offer, recommendation, or invitation to buy or sell any financial product; nor should any related discussions, comments, or posts by the author or other users be treated as such. This article is for general reference only and does not take into account your personal investment objectives, financial situation, or needs. TTM assumes no responsibility for and makes no guarantee of the accuracy or completeness of the information; investors should conduct their own research and seek professional advice before investing.
