Inside the gilded reception hall at San Francisco's War Memorial Opera House, some of Silicon Valley's top executives and investors gathered in late November to toast the man of the hour.
Jensen Huang, Nvidia's globe-trotting CEO, had just donated $5 million to fund a production of a new opera, "The Monkey King," based on a classic Chinese heroic novel, and invited a who's-who of the AI industry to attend a performance.
"Champagne and opera," Huang joked in a speech to more than 100 guests from OpenAI and other companies. "This is what it's like to be rich!"
The confab of Silicon Valley royalty was about more than just opera -- it was about paying respect to an industry kingmaker whose company had become so flush with cash that it was bankrolling virtually everyone in the room.
Thanks to the astronomical demand for its chips prompted by the global AI boom, Nvidia generates more profit than almost any other public company on the planet. The chip giant has used its fast-growing war chest to become the industry's most powerful financier, investing tens of billions of dollars in promising startups and supporting key customers who would otherwise struggle to afford its chips.
Nvidia says that the deals grow the broader AI ecosystem, and they indeed provide crucial financial backing for companies crushed by the high costs of building the technology. But they also have another effect: keeping customers hooked on Nvidia's products and steering them away from rival chip providers.
Nvidia's investments don't come with explicit requirements to use the money for its technology. Yet some companies are so dependent on its financial support that they have essentially ruled out the possibility of using non-Nvidia chips, even if Nvidia doesn't prohibit them from doing so.
The chip giant has also outbid competitors for key acquisitions, offering prices that are difficult for others to match. And the company has found clever ways to skirt regulatory review for larger deals, allowing it to quickly snatch technology and talent away from smaller rivals.
All this financial muscle-flexing has allowed Nvidia to maintain its iron grip on the market. Nvidia blew past expectations in its latest quarter, reporting record sales of $68 billion. Its gross margin hit 75% -- an astonishing level of profit for a company whose customers lose tens of billions of dollars a year on AI.
"It's totally unprecedented for one company to be so active across the board," said Paul Kedrosky, a venture capitalist and fellow at the Massachusetts Institute of Technology's Initiative on the Digital Economy. "You really see the centrality of Nvidia to the financial moment, this capex frenzy. They sit in this strange position of being supplier, investor and creditor."
An Nvidia spokesperson said that the company invests in the AI ecosystem so that developers and customers can innovate faster, and that its products are chosen for their superior performance. Some large customers also note that it can be costly and difficult to switch over to other suppliers, giving them less incentive to diversify.
This account is based on interviews with several dozen executives, investors and advisers involved in Nvidia's AI dealmaking.
A mystery guest
"The Monkey King" reception was a personal affair for Huang. He came with his wife of more than 40 years, Lori, and the evening's festivities were organized by his daughter Madison, who joined Nvidia in 2020 after several years serving as a gourmet chef.
Huang bounced around the grand chamber in his signature black leather jacket, standing out among the tuxedoed crowd. But the real surprise, as guests filed into the hall, was spotting a rival chip CEO in attendance as well.
A quiet, stern-faced engineer who once worked at Google, Jonathan Ross was the chief executive of Groq, which sold a chip called the LPU. The startup hadn't caught Nvidia's attention for most of its nine-year existence -- until last year, when the AI industry underwent a sea change that breathed new life into the chip giant's competitors.
That summer, Silicon Valley had been electrified by the rise of coding agents that could build software programs from scratch. OpenAI's coding tool, Codex, became one of the startup's buzziest products. Yet engineers at the ChatGPT-maker were running into a problem: Nvidia's chips weren't powering the product quickly enough, frustrating users with long wait times.
OpenAI was one of Nvidia's largest customers. But by the time its president Greg Brockman arrived at the opera reception with his wife, Anna, he was already looking elsewhere. The startup was about to sign a deal to use chips designed by Cerebras, which are faster and more efficient than Nvidia GPUs in some instances.
Like Cerebras, Groq makes a chip designed to respond more quickly to AI queries. OpenAI had also discussed a partnership with Groq earlier that year, according to people familiar with the talks.
What the assembled crowd didn't know was that Ross, its CEO, was already in talks to join Huang at Nvidia.
Shortly afterward, Nvidia signed a nonexclusive license to use Groq's chip design and hired away Ross and his top staff. The structure of the transaction allowed the chip giant to take Groq's talent and technology without having to buy the company outright.
Huang initially offered to pay around $10 billion. Groq countered with roughly $30 billion -- a staggering figure that would have more than quadrupled the startup's valuation, people familiar with the matter said.
Nvidia ended up paying $20 billion to seal the deal, more than it had ever spent on any single acquisition. Groq's staff learned that their CEO and other top leaders were decamping for the chip giant on Christmas Eve.
At its annual developer conference in March, Nvidia announced a new chip powered by Groq's technology that it says will run AI models quickly. OpenAI is set to be one of the first customers for the chip, The Wall Street Journal reported.
On Thursday, Democratic Senators Elizabeth Warren and Richard Blumenthal sent Huang a letter requesting additional information on the Groq deal, writing that the transaction "appears to be structured to evade scrutiny by antitrust regulators" and expressing concern that it could stifle competition.
Nvidia said Groq continues to be a separate and independent business.
An industry snub
The night of the opera, Advanced Micro Devices CEO Lisa Su took to the stage some 50 miles south in a ballroom at the Hilton San Jose, where she delivered the keynote speech for the chip world's most prestigious annual celebration. Huang prerecorded a video greeting that was shown at the dinner -- but he skipped the chance to show up in person.
It wasn't the first time Huang had snubbed Su, who is also his first cousin once removed.
AMD is becoming a popular alternative to Nvidia, offering cheaper chips that are quickly growing in technical sophistication. But it is running into a bigger problem: it can't innovate its way around Nvidia's financial might.
Last year, Nvidia beat AMD to acquire two small startups. In the case of one, CentML, Nvidia roughly doubled AMD's offer after learning about it, people familiar with the matter said.
AMD is also battling Nvidia to win over a new crop of startups trying to discover the next big AI breakthrough. The leaders of these so-called "neolabs" have become more open to using AMD's chips, but in an industry as expensive as AI, Nvidia's pocketbook can make a big difference.
Eiso Kant, another guest of Huang's at the opera reception, is a three-time startup founder who lives outside Lisbon, Portugal, where he likes to go sailing along the rugged coastline. He is the co-CEO of Poolside, an AI startup trying to build autonomous software-writing bots for businesses.
Last year, Kant entered talks with both AMD and Nvidia for a new computing deal. Each company offered him the same trade: access to its latest-generation chips, alongside an investment to help Poolside pay for them.
Nvidia offered to invest $500 million upfront, and up to $1 billion if the company met future fundraising targets. AMD could offer only $250 million.
Kant chose Nvidia. In October, Poolside announced a deal to use more than 40,000 Nvidia GPUs in a Texas data center set to be operated by CoreWeave, a cloud company whose biggest investor is Nvidia.
Total loyalty
CoreWeave is part of a new generation of so-called "neoclouds" that compete with cloud giants like Google and Amazon, which also develop and sell their own AI chips.
Nvidia is one of CoreWeave's largest outside shareholders, and regularly gives the company early access to its latest chips. Nvidia recently agreed to buy back up to $6.3 billion worth of its chips if CoreWeave is unable to lease them to customers by 2032 -- a guarantee that makes it easier for CoreWeave to raise debt to buy the GPUs. In return, CoreWeave is loyal to Nvidia.
CoreWeave executives and advisers have privately indicated to other chip companies that they are reluctant to use non-Nvidia chips for fear of upsetting their benefactor, according to people who have heard the remarks.
A CoreWeave spokeswoman disputed this characterization and said that the company "operates independently and our partner decisions are driven by customer demand and technical performance for the most versatile GPU technology available." She added that CEO Michael Intrator is a fan of opera, especially Puccini.
CoreWeave has struggled to shake off market concerns about the stability of its business. By the time Intrator, a former commodities trader, arrived at the reception for "The Monkey King," his company's stock price had just hit rock bottom after a three-week selloff that wiped out roughly half its value.
Intrator's fortunes turned two months later, when Nvidia agreed to invest $2 billion into CoreWeave, shoring up investor confidence. CoreWeave said it would sell multiple generations of Nvidia's upcoming chips, and Nvidia said it would use its financial strength to help CoreWeave build new data centers. CoreWeave's stock jumped more than 10% on the news before sliding again.
High expectations
That November evening at the opera house, Huang was in a festive mood. He had just beaten back Wall Street's concerns about an AI bubble with record earnings, and signs of celebration were all around him. Giant red lanterns adorned the outdoor courtyard, and guests inside helped themselves at food stalls reminiscent of a Taiwanese night market.
For many young CEOs, it was their first time mingling with Silicon Valley royalty -- and Huang was happy to remind them of it. Before the crowd made its way over to the opera house, he addressed the founders in the room. "Work hard," he told them, "and you get to do this every night."
A few weeks prior, Huang had made one of Nvidia's largest-ever startup bets on one of the young founders there.
Misha Laskin is a Russian-Israeli researcher who left Google to start Reflection, which is trying to build open-source AI models whose underlying code is freely available for anyone to use and modify. Last year, the startup began searching for a big backer to support its research agenda.
Sequoia Capital, a venture firm that displays one of Huang's black Ferragamo jackets in a glass case at its Menlo Park headquarters, brokered a meeting between Laskin and Huang. At the time, Laskin knew that Huang was eager to support U.S. open-source models -- and make sure they ran on Nvidia chips.
Nvidia had written in a recent earnings report that if open-source models used chips built by competitors, it could limit demand for its products and services. The company made open source a huge focus of its developer conference this month, announcing a new coalition of startups, including Reflection, that would work together to build the technology.
Huang agreed to make Nvidia the financial anchor for a Reflection funding round, committing half of the $1 billion target. Then he encouraged other venture firms to invest as well. By the time Laskin arrived at the opera reception, he had raised $2 billion. Nvidia ended up investing roughly $800 million.
Much of that money will flow back to Nvidia, whose engineers are working with Reflection to build a giant cluster of GPUs that the startup will use to train its models. Nvidia is also introducing Laskin to potential customers, including top U.S. companies and representatives of foreign governments who are looking to build their own "sovereign AI" technology.
Laskin has also privately told investors that there could be a revenue sharing agreement between Reflection and Nvidia if their technology is integrated and sold together.
One large investor in Reflection called the company a "business arm" of Nvidia. At a recruiting event last year in London, one of Reflection's executives put it more bluntly when speaking to a potential hire: "When you are talking to us, you are talking to Nvidia."