Broadcom (AVGO) Single-Handedly Revives "AI Faith"! AI ASIC Demand Following "NVIDIA-Style Growth Trajectory"

Stock News
09/05

Broadcom (AVGO), one of the biggest beneficiaries of the global AI boom, reported its fiscal third quarter 2025 results for the period ended August 3 on the morning of September 5, Beijing time. As a core chip supplier to Apple and other major tech companies, Broadcom is also a key provider of high-performance Ethernet switch chips for large AI data centers and AI ASICs—custom AI chips crucial for AI training and inference.

Following the earnings release, Broadcom's stock surged nearly 5% in after-hours trading, single-handedly reviving the recently subdued "AI faith" and proving to investors that tech giants like Google and Meta, along with AI leaders like OpenAI, continue to maintain strong spending momentum in AI computing infrastructure.

Broadcom's robust performance data and future outlook have comprehensively restored U.S. tech stock investors' faith in artificial intelligence, driving chip stocks to regain upward momentum in after-hours trading—something even "AI chip leader" NVIDIA (NVDA) failed to achieve in late August.

Mixed results from Salesforce (CRM), Marvell Technology (MRVL), and NVIDIA prompted some investors wary of "AI monetization pathways" to heavily sell popular tech stocks. These investors broadly believe the AI investment frenzy has inflated a tech stock bubble; that view, combined with market expectations of U.S. economic "stagflation" under Trump's tariff policies, drove continued declines in U.S. tech stocks from early September.

However, Broadcom, the strongest force in the AI ASIC field with a $1.4 trillion market cap, used strong results and an optimistic outlook to tell investors that AI computing demand is still growing explosively, with demand for AI ASICs and high-performance Ethernet chips expanding at a pace comparable to the unprecedented growth in NVIDIA's data center AI GPU demand in 2023-2024.

Broadcom CEO Hock Tan told Wall Street analysts during the earnings call that the chip giant's AI-related revenue prospects for fiscal 2026 will "significantly" expand, an optimistic forecast that comprehensively alleviates market concerns about slowing AI computing growth.

During the post-earnings conference call, Tan said the chip company is working with more potential major customers to develop AI training/inference acceleration chips, a market currently dominated by NVIDIA's AI GPUs, though the Broadcom-led AI ASIC approach is beginning to see explosive growth in AI training/inference.

Broadcom's stock has repeatedly hit record highs this year, driven by unprecedented AI investment fervor, joining NVIDIA and TSMC in driving the entire AI computing supply chain into a hot bull market trajectory.

"In the last quarter, one of the potential customers placed a large-scale AI infrastructure-related production order with Broadcom," he said during the analyst exchange without revealing the customer's name. "We now expect fiscal 2026 AI infrastructure-related revenue prospects to significantly expand beyond the already strong growth rates we mentioned last quarter."

In the previous earnings call, Tan said 2026 AI-related revenue would follow a growth trajectory similar to this year's, expanding roughly 50% to 60%. Now, with the addition of a new major customer whose demand Tan described as "timely and massive," the pace of AI-related revenue growth will accelerate "substantially and considerably," Tan stated.

In the earnings report released on the morning of September 5, Beijing time, Broadcom management expects fourth quarter (ending October) total revenue of approximately $17.4 billion, higher than Wall Street analysts' average expectation of about $17.05 billion, implying potential year-over-year growth of about 25%.
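As a quick sanity check of the guidance figures above, the arithmetic can be sketched as follows (assumption: the "~25% year-over-year" figure applies to the $17.4 billion guide):

```python
# Back-of-envelope check of Broadcom's Q4 FY2025 guidance as quoted above.
guide = 17.4e9     # Broadcom's Q4 revenue guidance
street = 17.05e9   # Wall Street consensus estimate
yoy_growth = 0.25  # implied year-over-year growth

beat = (guide - street) / street           # size of the beat vs. consensus
implied_prior = guide / (1 + yoy_growth)   # implied year-ago Q4 revenue

print(f"Guide beats consensus by {beat:.1%}")                      # 2.1%
print(f"Implied year-ago Q4 revenue: ${implied_prior / 1e9:.1f}B") # $13.9B
```

The guide thus implies a beat of roughly 2% over consensus and a year-ago base of about $13.9 billion.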

Before the earnings release, market expectations for Broadcom's performance and future outlook data were very high, so exceeding market expectations significantly boosted investor sentiment toward Broadcom and the entire AI computing supply chain.

Since April's year-to-date low, Broadcom's stock has more than doubled, adding approximately $730 billion to the company's market value, making it the third-best performing stock in the NASDAQ 100 index, with stronger stock performance than NVIDIA.

Investors have recently been looking for signs that AI computing spending remains strong. Last week, NVIDIA provided mixed performance guidance, raising concerns about an AI industry bubble burst. Although Broadcom hasn't experienced the explosive market cap expansion that NVIDIA has (NVIDIA's market cap has grown over $3 trillion since 2023), it's still viewed as one of the AI boom's core beneficiaries.

Hyperscale customers developing and running continuously updated AI large models—such as Google and Facebook parent Meta—rely heavily on Broadcom's custom-designed AI ASIC chips and high-performance networking equipment to handle massive AI workloads.

During recent earnings calls, Google CEO Sundar Pichai and Meta CEO Mark Zuckerberg both indicated stepped-up efforts to develop in-house AI ASICs with custom chip leader Broadcom as their technology partner; Google's TPU (Tensor Processing Unit), co-developed with Broadcom, is a typical AI ASIC.

During the earnings call, Tan said he and the board have agreed he will serve as Broadcom CEO at least until 2030.

Performance data shows that in the third quarter ended August 3, Broadcom's total revenue grew 22% to nearly $16 billion, and adjusted earnings excluding certain items were $1.69 per share. Both figures beat Wall Street's average expectations of roughly $15.8 billion in revenue and $1.67 in earnings per share, estimates that had themselves been revised upward repeatedly in recent weeks.

Broadcom's third quarter AI infrastructure-related semiconductor revenue was approximately $5.2 billion, with year-over-year growth of 63%, higher than Wall Street's average expectation of $5.11 billion. Broadcom management expects this category revenue to reach approximately $6.2 billion in the fourth quarter, higher than analysts' previous expectation of about $5.82 billion.

In recent days, other chip manufacturers focused on AI computing infrastructure have performed poorly. Marvell Technology Inc., one of Broadcom's competitors in the custom semiconductor market, saw its stock plummet 19% last Friday after its data center business revenue fell short of expectations.

Besides partnering with major customers like Google to develop custom AI accelerators—AI ASIC chips—Broadcom has also been upgrading its high-performance networking equipment to better transmit information between AI server systems at the core of AI data centers.

As Tan's latest comments suggest, Broadcom continues to make positive progress in finding major customers seeking high-performance equipment for high-load AI training/inference tasks.

Through years of acquisitions, Tan has built Broadcom into a giant spanning both software and hardware domains. Besides semiconductor business closely related to AI infrastructure, this chip giant headquartered in Palo Alto, California, also provides core network connectivity components for Apple's iPhone devices.

**The "AI ASIC Super Wave" Led by Google and Meta Approaches**

As U.S. tech giants firmly invest heavily in artificial intelligence, the biggest beneficiaries include not only NVIDIA but also AI ASIC giants like Broadcom, Marvell Technology, and Taiwan's Alchip Technologies.

Microsoft, Amazon, Google, and Meta, as well as generative AI leader OpenAI, are all partnering with Broadcom or other ASIC giants to update and iterate AI ASIC chips for massive inference-side AI computing deployment.

Therefore, AI ASIC market share is expected to expand significantly faster than that of AI GPUs, trending toward rough parity rather than the current dominance of NVIDIA's AI GPUs, which occupy up to 90% of the AI chip field.

With absolute technological leadership in chip-to-chip interconnect communication and high-speed data transmission, Broadcom has become the most important participant in AI custom ASIC chips in recent years. For example, Broadcom is a core participant in Google's self-developed server AI chip, the TPU, with Broadcom and Google teams jointly developing the accelerator.

Besides chip design, Broadcom also provides Google with critical chip-to-chip interconnect communication intellectual property and handles manufacturing, testing, and packaging of new chips, helping Google expand new AI data centers.

Broadcom's high-performance Ethernet switch chips are used mainly in data center and server cluster equipment, where they process and route data flows efficiently and at high speed. These chips are essential for building AI hardware infrastructure because they ensure high-speed data transmission between GPU processors, storage systems, and networks, which is critical for generative AI applications like ChatGPT, and especially for those requiring massive data input and real-time processing, such as the DALL-E text-to-image and Sora text-to-video models.

Based on Broadcom's unique chip-to-chip interconnect communication technology and numerous patents in data transmission flows, Broadcom has become the most important participant in the AI hardware field's AI ASIC chip market. Not only does Google continuously choose to cooperate with Broadcom in designing and developing custom AI ASIC chips, but Apple, Meta, and other giants, as well as more data center service operators, are expected to partner with Broadcom long-term to create high-performance AI ASICs.

According to management, at its earnings meeting at the beginning of the year, Broadcom projected that by fiscal 2027 the potential market for the AI components it builds for global data center operators (Ethernet chips plus AI ASICs) would reach $60-90 billion.

One of Broadcom's major customers, Google, disclosed the latest details of its Ironwood TPU (the seventh-generation TPU) at a conference, showing remarkable performance gains. Compared to TPU v5p, Ironwood's peak FLOPS performance improved roughly 10-fold, with a 5.6x efficiency improvement. Compared to Google's TPU v4 launched in 2022, Ironwood's single-chip computing power improvement exceeds 16x.

Google's disclosed data clearly shows its TPU platform performance evolution roadmap. Ironwood's single-chip peak computing power reaches 4614 TFLOPs, equipped with 192 GB HBM and 7.4 TB/s bandwidth. In comparison, TPU v4 released in 2022 had single-chip computing power of 275 TFLOPs, 32 GB HBM, and 1.2 TB/s bandwidth. TPU v5p launched in 2023 had single-chip computing power of 459 TFLOPs, 95 GB HBM, and 2.8 TB/s bandwidth.

Performance comparison shows: Google Ironwood's 4.2 TFLOPS/watt efficiency is only slightly lower than NVIDIA B200/300 GPU's 4.5 TFLOPS/watt.
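The generation-over-generation multiples quoted above can be verified directly from the single-chip figures in the article:

```python
# Sanity check of the TPU generation multiples, using the single-chip
# figures cited in the article (peak TFLOPs, GB of HBM, TB/s bandwidth).
tpus = {
    "v4 (2022)":  {"tflops": 275,  "hbm_gb": 32,  "bw_tbps": 1.2},
    "v5p (2023)": {"tflops": 459,  "hbm_gb": 95,  "bw_tbps": 2.8},
    "Ironwood":   {"tflops": 4614, "hbm_gb": 192, "bw_tbps": 7.4},
}

iron = tpus["Ironwood"]["tflops"]
print(f"vs v5p: {iron / tpus['v5p (2023)']['tflops']:.1f}x peak FLOPS")  # 10.1x
print(f"vs v4:  {iron / tpus['v4 (2022)']['tflops']:.1f}x peak FLOPS")   # 16.8x
```

The ratios come out to about 10.1x over v5p and 16.8x over v4, consistent with the "roughly 10-fold" and "exceeds 16x" claims.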

JPMorgan commented: This performance data highlights that advanced AI's dedicated AI ASIC chips are rapidly closing the performance gap with market-leading AI GPUs, driving hyperscale cloud service providers to increase investment in more cost-effective custom ASIC projects.

According to the latest forecast from Wall Street financial giant JPMorgan, the chip, co-developed with Broadcom on an advanced 3nm process, will enter mass production in the second half of 2025, and Ironwood is expected to bring Broadcom approximately $10 billion in revenue over the next six to seven months.

Notably, media reports indicate Google recently contacted some cloud service providers that mainly lease out NVIDIA AI GPU server clusters, hoping their data centers could also deploy Google TPU computing clusters. According to company representatives involved in the deal, Google has reached an agreement with at least one such provider, London-based Fluidstack, which will deploy Google TPU computing clusters in New York data centers.

Gil Luria, a well-known analyst from Wall Street investment firm D.A. Davidson, said more cloud service providers and major AI application developers are interested in Google TPU, hoping to reduce dependence on NVIDIA. D.A. Davidson found after communicating with researchers and engineers from multiple cutting-edge AI labs that engineers have very positive evaluations of Google's AI training/inference custom acceleration chip.

**After DeepSeek Shocks the World, Broadcom Stock Outperforms NVIDIA! Wall Street Optimistic About Broadcom Continuing to Hit New Highs**

The continued explosive expansion of global AI computing demand, combined with increasingly large U.S. government-led AI infrastructure investment projects and tech giants' sustained heavy spending on large data centers, largely means that for investors long devoted to NVIDIA and the AI computing supply chain, the "AI faith" sweeping the globe is far from finished exerting its "super catalytic" effect on computing leaders' stock prices. These investors bet that AI computing supply chain companies led by NVIDIA, TSMC, and Broadcom will continue to trace "bull market curves," driving global stock markets to extend their bull runs.

It's precisely under the epic stock price gains and consistently strong performance of AI computing supply chain leaders like NVIDIA, Google, TSMC, and Broadcom this year that an unprecedented AI investment boom has swept U.S. and global stock markets, driving the global benchmark stock index—MSCI Global Index—to surge significantly since April, recently continuously hitting record highs.

After DeepSeek R1 emerged at the end of January and shocked Silicon Valley and Wall Street, causing the U.S. stock market's AI computing sector to experience record single-day crashes, Broadcom's stock gains since then have been stronger than AI chip leader NVIDIA's gains.

As DeepSeek sparked a full-blown "efficiency revolution" in AI training and inference, pushing future AI large model development toward the twin goals of "low cost" and "high performance," AI ASICs are entering a stronger demand expansion trajectory than during the 2023-2024 AI boom, against the backdrop of surging cloud AI inference demand. Major customers like Google, OpenAI, and Meta are expected to keep investing heavily in partnering with Broadcom to develop AI ASIC chips.

As large model architectures gradually converge toward a few mature paradigms (such as standardized Transformer decoders and Diffusion model pipelines), more cost-effective AI ASICs can more easily handle mainstream inference workloads. In addition, some cloud service providers and industry giants will deeply couple their software stacks, making ASICs compatible with common neural-network operators and providing excellent developer tools, accelerating ASIC inference adoption in standardized, high-volume scenarios.

NVIDIA AI GPUs may focus more on ultra-large-scale cutting-edge exploratory training, rapid experimentation with fast-changing multimodal or novel architectures, and general-purpose computing such as HPC, graphics rendering, and visual analysis.

Given Broadcom's Ethernet switch chips and AI ASIC chip demand continues explosive growth, Wall Street is generally bullish on Broadcom's stock prospects, optimistic about Broadcom continuing to hit new highs.

Evercore recently raised Broadcom's 12-month target price from $304 to $342, while Morgan Stanley raised Broadcom's target price from $338 to $357.

Furthermore, "silicon photonics technology" is expected to become an important catalyst for Broadcom's stock toward a new bull market curve. The "silicon photonics technology" wave led by global chip giants like NVIDIA, TSMC, and Broadcom is about to evolve into an unprecedented revolution sweeping the entire AI computing supply chain—the "silicon photonics revolution," meaning CPO and optical I/O technology routes will accelerate penetration from cutting-edge laboratories to global applications soon.

Broadcom is developing its own CPO high-performance switch chip solutions (built around its flagship Tomahawk series switch chips) while also accumulating optical interconnect technology through acquisitions (it previously acquired Brocade, a Fibre Channel networking specialist). With an extensive global cloud vendor customer base and a mature switch ASIC business, Broadcom stands to significantly improve its switch systems' competitiveness through large-scale adoption of CPO technology.

Disclaimer: Investing involves risk, and this article does not constitute investment advice. Nothing above should be treated as an offer, recommendation, or solicitation to buy or sell any financial product, nor should any related discussion, comment, or post by the author or other users. This article is for general reference only and does not take into account your personal investment objectives, financial situation, or needs. TTM assumes no responsibility or guarantee for the accuracy or completeness of the information; investors should conduct their own research and seek professional advice before investing.
