Broadcom CEO: AI Revenue to Surpass All Other Revenue Combined Within Two Years; Cloud Giants to Dominate Custom ASICs While Enterprises Keep Relying on GPUs

Deep News
09/10

Chip giant Broadcom is positioning artificial intelligence at the core of its strategic blueprint.

According to conference notes published by Goldman Sachs on September 9, President and CEO Hock E. Tan stated at the recent Goldman Sachs Communacopia + Technology Conference that meeting the AI computing needs of specific customers is the company's top priority, and that he expects the company's AI-related revenue to exceed the combined total of its software and other non-AI business revenue within the next two years.

**AI Revenue Surge: The Company's Core Pillar Within Two Years, Targeting $120 Billion by 2030**

The most striking element of the report is Tan's aggressive AI revenue forecast. He stated explicitly that meeting the AI computing needs of a few core customers is the company's primary task, and predicted that "Broadcom's AI revenue will surpass the combined software and non-AI revenue within the next two years." This marks AI's transformation from a high-growth business unit into the absolute core pillar of the entire company.

Backing this ambition is a newly filed "Tan PSU Award" executive incentive plan. Its core provision: if Broadcom reaches specified AI revenue thresholds before fiscal year 2030 (FY2030), Tan receives corresponding payouts, tying his compensation directly to AI revenue.

According to the plan details, the highest performance target set for the CEO is $120 billion in annual AI revenue by fiscal year 2030. Per Goldman Sachs' research note, that figure is roughly six times the bank's forecast of $20 billion in AI revenue for Broadcom's fiscal year 2025, highlighting management's extreme confidence in the AI business.

**Market Differentiation: Cloud Giants Favor ASICs, Enterprise Market Remains GPU Territory**

On the closely watched AI accelerator market, Tan offered a clear assessment: the market will increasingly differentiate by customer type.

He expects customized ASIC chips to be adopted primarily by large cloud service providers (hyperscalers). These giants have both the capability and the willingness to deeply customize chips for their specific workloads, such as large language models (LLMs), in pursuit of maximum performance and cost efficiency. The report indicates that Broadcom's XPU (custom processor) opportunities come mainly from its seven existing customers and from prospective clients.

Meanwhile, Tan believes the broader enterprise customer base will likely continue to rely on commercial GPUs. For enterprises that lack the capability or the need to develop custom chips, general-purpose GPUs from vendors such as NVIDIA will remain the preferred choice for deploying AI applications.

**Network Battlefield: Ethernet to Dominate AI Clusters**

Beyond computing chips, Tan also emphasized the growth potential of the AI networking business. He believes Ethernet, a mature technology thoroughly validated over the past two to three decades, will play an increasingly important role in AI networks.

The growth momentum comes from two sources: first, AI networks generally adopt Ethernet; second, as AI compute clusters continue to scale, demand for Ethernet in "scale-up networks" is surging. Tan expects Ethernet to begin large-scale deployment in these scale-up networks within the next 18 to 24 months.

