Microsoft's "Maia 200" Fuels AI ASIC Narrative, Propelling High-Speed Copper Cables, DCI, and Optical Interconnects into the AI Chip Spotlight

European financial giant BNP Paribas said in a research report released on Tuesday that Microsoft's (MSFT.US) upgraded second-generation in-house artificial intelligence chip, "Maia 200," has ignited a new wave of investment across the AI computing power supply chain. The leaders in custom AI chips (AI ASICs) for large AI data centers, namely US chip design giant Marvell Technology (MRVL.US) and its largest competitor Broadcom (AVGO.US), are poised to be the primary beneficiaries of this surge in computing infrastructure spending. The BNP Paribas analysts emphasized that the trend of cloud computing giants, led by Microsoft, developing their own AI chips is unstoppable, and they projected that the split of AI computing infrastructure between ASICs and NVIDIA's AI GPU clusters could shift markedly from the current 1:9 or 2:8 toward near parity.

The BNP Paribas analyst team, led by senior analyst Karl Ackerman, indicated that in the new wave of AI computing investment triggered by the in-house AI chip trend, leaders in data center interconnect (DCI), high-speed copper cables, and data center optical interconnects stand to benefit substantially alongside the two leading AI ASIC companies. A closer look at the AI computing supply chain shows that AI ASICs, DCI, high-speed copper cables, and data center optical interconnects benefit simultaneously from two major trends: the AI ASIC super-cycle driven by cloud giants such as Microsoft, Google, and Amazon, and the AI GPU computing infrastructure ecosystem led by NVIDIA and AMD.

Whether it's Google's massive TPU AI computing clusters (TPUs also follow the AI ASIC technical path) or the vast quantities of NVIDIA AI GPU computing clusters still being aggressively purchased by AI giants like Google, OpenAI, and Microsoft, all rely fundamentally on the providers of data center interconnect (DCI), high-speed copper cables, and data center optical interconnects. Furthermore, beyond the GPU and ASIC paths, the high-performance network infrastructures—be it NVIDIA's "InfiniBand + Spectrum-X/Ethernet" or Google's "OCS (Optical Circuit Switching)"—are equally dependent on suppliers of high-speed copper cables, DCI, and optical interconnect equipment.

In essence, whether it is the "Google-led AI ASIC computing chain" (TPU/OCS) or the "OpenAI computing chain" (NVIDIA IB/Ethernet), both ultimately converge on the same set of "hard constraints": data center interconnect (DCI), data center optical interconnects, high-speed copper cables, and the data storage foundation (enterprise storage, capacity media, and memory testing). Following the high-profile global launch of the Gemini 3 AI application ecosystem in late November, Google's AI computing demand surged almost instantly; the immense AI token processing volume forced Google to significantly reduce free access to Gemini 3 Pro and Nano Banana Pro and to impose temporary restrictions even on Pro subscribers. Coupled with recent South Korean export data showing continued strong demand for SK Hynix's and Samsung Electronics' HBM memory systems and enterprise SSDs, this further validates Wall Street's assertion that the AI boom is still in an early construction phase in which computing infrastructure supply cannot keep up with demand.

According to Wall Street giants such as Morgan Stanley, Citigroup, Loop Capital, and Wedbush, the global artificial intelligence infrastructure investment wave centered on AI computing hardware is far from over; it is only at its beginning. Driven by an unprecedented storm of AI inference-side computing demand, this global AI infrastructure build-out, expected to last until 2030, could reach a staggering $3 to $4 trillion. As the AI inference frenzy sweeps the globe, and with the steep construction costs of super-large-scale AI data centers akin to "Stargate," tech giants are increasingly demanding more economical AI computing systems. Under power constraints, they are striving to minimize cost per token and maximize output per watt, heralding a golden age for the AI ASIC technology path.

Undoubtedly, major constraints related to economics and power are forcing Microsoft, Amazon, Google, and Meta (Facebook's parent company) to push the AI ASIC technology path for their in-house cloud computing systems. The core objective is to make AI computing clusters more cost-effective and energy-efficient. Microsoft explicitly positions Maia 200 as "significantly improving the economics of AI token generation" and repeatedly emphasizes performance per dollar; AWS also frames the goal of Trainium3 as achieving the "best token economics," using energy efficiency/cost-effectiveness as a selling point; Google Cloud Platform defines Ironwood as a dedicated TPU for the "AI inference era" (TPUs also belong to the AI ASIC path), highlighting energy efficiency and large-scale inference services.
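
For intuition only, the "cost per token" and "output per watt" framing can be reduced to simple arithmetic over hardware capex, power draw, and sustained token throughput. The sketch below uses entirely hypothetical placeholder numbers rather than any vendor's actual figures; it simply illustrates how a cheaper, lower-power accelerator can improve per-token economics even at somewhat lower throughput.

```python
# Illustrative sketch of "cost per token" comparisons.
# All numbers below are hypothetical placeholders, NOT vendor specifications.

def cost_per_million_tokens(capex_usd, lifetime_years, power_kw,
                            electricity_usd_per_kwh, tokens_per_second):
    """Amortized hardware cost plus energy cost, per one million output tokens."""
    seconds_per_year = 365 * 24 * 3600
    lifetime_tokens = tokens_per_second * seconds_per_year * lifetime_years
    energy_cost = power_kw * 24 * 365 * lifetime_years * electricity_usd_per_kwh
    return (capex_usd + energy_cost) / lifetime_tokens * 1e6

# Hypothetical GPU-based vs ASIC-based inference node (placeholder numbers).
gpu_node = cost_per_million_tokens(capex_usd=300_000, lifetime_years=4,
                                   power_kw=10, electricity_usd_per_kwh=0.08,
                                   tokens_per_second=40_000)
asic_node = cost_per_million_tokens(capex_usd=180_000, lifetime_years=4,
                                    power_kw=7, electricity_usd_per_kwh=0.08,
                                    tokens_per_second=35_000)

print(f"GPU node:  ${gpu_node:.4f} per million tokens")
print(f"ASIC node: ${asic_node:.4f} per million tokens")
```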

As DeepSeek ignites an "efficiency revolution" in AI training and inference, pushing future large-model development toward the dual goals of low cost and high performance, AI ASICs, which hold a cost-effectiveness advantage over NVIDIA's AI GPU path, are entering an even stronger demand expansion than during the 2023-2025 AI boom as cloud AI inference computing demand surges. Major customers such as Google, OpenAI, and Meta are expected to keep investing heavily in partnership with Broadcom to develop AI ASIC chips.

The BNP Paribas analyst team noted that Marvell Technology and Broadcom are unlikely to be the chip design partners for Microsoft's in-house Maia 200; BNP believes the exclusive technical partner co-developing Maia 200 with Microsoft may be Taiwan's Global Unichip, playing a role similar to Broadcom's in Google's TPU AI computing clusters. Even so, the analysts argue that as Microsoft ignites a new wave of AI computing investment, Marvell and Broadcom, by virtue of their status as "absolute ASIC leaders," stand to benefit broadly from the investment theme driven by the in-house AI chip trend.

A recent Morgan Stanley research report indicated that actual production of Google's TPU AI chips is expected to reach 5 million units in 2027 and 7 million units in 2028, upward revisions of 67% and 120%, respectively, from the bank's previous forecasts. The surge in expected production may signal that Google is preparing to begin selling its TPU AI chips externally. More fundamentally, Morgan Stanley calculates that for every 500,000 TPUs sold externally, Google could generate an additional $13 billion in revenue and up to $0.40 of incremental earnings per share.
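
As a quick sanity check on those figures, dividing the cited revenue by the cited unit count gives an implied system-level price per TPU. The scenario below also assumes, purely for illustration, that revenue and EPS scale linearly with externally sold units; that linearity is our simplification, not part of the report.

```python
# Back-of-the-envelope check of the cited Morgan Stanley figures.
# The linear-scaling scenario is our simplification, not from the report.

units_per_tranche = 500_000     # TPUs sold externally per tranche (cited)
revenue_per_tranche = 13e9      # incremental revenue per tranche, USD (cited)
eps_per_tranche = 0.40          # incremental EPS per tranche, USD (cited, "up to")

implied_revenue_per_tpu = revenue_per_tranche / units_per_tranche
print(f"Implied revenue per TPU system: ${implied_revenue_per_tpu:,.0f}")  # $26,000

# Hypothetical scenario: 2 million of the forecast 2027 units sold externally.
external_units = 2_000_000
tranches = external_units / units_per_tranche
print(f"Incremental revenue: ${tranches * revenue_per_tranche / 1e9:.0f}B, "
      f"incremental EPS: up to ${tranches * eps_per_tranche:.2f}")
```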

Market research firm Counterpoint Research predicted in its latest report that the core AI chips in non-GPU AI servers, namely the AI ASIC camp, will see a steep growth curve in the near term. Shipments are forecast to triple by 2027 versus 2024 levels and to surpass GPU shipments in 2028, exceeding 15 million units. The report attributes this growth to strong demand for Google's TPU infrastructure, the ongoing expansion of AWS Trainium clusters, and capacity ramps at Meta (MTIA) and Microsoft (Maia) as they expand their portfolios of in-house AI chips for their cloud computing systems.
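
A brief arithmetic aside: the "triple by 2027 versus 2024" forecast implies a compound growth rate of roughly 44% per year. Extending that rate one more year (our assumption, not Counterpoint's) gives a rough sense of the shipment base behind the "over 15 million units in 2028" figure.

```python
# Our arithmetic on the cited Counterpoint figures, not numbers from the report itself.
implied_cagr = 3 ** (1 / 3) - 1                     # "triple by 2027 vs 2024"
print(f"Implied 2024-2027 shipment CAGR: {implied_cagr:.1%}")   # ~44.2%

# If that growth rate were simply extended one more year (our assumption),
# the ">15 million units in 2028" figure would back-imply roughly:
units_2028 = 15e6
units_2027 = units_2028 / (1 + implied_cagr)
units_2024 = units_2027 / 3
print(f"Back-implied 2027 shipments: ~{units_2027/1e6:.1f}M, 2024 base: ~{units_2024/1e6:.1f}M")
```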

Other potential beneficiaries listed by BNP Paribas include data center high-speed copper cable (DAC/AEC) leader Amphenol (APH.US); data center optical interconnect players such as Lumentum; data center interconnect (DCI) leader Arista Networks (ANET.US); and, where active electrical cables (AEC) are used, potentially Credo Technology (CRDO.US) and Astera Labs (ALAB.US). Whether it is the high-performance networking dominated by NVIDIA's InfiniBand / Spectrum-X Ethernet or the OCS (Optical Circuit Switching) introduced by Alphabet's Google in its Jupiter architecture, everything ultimately depends on the "physical interconnect stack": high-speed copper interconnects (short reach) between servers/accelerator clusters and switches, optical interconnects (medium to long reach) at the rack/room/building scale, and DCI optical transmission/interconnect bridging network and storage domains across buildings, campuses, and sites.

NVIDIA's own definition of its Ethernet platform includes switches, NICs/SmartNICs, DPUs, and cables/transceivers (LinkX) as end-to-end components, explicitly listing coverage from DAC copper cables to AOC/multimode/single-mode optics. From the perspective of global AI data center super-projects like "Stargate," a more precise description is that copper and optical interconnects each have their distinct roles. Within hyperscale AI clusters, short-reach, high-speed interconnects (e.g., switch to server NIC) within a rack or to adjacent racks typically prioritize DAC/AEC (copper cables) to minimize latency, power consumption, and cost. When bandwidth scales to 400/800G and distances increase (across racks/rows/rooms), link budget and power consumption push the solution towards AOC/pluggable high-speed optical module-level optical interconnect equipment, or even more aggressive paths like silicon photonics/CPO.
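
The copper-versus-optics division of labor described above can be summarized as a simple reach-and-bandwidth decision rule. The sketch below is illustrative only: the distance thresholds are rough rules of thumb rather than standards-body limits, and real deployments weigh power, cost, and cabling practicality case by case.

```python
# Rough rule-of-thumb selector for AI-cluster link media, mirroring the
# copper-vs-optics split described above. Thresholds are illustrative
# approximations, not formal standards limits.

def pick_link_medium(reach_m: float, lane_rate_gbps: float) -> str:
    """Return a plausible interconnect choice for a given reach and lane rate."""
    if reach_m <= 3 and lane_rate_gbps <= 100:
        return "DAC (passive copper): lowest latency, power, and cost in-rack"
    if reach_m <= 7:
        return "AEC (active copper): retimed copper for adjacent racks"
    if reach_m <= 100:
        return "AOC / multimode pluggable optics: row- and room-scale links"
    if reach_m <= 2_000:
        return "Single-mode pluggable optics (or CPO): building-scale fabrics"
    return "DCI optical transport: campus- and site-to-site interconnect"

for reach, rate in [(2, 100), (5, 200), (50, 100), (500, 100), (40_000, 400)]:
    print(f"{reach:>6} m @ {rate}G/lane -> {pick_link_medium(reach, rate)}")
```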

On the other hand, the Google-led OCS approach introduces optical circuit switching at the data center network architecture level in Jupiter to address scaling and evolution challenges around cabling and capacity; it is inherently more "optical-centric," but still relies on port-side electro-optical interfaces and high-speed cabling systems. Lumentum is considered one of the biggest winners from Google's AI explosion chiefly because it supplies the indispensable optical interconnects, namely OCS (optical circuit switches) plus high-speed optical components, deeply embedded in the high-performance network foundation of Google's TPU AI computing clusters; as TPU counts rise by an order of magnitude, Lumentum's shipments scale accordingly.

On the OpenAI chain side, NVIDIA's dominant "InfiniBand and Ethernet" is deeply tied to switches and data center network systems from companies like Arista Networks (ANET.US), Cisco (CSCO), and HPE (HPE), but this does not "exclude optical interconnects." On the contrary, NVIDIA's "IB+Ethernet" successfully packages "the determinism of copper + the long-distance/high-bandwidth density of optics" into a standardized interconnect system.

Analyst Ackerman and colleagues wrote in their research report to clients, "We believe the Maia 200 rack-level AI computing infrastructure system will comprise 12 large compute trays, 4 Tier-1 Ethernet scale-up switches, 6 CPU head nodes, 2 Top-of-Rack (ToR) Ethernet switches for the front-end network, and 1 out-of-band management switch." They added, "Most interesting to us is that Maia 200 may not use a backend scale-out network architecture," and continued, "Maia 200 will be deployed in small scale-up clusters, in units of 6,144 ASICs, which will connect to the 'external world' via CPU head nodes and front-end Ethernet switches. Given that Maia 200 is essentially tailor-made for inference workloads, we believe this topology is rational, as massive AI inference workloads may not require superclusters of hundreds of thousands of ASICs operating in synchronized collaboration."
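
Taking that rack description at face value, the cluster arithmetic is straightforward. In the sketch below the per-rack quantities come from the report excerpt, while the ASICs-per-tray figure is a placeholder assumption of ours, since the excerpt does not state it.

```python
# Sketch of the rack/cluster arithmetic implied by the BNP Paribas description.
# Per-rack quantities are as cited in the report excerpt; ASICs per compute tray
# is a PLACEHOLDER assumption, since the excerpt does not state it.

rack = {
    "compute_trays": 12,
    "tier1_scaleup_ethernet_switches": 4,
    "cpu_head_nodes": 6,
    "front_end_tor_switches": 2,
    "oob_management_switches": 1,
}

asics_per_tray = 8            # hypothetical placeholder, not from the report
cluster_size_asics = 6_144    # cited scale-up unit size

asics_per_rack = rack["compute_trays"] * asics_per_tray
racks_per_cluster = cluster_size_asics / asics_per_rack

print(f"ASICs per rack (assumed): {asics_per_rack}")
print(f"Racks per 6,144-ASIC scale-up cluster (assumed): {racks_per_cluster:.0f}")
```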

The analyst team led by Ackerman further added that large-scale deployment of Microsoft's in-house Maia 200 AI computing infrastructure is expected to begin accelerating in the second half of 2026 and to penetrate further into Microsoft's various large data centers thereafter.

