AI demand remains elevated as Citibank raises CoWoS capacity expectations, with NVIDIA product iterations and cloud vendor ASIC deployments serving as primary catalysts.
According to market reports, Citibank's latest research note raises its 2026 capacity forecast for Taiwan Semiconductor Manufacturing's CoWoS advanced packaging from 800,000 wafers to 870,000 wafers. The revision reflects sustained strong demand from the artificial intelligence sector, along with growth driven by larger chip sizes, expanding ASIC accelerator volumes, and a broadening range of applications.
Citibank analysts noted that despite tepid guidance from some downstream ODM manufacturers, supply chain leaders such as Hon Hai maintain strong momentum and optimistic outlooks. NVIDIA's order prospects remain favorable, and Citibank expects the company's wafer revenue at Taiwan Semiconductor Manufacturing to grow more than 50% year over year in 2026.
The research report emphasizes that cloud service providers' (CSPs) aggressive push into ASIC development will become a second engine of Taiwan Semiconductor Manufacturing's robust growth. Advanced packaging demand is also expanding to additional applications, including server CPUs, further supporting optimistic expectations for CoWoS capacity.
**AI Infrastructure Growing Complex, Industry Barriers Rising**
As AI chips move to more advanced process nodes, data transmission speed requirements rise and system design complexity escalates, steadily raising barriers to entry across the industry. Citibank's report indicates that by 2027/2028, AI systems may reach power consumption of 800-900 kilowatts per rack, placing higher demands on cooling and power delivery.
With scaling and high-speed serializer/deserializer (SerDes) I/O growing in importance, more network switch chips and server CPUs will adopt advanced packaging technologies, further driving CoWoS demand. Citibank believes that rising complexity in chip and system design will allow leading suppliers in the AI supply chain to continue enjoying superior growth prospects.
"Despite facing product iteration challenges across upstream and downstream supply chains, based on recent comments from cloud service providers, we see AI demand still surging," stated Citibank analyst Laura Chen in the report. "We are not concerned about upstream and downstream supply chain mismatches."
**NVIDIA AI Chips Maintain Strength, Next-Generation Products Accelerate Iteration**
According to Citibank research, NVIDIA's GB200 remains the primary configuration for AI data centers, with GB300 expected to gradually ramp up in Q4 2025. More notably, Citibank expects NVIDIA's next-generation Vera Rubin system to be officially unveiled at the 2026 GTC conference and deployed in the second half of 2026.
The Vera Rubin system will build on the current GB200/300 system design and Oberon architecture while introducing N3-process GPUs, higher memory density (including LPDDR and HBM4), dual CX9 networking chips, and higher power, supporting long-term AI chip growth for Taiwan Semiconductor Manufacturing, King Yuan Electronics, and ASE Group.
Citibank noted that chip-to-system delivery cycles may take as long as nine months. Despite the challenges of product transitions, the upstream supply chain is prepared for a smooth GB300 volume ramp.
**Cloud Vendor ASIC Accelerators Become Second Growth Engine**
Citibank's report specifically highlights that among cloud service providers, Google and Amazon AWS are the most aggressive in building out proprietary chip ecosystems, while Meta is also increasing investment. Citibank expects ASIC chip shipments to reach 400,000-500,000 units in 2026.
In Google's TPU supply chain, Broadcom remains the primary supplier, but Google's internal teams are also collaborating with MediaTek. Analysts expect MediaTek to hit its $1 billion ASIC revenue target on schedule, based on shipments of at least 200,000 chips in 2026.
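For a sense of scale, the back-of-the-envelope calculation below shows what those two figures imply for average selling price. This is an illustrative sketch, not a number from the report, and it assumes the revenue target is met entirely by the cited shipments:

```python
# Illustrative check (not from the Citibank report): implied average selling
# price if the ~$1B ASIC revenue target were met solely by ~200,000 chips.
revenue_target_usd = 1_000_000_000   # ~$1 billion ASIC revenue target
min_shipments_2026 = 200_000         # at least 200,000 chips in 2026

implied_asp = revenue_target_usd / min_shipments_2026
print(f"Implied average selling price: ${implied_asp:,.0f} per chip")  # ~$5,000
```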
For AWS, N3 process Trainium 3 is expected to achieve larger-scale mass production in the second half of 2026. While the supply chain has discussed N2 process Trainium 4, Citibank believes this is more likely a 2027 or 2028 product.
Microsoft remains relatively slow in proprietary AI ASIC development and still relies primarily on NVIDIA and AMD GPU solutions, but Citibank observes that Microsoft has resumed Maia 300-related activity and expects small-scale mass production next year.
**Advanced Packaging Demand Expands, Diversified Applications Provide New Growth Points**
Citibank research emphasizes that advanced packaging technology applications are expanding from AI accelerators to broader fields. As scaling and high-speed data transmission requirements increase, network switching chips and server CPUs are beginning to adopt CoWoS and other advanced packaging technologies, bringing Taiwan Semiconductor Manufacturing additional growth opportunities.
The report also notes that with increasing system complexity and elevated data transmission requirements, AI infrastructure construction faces higher barriers, enabling industry-leading suppliers to gain competitive advantages and enjoy superior growth prospects.
"Taiwan Semiconductor Manufacturing's CoWoS demand growth primarily stems from larger chip sizes, ASIC accelerator volume expansion in the second half of 2026, and expansion into other applications such as server CPUs," Citibank analysts concluded in the report. These factors collectively support Citibank's upward revision of Taiwan Semiconductor Manufacturing's CoWoS capacity expectations.