Amazon.com's AI infrastructure strategy has reached a critical milestone with the full operational launch of its flagship data center. CEO Andy Jassy recently announced on social media platform X that a former cornfield near South Bend, Indiana, now hosts Project Rainier—one of the world's largest AI computing clusters. Developed jointly by AWS and AI unicorn Anthropic, the system deploys nearly 500,000 proprietary Trainium2 chips, representing a 70% scale increase over any previous AWS AI platform.
Jassy revealed that Anthropic is already using this system to train and run its Claude large language model, with over 5x the computing power of its previous training setups. Amazon plans to double the Trainium2 deployment to 1 million chips by year-end, marking a shift from infrastructure planning to realized production capacity and a pivotal moment for its AI business.
Analysts project significant growth from this expansion: Morgan Stanley forecasts AWS revenue growth of 23% and 25% over the next two years, while Bank of America estimates Anthropic alone could generate $6 billion in incremental AWS revenue by 2026.
**Redefining AI Infrastructure Scale** Project Rainier's activation marks the beginning of AWS's large-scale AI capacity expansion. The distributed system links tens of thousands of UltraServers across U.S. data centers via NeuronLink interconnect technology, minimizing latency while maximizing computational efficiency. With plans to add 1 GW of capacity and approximately 500,000 more Trainium2 chips by December, Amazon aims to double AWS's total power capacity by 2027.
AWS CEO Matt Garman emphasized the superiority of these custom chips over generic alternatives, with Jassy noting during earnings calls: "Trainium2 adoption continues accelerating—current capacity is fully booked. This business is scaling rapidly."
**Proprietary Chip Strategy Gains Traction** At the core of Amazon's AI strategy lies its dual-engine chip architecture: the Trainium series for AI training and Inferentia series for inference. This approach is now demonstrating tangible results—the Trainium business has grown into a multi-billion dollar operation with 150% quarterly growth, simultaneously reducing model training costs while improving AWS margins.
Amazon is already preparing Trainium3 for potential release at this year's re:Invent conference, with broader deployment expected in 2026. The next-gen chip promises performance upgrades and expanded accessibility, signaling AWS's push beyond elite clients into broader enterprise markets.
Bank of America analyst Justin Post observes that the cost optimization from proprietary chips is materializing: "Trainium adoption has significantly reduced training and inference expenses, creating a new multi-billion dollar growth engine while boosting AWS margins." Notably, most token usage on Amazon's Bedrock AI platform already runs on Trainium chips—a platform Jassy compares to AWS's foundational EC2 service in long-term potential.
**Morgan Stanley Upgrades Amazon: AWS Enters "AI Acceleration Cycle"** Morgan Stanley recently named Amazon a "Top Pick," raising its price target from $300 to $315 (25% upside potential) and citing four key growth drivers for AWS's "AI acceleration cycle":
1. Rapid capacity expansion: an additional 1 GW of compute power and a doubled Trainium2 fleet by year-end
2. Structural expansion: 10 GW of new data center capacity planned over the next 24 months
3. Surging AI demand: October new bookings exceeded Q3 totals, implying roughly $18 billion in monthly new business
4. Innovation momentum: Trainium3 expected this year alongside Bedrock platform expansion
The report highlights AWS's current "capacity-constrained" status as paradoxical growth fuel—October's new business volume surpassed entire Q3 figures, suggesting approximately $18 billion in new commitments. "Growth would be faster without compute bottlenecks," analysts noted, "These expansion plans pave the way for reacceleration in 2026-2027."
Morgan Stanley raised its 2026/2027 Amazon capex estimates by 13%/19% to $169 billion/$202 billion respectively, with $140 billion/$170 billion allocated to technology and infrastructure, surpassing peers such as Microsoft, Meta, and Alphabet. Despite the massive investment, analysts characterize this as the "very early stages," with capacity being "immediately absorbed when available," presenting "unprecedented opportunities for AWS customers."
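As a quick sanity check on the revision arithmetic (a minimal sketch: the dollar totals and percentage raises are those quoted from the Morgan Stanley report, while the pre-revision estimates are derived from them, not reported), dividing each new capex total by its growth factor recovers the implied prior estimates:

```python
# Implied pre-revision capex estimates, derived from the quoted revision:
# +13% to $169B for 2026 and +19% to $202B for 2027 (report figures).
revisions = {2026: (169, 0.13), 2027: (202, 0.19)}  # (new total in $B, raise)

for year, (new_total, pct) in revisions.items():
    prior = new_total / (1 + pct)  # back out the pre-revision estimate
    print(f"{year}: ~${prior:.0f}B raised {pct:.0%} to ${new_total}B")
# 2026: ~$150B raised 13% to $169B
# 2027: ~$170B raised 19% to $202B
```

In other words, the revision adds roughly $20 billion and $32 billion to the two years' prior estimates.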