Google’s annual cloud summit Cloud Next 2026 has sent a clear signal: the competition for enterprise AI has shifted from experimentation to governance and large-scale deployment, and Google offers a complete vertical stack spanning chips to platforms as its solution. More than a product launch event, this summit marks a critical turning point, as agentic AI crosses the threshold from proof-of-concept to enterprise production deployment.
According to Trading Pulse, JPMorgan analyst Doug Anmuth wrote in a post-event note: “This shift from experimentation to deployment is perhaps the strongest evidence that agentic AI is moving beyond the proof-of-concept stage toward enterprise-grade workloads.”
Demand-side data validates this trend. The processing capacity of Google’s first-party models via direct API access has reached 16 billion tokens per minute, a sharp rise from 10 billion in the previous quarter. Around 75% of Google Cloud customers are now using its AI products, and paid monthly active users of Gemini Enterprise increased by 40% quarter-on-quarter in the first quarter.
Three major financial institutions — JPMorgan, BofA Securities and Citi Research — all maintained their buy ratings on Alphabet after the event, with target prices set at $395, $370 and $405 respectively. Their shared logic: Google Cloud’s revenue growth continues to outpace its advertising business. The combination of Gemini models, in-house TPU chips and enterprise orchestration platforms has built a differentiated moat and is poised to become a more direct share price catalyst. Meanwhile, Sundar Pichai announced a 2026 capital expenditure range of $175 billion to $185 billion. The market remains highly focused on the company’s capex trajectory ahead of and after the earnings release.
Enterprise Clients Shift Focus: From Trial Use to AI Governance
While Cloud Next over the past two years focused on demonstrating technical capabilities, this year's core theme is scaling AI from experimental trials among early adopters into fully operational, governable and cost-controlled production workloads.
JPMorgan’s research review outlines this evolutionary path: 2024 focused on Gemini integration with Workspace and early AI agent exploration; 2025 emphasized A2A protocols and the 7th‑generation Ironwood TPU; and 2026 centers on Agentic Cloud, data accessibility, AI infrastructure cost efficiency and cybersecurity. All these directions converge on one core goal: advancing AI agents from pilot projects to sustainable production operations.
Ronald Josey, analyst at Citi Research, put it more straightforwardly. As corporate managers begin to oversee multiple cross-workflow AI agents, enterprises are evolving from simply adopting large language models to reshaping business processes with agentic workflows. Google Cloud is betting heavily on this industry shift and positioning itself as the core operating system for agentic enterprises.
This explains why the conference focused heavily on two major themes: computing and network architectures optimized for agent workflows, and platform upgrades that recast the cloud platform as an "agent factory". Google released no financial updates at the event, relying instead on real customer adoption data to show that its AI products run stably in production environments. Internal data shows that approximately 75% of new Google code is AI-generated and reviewed by engineers, and security threat response time has been cut by more than 90%.
8th-Generation TPU: Inference Decoupled from Training to Form an Independent Capital Growth Story
The most structurally significant hardware upgrade at this summit is the official split of the 8th‑generation TPU into two independent product lines. TPU 8t is designed for high-throughput training workloads, while TPU 8i is a dedicated chip built from the ground up and optimized for real-time inference.
JPMorgan clearly explained the logic behind this bifurcated architecture. Equipped with the new Virgo Network Fabric, TPU 8t supports hyper-scale cluster expansion with over one million chips per cluster, delivering peak performance roughly three times that of the previous-generation Ironwood, and greatly shortening the training cycle for trillion-parameter frontier models. TPU 8i adopts a new Boardfly network topology with on-chip SRAM tripled, specifically breaking the latency and memory bottlenecks restricting large-scale agentic inference. Citi added efficiency data: TPU 8i reduces latency by nearly 80% compared with the 7th‑generation TPU and improves cost performance by around 80%.
JPMorgan pointed out a key industry judgment: since inference workloads no longer rely on repurposed training chips and instead require customized ASIC optimization, Google has in effect confirmed that inference computing demand has grown large enough to support independent chip research and development and targeted capital allocation. This brings structural changes to revenue composition: growth is no longer solely driven by training demand, but by continuous inference consumption, forming an independent long-term growth curve.
Notably, all three research reports mentioned that management did not discuss the possibility of external TPU sales at the event. At this stage, Google’s hardware strategy mainly serves internal usage and cloud service sales, rather than independent commercialization of hardware products.
Platform Restructuring: Vertex AI Upgraded Into a Unified Governance Hub for Enterprise AI Agents
Beyond hardware iteration, platform restructuring is another key structural change. Google launched the Gemini Enterprise Agent Platform, which JPMorgan noted effectively supersedes the original Vertex AI. It integrates enterprise agent development, workflow orchestration, lifecycle governance and security management into one unified entry point, replacing scattered and isolated functional modules.
BofA Securities divided this platform upgrade into three layers. The infrastructure layer introduces AI Hypercomputer, integrating GPU/TPU, high-speed networks, storage and optimized software to cover the full lifecycle of AI training and inference. The platform layer builds comprehensive capabilities around four dimensions: build, scale, govern and optimize, including low-code or no-code agent creation, centralized management, cross-ecosystem orchestration covering Google Workspace, Microsoft 365 and third-party applications, as well as built-in observability and traceability tools. The application layer embeds agent capabilities into high-frequency office tools such as Gmail, Docs and Chat through Workspace Intelligence, enabling multi-step cross-application automated tasks.
Citi offered a different perspective, emphasizing that the core value of the new platform lies in enabling enterprises to manage and operate multiple AI agents under one unified system. In product logic, this means the threshold for large-scale agent deployment no longer depends entirely on enterprise technical strength. Standardized platform capabilities allow more enterprises to skip customized engineering and quickly complete production deployment.
Google Validates Full-Stack AI Competitiveness with Internal Production Data
With no financial data disclosed at the keynote, Google used quantifiable internal operational cases to demonstrate the large-scale production deployment of agentic AI. Citi summarized these applications into four categories:
R&D: Around 75% of new internal code is AI-generated and reviewed by engineers, up from 50% in October 2025 and 30% in Q1 2025, reflecting rapid penetration. A major code migration project was completed six times faster than a year ago.
Marketing and Content Production: The overall production cycle from initial concept to video materials is shortened by 70%, with a 20% increase in conversion rates.
Cybersecurity: Google Cloud automatically processes tens of thousands of unstructured threat reports every month, cutting threat mitigation time by over 90%. Its security portfolio is strengthened through the integration of Wiz and Mandiant. Citi also mentioned that AI has compressed the average vulnerability exploitation window to minus seven days, meaning exploits now appear on average a week before public disclosure, further highlighting the strategic value of automated security orchestration.
Customer Service: YouTube deployed an AI voice agent within six weeks to handle inbound calls for NFL Sunday Ticket and YouTube TV. The solution features low latency, high accuracy and bilingual support.
All three institutional reports cited these cases to distinguish real enterprise workloads from staged demos, supporting the expectation of upward performance momentum for Google Cloud in the current quarter.
$175 Billion–$185 Billion Capex Guidance: Stable Spending Plan, Not a Growth Ceiling
Sundar Pichai’s disclosure of the 2026 capital expenditure range is the only core financial guidance released at the event, and also the main point of divergence among the three Wall Street institutions.
JPMorgan offered a prudent interpretation. The clear disclosure of the capex range raises the probability of stable guidance in the upcoming earnings release, instead of signaling a peak in capital spending. It forecasts Alphabet’s 2026 capex at $181 billion and 2027 capex at $226 billion, representing a 25% year-on-year increase and 12% above market consensus. Meanwhile, senior executives including Amin Vahdat and Jeff Dean emphasized that AI infrastructure remains supply-constrained, suggesting further upside for capital spending and ruling out the conclusion that the announced range acts as a hard ceiling.
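The growth figure JPMorgan cites follows directly from its two capex forecasts; a quick back-of-the-envelope check (using only the numbers quoted above, which are analyst forecasts rather than official guidance) confirms the arithmetic:

```python
# Sanity check of JPMorgan's capex trajectory as quoted in the note.
capex_2026 = 181  # $B, JPMorgan forecast (within the $175B-$185B guided range)
capex_2027 = 226  # $B, JPMorgan forecast

yoy_growth = (capex_2027 - capex_2026) / capex_2026
print(f"2027 YoY capex growth: {yoy_growth:.1%}")  # ~24.9%, matching the ~25% cited
```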
BofA Securities listed rising capex and compressed free cash flow as key downside risks, noting that heavy AI investment is the core factor squeezing profit margins.
The three institutions share one consensus: Cloud Next has proven Google’s complete product and infrastructure layout for agentic AI. The core focus of the next few quarters will be whether such heavy AI investment can deliver solid Cloud growth and margin improvement without a notable decline in cash flow.
Three Banks Maintain Buy Ratings with Differentiated Risk and Valuation Logic
All three investment banks kept their buy ratings unchanged, yet with different valuation anchors and core arguments.
JPMorgan maintained an Overweight rating with a 12-month target price of $395, based on 29x its 2027 GAAP EPS forecast of $13.51. It listed Alphabet as a top overall pick, driven by steady search and YouTube advertising growth, expanding non-advertising businesses and long-term option value from Waymo.
BofA Securities reiterated its buy rating with a $370 target price, valued at 27x 2027 core GAAP EPS plus per-share cash. It continues raising the weighting of Cloud in the SOTP valuation framework, estimating a market cap contribution of $1.2 trillion for the cloud business based on a 10x revenue multiple, supported by cloud margin expansion and AI monetization potential.
Citi Research maintained a buy rating and issued the highest target price of $405, corresponding to 29x 2027 GAAP EPS of $13.92. Its bullish logic focuses on accelerating Google Cloud revenue growth driven by TPU and Gemini demand, as well as strong resilience in search business driven by solid query volume.
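For the two banks that disclose both the multiple and the EPS forecast, the target prices can be roughly reproduced as multiple × 2027 GAAP EPS; the sketch below uses only the figures quoted above, with small residuals attributable to rounding (BofA's $370 is omitted because it also folds in per-share cash and its EPS input is not stated):

```python
# Rough reconstruction: target price ~ forward P/E multiple x 2027 GAAP EPS forecast.
# All inputs are the analyst figures quoted in the article.
targets = {
    "JPMorgan": (29, 13.51, 395),  # (multiple, 2027 EPS forecast $, published target $)
    "Citi":     (29, 13.92, 405),
}
for bank, (multiple, eps_2027, published) in targets.items():
    implied = multiple * eps_2027
    print(f"{bank}: {multiple}x * ${eps_2027} = ${implied:.0f} (published ${published})")
```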
On risk factors, intensified AI competition and sustained search traffic diversion are common concerns. JPMorgan and BofA both highlighted regulatory pressure from the EU DMA. BofA regarded slower-than-expected LLM integration within core search as the biggest short-term uncertainty. Alphabet’s Q1 earnings, scheduled for after-hours release on April 29, will become the next key market catalyst.