NVIDIA's annual GTC conference delivered a core message: the commercial logic of AI compute is undergoing a fundamental restructuring. Tokens have become the new commodity, and compute equals revenue. At this GTC, NVIDIA management significantly raised its data-center sales visibility from the previous $500 billion (covering through 2026) to over $1 trillion (cumulative for 2025 through 2027), and clarified that sales of the standalone Vera CPU and LPX rack solutions are incremental to this figure.

Wall Street views the conference as strong validation of the sustainability of NVIDIA's AI cycle. A J.P. Morgan report notes that the new figure implies upside of at least $50 billion to $70 billion relative to current consensus expectations for 2026-2027 data-center revenue. A Bank of America Securities report directly quoted NVIDIA management's statement that "Tokens are the new commodity, compute equals revenue," and highlighted that the Blackwell system achieves up to a 35x reduction in cost per token versus the previous-generation Hopper. The upcoming Rubin series is expected to cut this by a further 2x to 35x, depending on workload type and architectural configuration. Within NVIDIA's narrative framework, this continuously compressing token-cost curve is the fundamental driver of scaling demand expansion.

Demand visibility has doubled, driven by both the hyperscale and enterprise markets. NVIDIA management disclosed that high-confidence purchase orders for Blackwell and Vera Rubin systems now exceed $1 trillion, double the $500 billion figure announced at the GTC DC event in October 2025, and indicated that additional orders and backlog for 2027 are expected to accumulate over the next 6 to 9 months. The demand structure is diversified: approximately 60% comes from hyperscale cloud providers, whose internal AI consumption is shifting from recommendation/search workloads to large language models.
The remaining roughly 40% is distributed among CUDA cloud-native AI enterprises, NVIDIA cloud partners, sovereign AI, and industrial/enterprise clients. Bank of America points out that the new $1 trillion outlook aligns closely with the prior Wall Street expectation of approximately $970 billion in data-center revenue for the same three-year period, much as the $500 billion outlook from October 2025 validated the roughly $450 billion then expected.

Notably, NVIDIA management dedicated significant conference time to the accelerating shift of traditional enterprise workloads onto accelerated computing. NVIDIA announced collaborations with IBM, Google Cloud, and Dell, among others, and introduced two new CUDA-X foundational libraries, cuDF and cuVS. J.P. Morgan believes this direction is "significantly underappreciated by the market": with Moore's Law plateauing, domain-specific acceleration is the only viable alternative path, and it expands NVIDIA's addressable market beyond the AI training/inference cycle.

Groq LPU Integration: The Most Significant Architectural Launch

J.P. Morgan rated the integration of the Groq 3 LPU with Vera Rubin as the "most significant architectural-level new product launch" of this GTC. The disaggregated inference architecture pairs the Rubin GPU, optimized for high throughput, with the Groq LPU, optimized for low-latency decoding: prefill runs on Vera Rubin, the attention portion of decode also runs on Rubin, and the feed-forward network/token-generation step is offloaded to the Groq LPU. The LPX rack integrates 256 LPUs, providing 128 GB of aggregate SRAM, 40 PB/s of memory bandwidth, and 315 PFLOPS of inference compute. It is scheduled for availability in the third quarter of 2026.
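The prefill/attention/FFN split described above can be sketched as a minimal decode loop. Everything below is an illustrative mock built on the article's description of the partition, not a real NVIDIA or Groq API: all class names, methods, and the stand-in arithmetic are hypothetical placeholders, assuming prefill and per-step attention stay on the GPU while the feed-forward/token-generation step is shipped to the LPU.

```python
# Minimal sketch of the disaggregated decode loop described above:
# prefill and attention run on the GPU (Vera Rubin), while the
# feed-forward / token-generation step is offloaded to the LPU.
# Every name here is a hypothetical placeholder, not a real API.
from dataclasses import dataclass


@dataclass
class RubinGPU:
    """Throughput-optimized stage: prefill + attention (placeholder)."""

    def prefill(self, prompt_tokens):
        # Build the KV cache from the full prompt in one large batch.
        return {"kv_cache": list(prompt_tokens)}

    def attention(self, state, token):
        # One decode-step attention pass against the KV cache.
        state["kv_cache"].append(token)
        return sum(state["kv_cache"]) % 50_000  # stand-in activation


@dataclass
class GroqLPU:
    """Latency-optimized stage: FFN / token generation (placeholder)."""

    sram_gb: float = 128 / 256  # 128 GB aggregate SRAM across 256 LPUs

    def ffn_decode(self, activation):
        # SRAM-resident FFN: emit the next token with low latency.
        return (activation * 31) % 50_000  # stand-in next-token id


def generate(gpu, lpu, prompt_tokens, n_new):
    state = gpu.prefill(prompt_tokens)  # one-shot prefill on Rubin
    token, out = prompt_tokens[-1], []
    for _ in range(n_new):
        act = gpu.attention(state, token)  # attention stays on Rubin
        token = lpu.ffn_decode(act)        # offloaded to the LPX rack
        out.append(token)
    return out


tokens = generate(RubinGPU(), GroqLPU(), [101, 7, 42], 4)
print(tokens)
```

The point of the sketch is the control flow: each decode step round-trips between the throughput-optimized and latency-optimized parts, which is why the interconnect between the Rubin and LPX racks matters so much in this design.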
NVIDIA management stated that for workloads requiring ultra-high token speed, approximately 25% of data-center power will be allocated to LPX racks, with the remaining 75% going to pure Vera Rubin NVL72 configurations. Bank of America data shows that combining the Rubin system with SRAM-based LPX racks can improve efficiency on high-end, low-latency workloads by up to 35x versus the previous generation. J.P. Morgan notes that this architecture directly addresses the fundamental conflict that a single processor cannot simultaneously optimize for both throughput and latency, enabling NVIDIA to compete effectively in the high-end inference market, traditionally an ASIC-vendor stronghold.

Copper and CPO Advance in Parallel, No Single Bet on Interconnect

NVIDIA management directly addressed the copper-versus-co-packaged-optics (CPO) debate, confirming that both pathways will be pursued concurrently. In the current Vera Rubin generation, the Oberon rack uses copper cabling to scale to NVL72 and optics to scale to NVL576. The Spectrum-6 SPX CPO Ethernet switch, co-developed by NVIDIA and TSMC, is in volume production, with management claiming 5x better power efficiency and 10x greater resilience than traditional pluggable transceivers. For Rubin Ultra, the Kyber rack will use copper NVLink scaling and also offer a CPO-based NVLink switching solution as an alternative. The Feynman platform will explicitly support both copper and CPO scaling and will include the Spectrum-7 switch for scale-out. Bank of America emphasizes that adopting CPO scale-out switches is entirely optional: customers can continue using copper for as long as they see fit. J.P. Morgan views this dual-path confirmation as consistent with its prior expectations, anticipating that copper will continue to dominate NVL72/NVL144 configurations until at least 2027 while CPO gradually gains share in scale-out and NVL576+ configurations.
Vera CPU: A Standalone Multi-Billion-Dollar Revenue Stream for Agent AI

NVIDIA management explicitly stated that the standalone Vera CPU business is "already established as a multi-billion dollar business." Bank of America notes that this revenue stream is not yet reflected in current market consensus, making it an incremental contribution. The Vera CPU features 88 custom Olympus Arm cores; its LPDDR5X memory subsystem delivers 1.2 TB/s of bandwidth while consuming half the power of traditional server CPUs, and it connects to GPUs via NVLink-C2C at 1.8 TB/s. A Vera CPU rack integrates 256 liquid-cooled CPUs, supporting over 22,500 concurrent CPU environments.

Management emphasized that CPUs are becoming the bottleneck for scaling Agent AI: reinforcement learning and agentic workflows require massive fleets of CPU environments to test and validate the output of GPU-hosted models. Meta is already deploying the previous-generation Grace CPU at scale, with Vera slated to succeed it in 2027. J.P. Morgan characterizes this CPU revenue stream as high-margin, recurring, and structurally tied to the Agent AI adoption curve that NVIDIA is actively fostering.

Product Roadmap Extends to 2028, Annual Architecture Cadence Strengthened

NVIDIA reaffirmed its annual platform release cadence. Rubin Ultra will feature a 4-chip GPU configuration with 1 TB of HBM4e and introduce the new LP35 LPU chip, and the Kyber rack will support 144 GPUs per NVLink domain. Details disclosed for the Feynman platform exceeded market expectations. J.P. Morgan believes NVIDIA's vertically integrated platform, now spanning seven chip types, five rack systems, and the supporting software stacks, is difficult to replicate. It concludes that accelerating inference demand, the structural expansion of the addressable market from traditional workload acceleration, and a continuously broadening customer base collectively support a more sustained AI capital-expenditure cycle than the market currently anticipates.
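As a closing sanity check, the Vera CPU rack figures quoted above are internally consistent. The one-environment-per-core mapping in the sketch below is an assumption of mine, not something the article states:

```python
# Sanity check on the Vera CPU rack figures quoted in the article.
# The one-environment-per-core mapping is my assumption, not NVIDIA's.
cores_per_cpu = 88    # custom Olympus Arm cores per Vera CPU
cpus_per_rack = 256   # liquid-cooled Vera CPUs per rack

total_cores = cores_per_cpu * cpus_per_rack
print(total_cores)  # -> 22528

# The article quotes "over 22,500 concurrent CPU environments";
# at roughly one environment per core, the figures line up.
assert total_cores > 22_500
```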