Evaluating AI Investment Returns: Key Questions and Responses from Tencent and Alibaba's Earnings Calls

Deep News
May 14

On May 13, Alibaba and Tencent released their quarterly results on the same day (Tencent's Q1 FY2026, Alibaba's Q4 FY2026). Alibaba reported quarterly revenue of 243.4 billion yuan, up 11% on a comparable basis; Tencent's revenue reached 196.5 billion yuan, up 9% year-on-year. While the headline figures remained solid, investor attention on the evening earnings calls centered almost entirely on one question: how should the return on AI investment be calculated? The question captures the deepest anxiety in the AI industry today. On one hand, supply bottlenecks have doubled GPU costs over two years, making the investment unavoidable. On the other, the consumer subscription model is widely considered "unlikely to become a large market" in China, while enterprise willingness to pay for SaaS has long been low. Computing power investment looks like a bottomless pit: who will ultimately foot the bill for this intelligence?

What are investors most concerned about? Despite the vast scale and differing business structures of Alibaba and Tencent, the topics of greatest concern to investors that evening were highly aligned. Faced with the reality of Alibaba's free cash flow turning negative this quarter, with a net outflow of 17.3 billion yuan, and Tencent's capital expenditure increasing 16% year-on-year to 31.9 billion yuan, analysts from institutions such as UBS and J.P. Morgan repeatedly pressed: How is the ROI of massive computing power investments being measured? What framework is used to balance current capital expenditure with future returns?

The response from Alibaba Cloud's management leaned more towards a narrative of certainty akin to "heavy-asset infrastructure." They compared AI data center construction to "AI training factories" and "AI inference factories," arguing that the scale of these two core facilities determines future revenue scale. They stated that currently, there is hardly an idle GPU in Alibaba Cloud's servers, and demand certainty over the next 3 to 5 years is extremely high, making the investment return "very certain."

Tencent's statements were more cautious. The core argument from management was "investment portfolio management": AI investments include both short-cycle and long-cycle types. Investing in advertising technology yields quick, short-term returns; investing in foundational model training requires a longer-term perspective and should not be judged by short-term financial returns.

Capital expenditure on AI infrastructure is also crucial, and here Alibaba and Tencent differ in pace and strategy. Alibaba Cloud's management offered a specific reference point: to meet revenue targets over the next five years, data center construction would need to be at least 10 times the scale of 2022 (before the large-model boom); the previously announced "380 billion yuan over three years" plan is already insufficient to cover future investment needs. On funding, Alibaba will draw on multiple avenues: building with its own CAPEX, obtaining resources through OPEX leasing, and partnering with data center service providers through sales of its T-Head AI servers.

Tencent explicitly confirmed the GPU shortage in Tencent Cloud, stating that as of Q1 2026 it "still did not have enough GPUs to begin serving external demand." Current GPUs are prioritized for internal flagship businesses: foundation models, Yuanbao, advertising, gaming, Workbuddy, and Codebuddy, among others. Tencent's Q1 report recognized capital expenditure of 31.9 billion yuan, up 16% year-on-year, but cash payments reached 37 billion yuan, with the difference recorded as prepayments for goods not yet delivered. To address the bottleneck, Tencent outlined a multi-layered plan to supplement chip supply. First, accelerate domestic substitution: James Mitchell gave a clear timeline for improvement, with more domestically produced AI chips to be deployed month by month in the second half of this year and significant growth expected late in the year. Second, diversify procurement channels: beyond buying more domestic GPUs, new computing power will come from leasing external capacity and procuring resumed supplies of high-end imported GPUs. Third, manage the supply chain for the long term: Mitchell emphasized that Tencent maintains long-term cooperative relationships with suppliers such as Intel and AMD, who are willing to sign long-term agreements with partners to ensure predictable revenue.

Regarding AI commercialization and the feasibility and potential of AI subscriptions in the Chinese market, Alibaba and Tencent share considerable consensus. Alibaba Cloud directly cited overseas comparisons: subscription prices in Western markets are several times higher than for equivalent services in China, and the paid penetration rate for music and video services in China remains in the single digits. Therefore, the subscription model in China "will not be that large." Simultaneously, Alibaba foresees that AI services are inherently not a winner-takes-all model—every user call incurs variable costs, which dictates that the market will be shared by multiple players.

Tencent's statement was more succinct: "When you have to charge to support a service, that service is unlikely to be a winner-takes-all business." Both companies clarified their approach to monetization through indirect models like advertising and e-commerce but acknowledged that this requires a longer cultivation period.

From Chatbot to Agent: How Alibaba and Tencent Are Betting on AI's Next Phase

The differences and choices above outline several core competitive dimensions in the AI industry. The scale and autonomy of computing power infrastructure may be the most critical dimension at present. Alibaba demonstrates notable full-stack capability here. Its self-developed T-Head GPU chips have reached mass production, with over 60% of their computing power serving external commercial clients in core scenarios such as internet finance and autonomous driving. Alibaba Cloud management also disclosed for the first time that its AI inference business, thanks to continuous optimization of single-card token throughput and pricing traction effects, is driving a "relatively significant improvement" in overall gross margin.

Tencent is also accelerating the deployment pace of its self-developed ASIC chips, emphasizing that "capital expenditure will increase significantly in the second half of this year as more China-designed ASICs become ready month by month." However, it places greater emphasis on the supply chain resilience derived from long-term partnerships, including deep accumulated expertise with Intel and AMD in CPUs and networking chips.

In commercialization, AI model capability and ecosystem breadth play major roles. Based on the disclosed revenue structures, Alibaba Cloud segments its AI business into "Bailian MaaS API services" and "AI-native software subscription revenue," with self-developed models constituting the vast majority of the revenue scale. Looking ahead, Alibaba Cloud expects that "model and application services, including the Bailian MaaS platform, will see annualized recurring revenue exceed 10 billion yuan in the June quarter and surpass 30 billion yuan by year-end."

Tencent emphasizes its unique ecosystem advantage. Through communication interfaces like WeChat, QQ, and WeCom, users can control AI Agents, and Mini Programs will be integrated into the Agent system as "AI skills" in the future, forming a closed loop for traffic circulation within the entire ecosystem.

Of course, both Alibaba and Tencent acknowledge that willingness to pay in the enterprise market is currently much stronger than in the consumer market. But Tencent's statement goes a step further—the "infinite scaling" logic of the internet era is no longer applicable in the AI era because every delivery corresponds to actual computational costs. Therefore, "the ability to find high-value users will be as important as, or even more important than, blindly acquiring a large number of users, DAU, and user time."
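Tencent's point that every delivery carries a real compute cost can be made concrete with a small unit-economics sketch. All figures below (plan price, token usage, serving cost per million tokens) are illustrative assumptions for the sake of the arithmetic, not numbers disclosed on either call:

```python
# Hypothetical unit economics for a subscription AI service. Unlike
# internet-era products with near-zero marginal cost, every answered
# request consumes compute, so a heavy user on a flat-price plan can
# be net-negative. All numbers here are assumptions, not disclosures.

def monthly_margin(sub_price_yuan: float,
                   tokens_per_month: float,
                   cost_per_million_tokens_yuan: float) -> float:
    """Subscription revenue minus inference compute cost for one user."""
    compute_cost = tokens_per_month / 1_000_000 * cost_per_million_tokens_yuan
    return sub_price_yuan - compute_cost

# A light user vs. a heavy user on the same hypothetical 68 yuan/month
# plan, assuming 3 yuan of serving cost per million tokens.
light = monthly_margin(68, 2_000_000, 3.0)    # 68 - 6   = 62 yuan margin
heavy = monthly_margin(68, 40_000_000, 3.0)   # 68 - 120 = -52 yuan margin
print(light, heavy)
```

Under these assumed numbers, the light user yields a 62 yuan monthly margin while the heavy user loses 52 yuan, which is exactly why "finding high-value users" can matter more than maximizing DAU when marginal cost is nonzero.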

This forms an intriguing contrast with recent moves by ByteDance's Doubao. As China's top AI-native application by monthly active users, Doubao officially launched a paid subscription plan in May 2026 (Standard at 68 yuan/month, Enhanced at 200 yuan, Professional at 500 yuan). Doubao's large model daily Token processing volume has exceeded 120 trillion, doubling over the past three months. Liang Rubo positioned Doubao as "integrating existing businesses through AI assistants," with Douyin E-commerce being the largest component, essentially using AI to reshape ByteDance's consumer business closed loop.

More notably, within the same week in May, five major domestic model makers took three different pricing actions: Doubao began charging, DeepSeek cut prices sharply, and Zhipu, Alibaba, Tencent, and others raised prices. This is itself a divergence along each player's strengths. The large model industry has already split over the answer to "how to make money," and this earnings season merely wrote those divergences between the lines of the official filings.

