China's Qwen AI Model Gains Popularity in the U.S., Pressuring American Tech Firms

Deep News
Yesterday

On November 17, Alibaba officially launched the public beta of its "Qwen APP," offering free access to both web and PC versions. The app, built on Alibaba's self-developed large language model "Qwen" (short for "Tongyi Qianwen"), targets consumer users, positioning itself as a direct competitor to ChatGPT.

Anticipation of the Qwen APP had been building for days before its release. On November 12, former Google CEO Eric Schmidt highlighted in a podcast the unusual dynamic in which "the largest U.S. models are closed-source, while China's largest models are open-source—freely accessible versus paid." His remarks sparked widespread discussion in Silicon Valley, with industry insiders speculating about upcoming developments. A day later, Bloomberg reported on Alibaba's Qwen initiative, revealing the imminent launch of the Qwen APP.

Bloomberg data shows that Qwen's downloads have surpassed Meta's Llama, making it one of the most popular open-source large language models globally. Researcher Matt Stoller warned that this reflects a strategic misstep by Silicon Valley, which has overcommitted to closed, compute-intensive models.

Industry experts note that while the debate centers on an app, it underscores a broader global contest over AI development philosophies—open versus closed systems. The preference for Chinese AI models among U.S. companies is becoming evident. Airbnb CEO Brian Chesky openly stated that his company relies heavily on Qwen, finding it "better and cheaper" than OpenAI's offerings.

This trend isn't new. A Stanford University AI Index Report released on April 10 noted that the performance gap between top U.S. and Chinese AI models has narrowed to just 0.3%, with Alibaba ranking third globally in notable model contributions. Nvidia CEO Jensen Huang also praised Qwen and DeepSeek as among the best open-source AI models during an earnings call on May 29, citing their widespread adoption in the U.S., Europe, and beyond.

Qwen's success in enterprise applications has now extended to consumer-facing products like Qwen APP, democratizing access to cutting-edge AI. As Chinese Academy of Engineering member Zheng Weimin emphasized, Qwen's achievements highlight the importance of developing competitive AI models that can elevate China's global standing in the field.

Open-source models like Qwen, whose code and weights are published for free use and modification, foster collaboration and innovation, offering cost-effective options for developers and businesses worldwide. Alibaba's approach has enabled startups, researchers, and tech enthusiasts to explore AI affordably, prompting U.S. rivals such as OpenAI and xAI to release open models of their own.

To date, Qwen has open-sourced over 300 models across modalities and sizes, serving more than 1 million global enterprises, including the International Olympic Committee, BMW, LVMH, L'Oréal, Siemens, Starbucks, and Marriott. OpenAI CEO Sam Altman even conceded that his company has "been on the wrong side of history" regarding open-source AI.

These developments underscore the growing appeal of open technology. Chinese researchers and tech firms remain committed to advancing AI for the benefit of humanity, grounded in practical innovation.

