Alibaba Unveils Qwen3.5: Performance Rivals Gemini 3 at 1/18th the Token Cost

Deep News
Yesterday

Alibaba has open-sourced its latest large language model, Qwen3.5-Plus, positioning it as the world's most powerful open-source model, with performance comparable to Gemini 3 Pro. The Qwen3.5 series represents a comprehensive overhaul of the underlying model architecture: the newly released Qwen3.5-Plus has 397 billion total parameters but activates only 17 billion at inference time, enabling it to outperform the trillion-parameter Qwen3-Max. This sparse design cuts GPU memory usage for deployment by 60% and raises maximum inference throughput by up to 19 times. API pricing for Qwen3.5-Plus starts as low as ¥0.8 per million tokens, just 1/18th the cost of Gemini 3 Pro.
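The pricing claim can be made concrete with a quick back-of-the-envelope calculation. Note that only the Qwen3.5-Plus rate (¥0.8 per million tokens) comes from the article; the Gemini 3 Pro figure below is inferred from the stated 1/18th ratio, not a published price, and the 500-million-token workload is a hypothetical example.

```python
# Token-cost comparison sketch based on the pricing cited above.
QWEN35_PLUS_CNY_PER_M = 0.8                      # from the article
GEMINI3_PRO_CNY_PER_M = QWEN35_PLUS_CNY_PER_M * 18  # inferred: ¥14.4 per M tokens

def monthly_cost(tokens_per_month: int, price_per_m_tokens: float) -> float:
    """Cost in CNY for a given monthly token volume at a per-million-token rate."""
    return tokens_per_month / 1_000_000 * price_per_m_tokens

# Hypothetical workload: 500 million tokens per month.
tokens = 500_000_000
print(f"Qwen3.5-Plus:            ¥{monthly_cost(tokens, QWEN35_PLUS_CNY_PER_M):,.2f}")
print(f"Gemini 3 Pro (inferred): ¥{monthly_cost(tokens, GEMINI3_PRO_CNY_PER_M):,.2f}")
```

At this volume the gap is ¥400 versus ¥7,200 per month, which is where an 18x price difference starts to matter for high-throughput applications.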

Qwen3.5 also marks a generational leap from a pure text model to a natively multimodal one: while Qwen3 was pre-trained on text tokens only, Qwen3.5 is pre-trained on a mixture of visual and text tokens, with significantly more data covering Chinese, English, and other languages, STEM subjects, and reasoning tasks. This "sighted" pre-training gives the model denser world knowledge and stronger reasoning logic, allowing it to match the trillion-parameter Qwen3-Max base model with less than 40% of its parameters. It performs strongly across comprehensive benchmark evaluations in reasoning, programming, and agent capabilities.

In benchmark tests, Qwen3.5 scored 87.8 on the MMLU-Pro knowledge-reasoning evaluation, surpassing GPT-5.2, and 88.4 on the doctoral-level GPQA assessment, higher than Claude 4.5. It set a new record of 76.5 on the instruction-following benchmark IFBench. On the general-agent evaluation BFCL-V4 and the search-agent evaluation BrowseComp, Qwen3.5 outperformed both Gemini 3 Pro and GPT-5.2.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products; any associated discussions, comments, or posts by the author or other users should not be considered as such either. It is provided for general information purposes only and does not take into account your own investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information; investors should do their own research and may seek professional advice before investing.
