Hong Kong-listed artificial intelligence concept stocks surged collectively on February 12, as a burst of new releases from domestic large language model companies over the Spring Festival period lifted the entire sector.
Knowledge Atlas soared over 40% after the company announced a price increase for its AI programming subscription packages and officially launched its new flagship model, GLM-5, the previous day. Shares of large-model maker MiniMax WP surged more than 21%; on the same day, MiniMax released its latest programming model, M2.5, billing it as the world's first production-grade model natively designed for Agent scenarios. Both companies have positioned programming and intelligent agent capabilities as the core of their upgrades.
The chip sector moved higher in tandem. Daysky Intelligence, known as one of the "Four Little Dragons of Domestic GPUs," saw its gains widen to 25% in the afternoon session, Biren Technology rose nearly 10%, and Gigadevice gained more than 8%. Rising demand for AI computing power is fueling expectations that related hardware manufacturers will benefit.
Behind this market rally is a concentrated wave of new product launches by domestic large model makers during the Spring Festival window. Following DeepSeek's recent model release, products such as Alibaba's Qwen 3.5 and ByteDance's SeeDance 2.0 have successively debuted, indicating that industry competition is intensifying.
According to a previous report, the GLM-5 model launched by Knowledge Atlas on February 11 expanded its parameter scale from 355B in the previous generation to 744B, while activated parameters increased from 32B to 40B. The pre-training data volume grew from 23T to 28.5T. Knowledge Atlas confirmed that the mysterious model "Pony Alpha," which recently topped the popularity chart on the global model service platform OpenRouter, is indeed GLM-5.
This model introduces the DeepSeek sparse attention mechanism for the first time, reducing deployment costs and improving token utilization efficiency while maintaining performance in long-text processing. Architecturally, GLM-5 is built with 78 hidden layers and integrates 256 expert modules, activating 8 at a time, for approximately 44B activated parameters and a sparsity of 5.9%. It supports a context window of up to 202K tokens. Internal evaluations indicate that GLM-5's average performance in programming development scenarios, such as front-end, back-end, and long-range tasks, has improved by over 20% compared to the previous generation, with real-world programming experience approaching the level of Claude Opus 4.5.
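(As a rough check, and assuming the reported sparsity refers to the share of parameters active per token, the figure is consistent with the numbers above: roughly 44B activated parameters out of 744B total works out to 44 / 744 ≈ 5.9%.)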
An AI practitioner in Shanghai observed that while domestic large models previously competed mainly on lower prices, Knowledge Atlas's significant price increase this time signals a clear improvement in the technical capabilities and market competitiveness of domestic models.
The MiniMax M2.5 is positioned as the world's first production-grade model natively designed for Agent scenarios, with its programming and intelligent agent performance benchmarked directly against Claude Opus 4.6. The model supports full-stack programming development for PC, App, and cross-platform applications, demonstrating industry-leading capabilities particularly in core Office productivity scenarios such as advanced Excel processing, in-depth research, and PowerPoint.
The M2.5 model has only 10B activated parameters, giving it significant advantages in memory usage and inference efficiency. It supports an ultra-high throughput of 100 TPS, with inference speed far exceeding that of top international models. The release marks another rapid iteration for MiniMax, coming just over a month after the previous version, 2.2.
The 2026 Spring Festival period is no longer just a consumer spending frenzy; it has evolved into a race among China's AI giants for the "mobile entry point."
A J.P. Morgan research report dated February 11 pointed out that China's internet and AI industry is going through the most intensive flagship-model release cycle in its history. This is no longer a solo performance by a single model, but a game of musical chairs over who can most quickly turn "technology spillover" into "consumer-grade hits."
ByteDance was the first to play its hand, offering a three-model "bundle": SeeDance 2.0 (video), SeeDream 5.0 (image), and Doubao 2.0. Among these, SeeDance 2.0 has already shown signs of being a potential hit.
Alibaba is also not holding back, with reports indicating preparations to launch Qwen 3.5 in mid-February, accompanied by a 3 billion yuan incentive plan to drive user acquisition.
DeepSeek's V4 version is reportedly targeting a mid-February release, with a focus on improving coding and ultra-long prompt processing. Reports on February 11 already indicated that DeepSeek had updated its model, supporting a context length of up to 1 million tokens.