Memory Chip Stocks Defy Market Downturn with Unexpected Rally

Deep News
Mar 28

A research paper from Google describing a new algorithm had dealt a heavy blow to memory chip stocks on Thursday, but the damage proved short-lived. On Friday, amid a broad sell-off across major US stock indices, US memory chip stocks rallied against the market trend. During the session, SanDisk surged over 5% at one point, and Micron Technology rose more than 3%. By the close, SanDisk was up 2.10%, Micron Technology gained 0.50%, Seagate Technology advanced 0.34%, and Western Digital rose 0.73%. This followed a steep sell-off in the same stocks just a day earlier: at Thursday's close, SanDisk had plummeted over 11%, Seagate Technology had fallen more than 8%, Western Digital had dropped over 7%, and Micron Technology had declined nearly 7%.

Some analysts suggest that Thursday's sharp decline in memory chip stocks may have been due to a market misinterpretation. The ultra-efficient AI memory compression algorithm, TurboQuant, mentioned in the Google paper, only affects the Key-Value cache during the inference phase. It does not impact the High Bandwidth Memory consumed by model weights and is unrelated to AI training tasks.

Other analysts stated that advanced compression techniques merely alleviate bottlenecks and do not destroy demand for DRAM and flash memory. Investors might have used the Google news as an opportunity to take profits, but consumption demand for memory remains very strong. The short-term pullback in memory stocks represents a potential entry opportunity rather than a turning point for share prices.

Another unsettling AI-related narrative hit memory chip stocks this week: Google publicized research on a new algorithm that can significantly reduce memory usage, triggering substantial losses across the sector.

On Thursday, SanDisk plunged over 11%, Micron Technology fell nearly 7%, SK Hynix dropped more than 6%, Samsung Electronics declined almost 5%, and Kioxia was down nearly 6%. Estimates indicate that the combined market capitalization of the world's leading memory giants evaporated by over $90 billion in a single day on Thursday. On Friday, however, memory chip concept stocks in the US market staged a rebound, with SanDisk gaining over 2% and Micron Technology edging up 0.50%.

In the preceding months, memory chip companies had shown strong performance. A surge in investment in AI infrastructure led to supply shortages, triggering a spike in chip prices and profit growth. As of Wednesday this week, shares of SK Hynix and Samsung Electronics had soared over 50% year-to-date, while Kioxia's stock price had more than doubled.

The catalyst for the sell-off was a research paper titled "TurboQuant" from Google Research, scheduled for formal presentation at the International Conference on Learning Representations. The Google team claimed that, through two techniques, PolarQuant and QJL, they achieved "zero-loss" compression of the KV cache to 3-bit precision, reducing memory usage by at least sixfold. The algorithm also reportedly delivered up to an eightfold performance improvement on H100 GPU accelerators compared with an unquantized key-value cache.
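As a rough illustration of the arithmetic behind such claims (the model dimensions below are hypothetical assumptions, not figures from the paper), per-token KV-cache memory scales with the number of layers, KV heads, head dimension, and bit width, so cutting the bit width from 16 to 3 shrinks the cache by roughly 16/3 ≈ 5.3x before any additional techniques:

```python
# Back-of-envelope KV-cache sizing. All model dimensions are
# hypothetical assumptions for illustration, not from the paper.
# Per-token KV memory = 2 (K and V) * layers * kv_heads * head_dim * bits/8.
def kv_cache_bytes(seq_len, layers=32, kv_heads=8, head_dim=128, bits=16):
    return 2 * layers * kv_heads * head_dim * seq_len * bits / 8

fp16 = kv_cache_bytes(128_000, bits=16)  # 16-bit baseline cache
q3 = kv_cache_bytes(128_000, bits=3)     # 3-bit quantized cache
print(f"fp16 cache:  {fp16 / 2**30:.1f} GiB")
print(f"3-bit cache: {q3 / 2**30:.1f} GiB")
print(f"reduction:   {fp16 / q3:.1f}x")  # 16/3 ≈ 5.3x from bit width alone
```

The "at least sixfold" figure in the paper presumably reflects further savings beyond raw bit width; the sketch only shows why quantization bites so hard on long-context inference, where the KV cache can rival the weights in size.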

Google promoted this research on platform X this week, although the research was initially released last year. Investors likely worried this could reduce demand for memory from hyperscale data center operators, thereby depressing prices for components also used in smartphones and consumer electronics.

Institutions suggest the market may have misinterpreted the news. Morgan Stanley, in a recent report, indicated a potential market misreading. The technology only affects the key-value cache during inference and does not impact the HBM used by model weights; it is also unrelated to AI training tasks. The analysts emphasized that the so-called "6x compression" does not equate to a reduction in total storage demand but rather increases throughput per GPU through efficiency gains.
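The throughput point can be made concrete with a toy calculation (all numbers below are assumptions for illustration): with a fixed HBM budget, model weights claim a constant share, and the remainder is divided among concurrent sequences' KV caches, so a 6x smaller cache lets one GPU serve roughly 6x as many sequences rather than needing less memory:

```python
# Illustrative only: why KV-cache compression raises per-GPU throughput
# instead of shrinking the HBM bill. All figures are assumptions.
HBM_GB = 80             # total accelerator memory (e.g. one H100-class GPU)
WEIGHTS_GB = 40         # model weights, unaffected by KV-cache compression
CACHE_PER_SEQ_GB = 2.0  # assumed fp16 KV cache for one long sequence

def max_concurrent_sequences(compression=1.0):
    free = HBM_GB - WEIGHTS_GB             # memory left for KV caches
    return int(free // (CACHE_PER_SEQ_GB / compression))

print(max_concurrent_sequences())     # baseline: 20 sequences per GPU
print(max_concurrent_sequences(6.0))  # 6x compression: 120 sequences per GPU
```

The same HBM is still purchased; it is simply serving more queries, which is the efficiency-not-demand-destruction argument the analysts make.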

Morgan Stanley analyst Shawn Kim noted that the impact of Google's research on the industry should be viewed more positively, as it addresses a key bottleneck. The technology improves the efficiency of the so-called key-value cache used for inference. He wrote, "If models can run with significantly lower memory requirements without performance loss, the service cost per query drops substantially, making AI deployment more profitable." Kim stated that, considering return on investment opportunities, TurboQuant is beneficial for hyperscalers. In the long term, it could also be favorable for memory manufacturers, as "lower per-token costs can drive higher product adoption demand."

Morgan Stanley cited the "Jevons paradox" from economics to explain the long-term impact: while technological efficiency gains reduce unit costs, they often lead to an overall expansion in demand as barriers to usage decrease.
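The Jevons-paradox argument reduces to simple arithmetic (the growth figures below are illustrative assumptions, not forecasts): if memory per query falls 6x but cheaper queries expand usage by more than 6x, total memory demand rises on net:

```python
# Jevons-paradox arithmetic with illustrative figures (not forecasts).
mem_per_query = 1.0              # arbitrary baseline memory units per query
queries = 100                    # baseline query volume
baseline_demand = mem_per_query * queries

compressed = mem_per_query / 6   # 6x efficiency gain per query
queries_after = queries * 10     # assumed 10x usage growth as costs fall
new_demand = compressed * queries_after

# Demand expands on net despite the efficiency gain: 10/6 ≈ 1.67x baseline.
print(new_demand / baseline_demand)
```

Whether demand actually grows faster than efficiency improves is the empirical question the bulls and bears disagree on; the sketch only shows the mechanism.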

KC Rajkumar, an analyst at Lynx Equity Strategies, pointed out that some media reports contained exaggerations. Current inference models already widely use 4-bit quantized data, and Google's claimed "8x performance improvement" is based on a comparison with older 32-bit models. "However, given extremely tight supply, this is unlikely to reduce demand for memory and flash storage over the next 3-5 years," Rajkumar wrote, adding that advanced compression techniques merely reduce bottlenecks and do not destroy demand for DRAM/flash memory.

Wells Fargo analyst Andrew Rocha noted that the existence of compression algorithms has never fundamentally altered the overall scale of hardware procurement. By drastically reducing the service cost per query, such technologies can enable models that were previously only viable on expensive cloud clusters to run locally, effectively lowering the barrier to large-scale AI deployment.

Four leading hyperscale companies, including Amazon and Google, plan to spend approximately $650 billion this year on data center construction, snapping up Nvidia's AI accelerators and related memory chips. Chey Tae-won, Chairman of SK Group, recently stated that the tight supply situation for memory chips is expected to persist until 2030.

From a supply chain perspective, server DRAM demand is forecast to grow 39% in 2026, with HBM demand projected to increase 58% annually. The optimization effects of TurboQuant are likely to be overshadowed by the wave of industry growth.

Jordan Klein, a technology specialist at Mizuho, believes the current pullback in memory stocks looks more like a potential entry opportunity than a turning point for share prices. Klein wrote in a report that after strong gains in 2025 and early 2026, bulls in memory stocks have begun to waver. Although the memory industry is known for its severe cyclical swings, he emphasized that the recent selling fits a familiar pattern.

Mizuho noted that such sell-offs occur every few months and are neither a signal of a market top nor a reason to sell; in fact, buying the dips has proven profitable.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products, and neither should any associated discussions, comments, or posts by the author or other users. It is provided for general information purposes only and does not take into account your investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy or completeness of the information; investors should do their own research and may seek professional advice before investing.
