AI-Driven Storage Evolution: Samsung's PIM Technology Nears Mass Production, Enabling Direct Computation Without CPU/GPU

Stock News
02/17

Artificial intelligence is reshaping supply and demand in the storage market with unprecedented force, and spawning a wave of new technologies in the process. Following cutting-edge innovations such as HBF and H³, a new direction is emerging in the storage sector. According to media reports, Samsung Electronics plans to apply Processing-in-Memory (PIM) technology to LPDDR5X memory. Samsung is currently developing LPDDR5X PIM with key customers, with samples expected in the second half of this year; the two sides are also discussing specifications for applying PIM to the next-generation LPDDR6 standard.

PIM integrates compute units (ALUs) directly at the memory-bank level. Conventional designs must move data to the CPU or GPU for processing, whereas PIM performs computation inside the memory itself, potentially overcoming the "memory wall." In a keynote at "SEMICON Korea 2026" held in South Korea, Sohn Kyung-min, head of Samsung's DRAM design team, stressed the need for PIM: "Currently, AI cannot fully utilize GPU performance due to insufficient memory bandwidth." In his view, PIM can significantly increase bandwidth while also greatly improving energy efficiency.
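The claim that GPUs cannot be fully utilized for lack of bandwidth can be illustrated with a simple roofline estimate: achievable throughput is capped by the lower of peak compute and memory bandwidth times arithmetic intensity. The sketch below uses hypothetical round numbers, not Samsung's figures:

```python
def attainable_tflops(peak_tflops: float, bandwidth_tb_s: float,
                      flops_per_byte: float) -> float:
    """Roofline model: throughput is limited by the lower of peak compute
    and memory bandwidth times arithmetic intensity (FLOPs per byte)."""
    return min(peak_tflops, bandwidth_tb_s * flops_per_byte)

# FP16 matrix-vector multiply (the per-token decode step of an LLM):
# each 2-byte weight is read once and used for 2 FLOPs (multiply + add),
# so arithmetic intensity is roughly 1 FLOP/byte.
gemv_intensity = 1.0

# Hypothetical accelerator: 300 TFLOPS peak, 3 TB/s memory bandwidth.
achieved = attainable_tflops(300.0, 3.0, gemv_intensity)
print(f"attainable: {achieved} TFLOPS "
      f"({100 * achieved / 300.0:.0f}% of peak)")  # attainable: 3.0 TFLOPS (1% of peak)
```

Under these assumed numbers a bandwidth-bound kernel reaches only about 1% of the chip's peak compute, which is the gap that in-memory computation aims to close.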

Samsung Electronics has already completed proof-of-concept (PoC) validation for HBM-PIM and related products and is now moving into commercialization, preparing for mass production. The core product line for the technology is the LPDDR series, which is optimized for smartphones and endpoint AI devices. Beyond Samsung, SK Hynix is also making strategic moves in PIM: at "CES 2026" in the U.S. this year, it showcased several innovative products and technologies, including AiMX, which is built on a PIM architecture.

Shanghai Securities pointed out that as AI deployment accelerates and information traffic grows, storage chips have evolved from standard components into core value products for the AI industry; going forward, vendors will build competitive advantages in AI storage through technological breakthroughs and ecosystem collaboration. Zhongyou Securities noted that memory-compute integration is a new computing architecture whose core is the full fusion of storage and computation, adding compute capability to memory so that two- and three-dimensional matrix calculations can be performed efficiently under a new operational architecture. Combined with advanced packaging and the novel memory devices of the post-Moore era, it can effectively break through the bottlenecks of the von Neumann architecture and deliver order-of-magnitude gains in computational energy efficiency.

PIM embeds compute units within memory chips, giving the memory itself a degree of computational capability. This makes it well suited to data-intensive workloads and can significantly improve both data-processing efficiency and energy efficiency. CITIC Securities stated that an analysis of current memory architectures for AI computing shows DRAM performance, specifically bandwidth and capacity, to be the primary bottleneck: training larger models requires more memory capacity, while serving more concurrent inference users requires more bandwidth (training is capacity-bound, inference bandwidth-bound), making upgrades urgent. The storage upgrades demanded by the AI era make memory-compute integration an inevitable long-term trend, with near-memory computing (PNM) serving as an effective pathway today.
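The capacity-versus-bandwidth split described above can be made concrete with a back-of-envelope calculation. The model size, bytes-per-parameter rule of thumb, and token rate below are illustrative assumptions, not figures from the article:

```python
# Why training stresses capacity while inference (decode) stresses bandwidth.
PARAMS = 70e9      # assumed 70B-parameter model
BYTES_FP16 = 2     # bytes per FP16 weight

# Training: Adam-style training keeps weights, gradients, and optimizer
# state resident; ~16 bytes per parameter is a common rule of thumb.
train_capacity_gb = PARAMS * 16 / 1e9
print(f"training footprint ~ {train_capacity_gb:.0f} GB")  # ~ 1120 GB

# Inference: each generated token must stream every weight through the
# compute units once, so required bandwidth scales with model size
# times tokens per second (a single unbatched request here).
tokens_per_s = 50
infer_bw_tb_s = PARAMS * BYTES_FP16 * tokens_per_s / 1e12
print(f"decode bandwidth ~ {infer_bw_tb_s:.0f} TB/s")  # ~ 7 TB/s
```

Even a single 50-token/s request would, under these assumptions, need roughly 7 TB/s of weight traffic, which is why inference serving leans so heavily on batching and on bandwidth-oriented memory technologies such as PIM.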

Disclaimer: Investing involves risk, and this article is not investment advice. Nothing above should be construed as an offer, recommendation, or solicitation to buy or sell any financial product, nor should any related discussions, comments, or posts by the author or other users. This article is for general reference only and does not take into account your personal investment objectives, financial situation, or needs. TTM assumes no responsibility for, and makes no guarantee of, the accuracy or completeness of the information; investors should conduct their own research and seek professional advice before investing.
