"Father of HBM" States High-Bandwidth Flash (HBF) Commercialization Exceeds Expectations; Could Be Integrated into GPUs Within 2-3 Years, Market Size Predicted to Surpass HBM

Deep News
11 hours ago

The commercialization of High-Bandwidth Flash (HBF) is accelerating, and this new memory technology, seen as an "HBM version of NAND," is expected to arrive sooner than anticipated. Kim Joungho, a professor at the Korea Advanced Institute of Science and Technology (KAIST) known as the "Father of HBM," recently revealed that Samsung Electronics and SanDisk plan to integrate HBF into products from NVIDIA, AMD, and Google by late 2027 to early 2028.

Kim Joungho pointed out that, thanks to the process and design experience accumulated with HBM, the commercialization timeline for HBF will be significantly shorter than HBM's original development cycle. He predicts HBF will see widespread adoption during the rollout phase of HBM6 and estimates that by around 2038 its market size could exceed that of HBM.

The relentless growth of AI workloads is a key driver propelling HBF development. Compared to traditional DRAM-based HBM, HBF vertically stacks NAND flash memory, offering approximately 10 times the capacity while maintaining high bandwidth, making it particularly suitable for high-capacity scenarios like AI inference. Currently, Samsung Electronics and SK Hynix have signed memorandums of understanding with SanDisk to jointly advance HBF standardization, targeting product market launch by 2027.

HBF Technical Advantages: Balancing Capacity and Bandwidth

HBF employs a vertical stacking architecture similar to HBM, but it stacks NAND flash chips instead of DRAM chips, a crucial difference that brings significant capacity gains. According to industry analysis, HBF bandwidth can exceed 1,638 GB/s, far surpassing the approximately 7 GB/s bandwidth of NVMe PCIe 4.0 SSDs; its capacity is projected to reach 512GB, significantly exceeding the 64GB upper limit of HBM4.
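
For perspective, the cited figures imply the following rough ratios. This is a minimal back-of-the-envelope calculation in Python, using only the estimates quoted above, which are projections rather than confirmed product specifications:

```python
# Rough comparison of the figures quoted above (all values are cited
# estimates from the article, not confirmed product specifications).

hbf_bandwidth_gbs = 1638    # HBF: claimed to exceed 1,638 GB/s
nvme_bandwidth_gbs = 7      # NVMe PCIe 4.0 SSD: roughly 7 GB/s
hbf_capacity_gb = 512       # projected HBF capacity per stack
hbm4_capacity_gb = 64       # cited HBM4 upper limit

print(f"Bandwidth vs NVMe PCIe 4.0 SSD: ~{hbf_bandwidth_gbs / nvme_bandwidth_gbs:.0f}x")
print(f"Capacity vs HBM4:               ~{hbf_capacity_gb / hbm4_capacity_gb:.0f}x")
# -> roughly 234x the SSD bandwidth and 8x the HBM4 capacity
```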

Kim Joungho further elaborated on HBF's role in AI workflows: currently, when GPUs perform AI inference, they need to read variable data from HBM; in the future, this task could be handled by HBF. Although HBM is faster, HBF can provide roughly 10 times the capacity of HBM, making it more suitable for large-capacity data processing scenarios.

Regarding technical limitations, Kim Joungho noted that HBF supports unlimited read operations but has limited write endurance (approximately 100,000 cycles), which necessitates that companies like OpenAI and Google design software architectures optimized for read-centric operations. He offered a vivid analogy:

"If HBM is likened to a home bookshelf, HBF is like going to a library to study—slightly slower in speed, but with a vastly larger repository of knowledge available for access."

Industry Deployment: Memory Giants Accelerate Efforts

SK Hynix is expected to release a trial version of HBF and conduct technical demonstrations later this month. Previously, Samsung Electronics and SK Hynix signed MoUs with SanDisk to jointly establish an alliance aimed at advancing HBF standardization. Currently, both companies are actively developing related products.

According to TrendForce, SanDisk took the lead in releasing an HBF prototype in February 2025 and established a technical advisory committee. In August of the same year, the company signed an MoU with SK Hynix aimed at promoting specification standardization, planning to deliver engineering samples in the second half of 2026 and achieve commercialization in early 2027. Samsung Electronics has already initiated the conceptual design phase for its own HBF product.

The technical implementation of HBF primarily relies on Through-Silicon Via (TSV) technology to vertically stack multiple layers of NAND chips, utilizing advanced 3D stacking architectures and chip-to-wafer bonding processes. Each package can stack up to 16 NAND chips, supports multi-array parallel access, and achieves bandwidth ranging from 1.6 TB/s to 3.2 TB/s, on par with HBM3 performance. The maximum capacity per stack is 512GB; with an 8-stack configuration, the total capacity can reach 4TB, equivalent to 8 to 16 times the capacity of HBM.
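
Taking the cited stacking figures at face value, the capacity math composes as follows. This is a minimal sketch: the 8-stack configuration is the article's example, and the implied HBM pool sizes are derived from the stated 8x-16x ratio rather than quoted directly.

```python
# Capacity math for the HBF stacking figures cited above
# (up to 16 NAND dies per package, up to 512 GB per stack).

capacity_per_stack_gb = 512   # maximum cited capacity per stack
stacks_per_system = 8         # example 8-stack configuration from the article

total_capacity_gb = capacity_per_stack_gb * stacks_per_system  # 4096 GB
print(f"Total capacity: {total_capacity_gb} GB = {total_capacity_gb / 1024:.0f} TB")

# "8 to 16 times the capacity of HBM" implies a comparison HBM pool of 256-512 GB.
for ratio in (8, 16):
    print(f"{ratio}x ratio implies an HBM pool of {total_capacity_gb // ratio} GB")
```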

Future Architecture: From HBM6 to the "Memory Factory"

Kim Joungho predicts that HBF will achieve widespread application during the rollout phase of HBM6. He indicated that in the HBM6 era, systems will no longer rely on a single stack but will form interconnected "memory clusters," analogous to the construction logic of a modern residential complex. DRAM-based HBM faces significant capacity limitations, while NAND-stacked HBF will effectively fill this gap.

Regarding the evolution of system architecture, Kim Joungho proposed a more streamlined data pathway. Today, a GPU fetching data must traverse a complex transmission chain involving storage networks, data processors, and the GPU pipeline. In the future, data could be processed directly beside the compute unit, in the tier immediately behind HBM. This architecture, referred to as the "Memory Factory," is expected to emerge during the HBM7 phase and to significantly improve data-processing efficiency.

HBF will be co-located with HBM in the future, deployed around AI accelerators like GPUs. Kim Joungho stated, "I believe that within 2 to 3 years, the term HBF will become a household name." He further noted that thereafter, HBF will enter a period of rapid development and gradually assume the core role of back-end data storage.

Looking at the long-term market, Kim Joungho forecasts that by around 2038 the market size of HBF will surpass that of HBM. This judgment is based on the sustained demand for high-capacity storage in AI inference scenarios and NAND flash's inherent density advantage over DRAM. However, constrained by NAND's physical characteristics, HBF has higher latency than DRAM, making it better suited to read-intensive AI inference tasks than to extremely latency-sensitive applications.

