🟦HBM4 to Arrive in 2026: Innovating AI Processing with Double I/O

Corporate Analysis

The next-generation memory “HBM4” will be introduced to the market in 2026, and it is expected to significantly improve the performance of AI chips and at the same time further intensify competition among major memory manufacturers.

Image source: https://www.trendforce.com/presscenter/news/20250522-12589.html

🟦 HBM4 to Arrive in 2026: Innovating AI Processing with Twice the I/O, SK Hynix Ahead of the Competition

According to the latest comparative data released by TrendForce, HBM4 will arrive in 2026 and deliver significantly better performance than the previous generation. The key specifications are as follows:

  • Core die density: 24Gb (1.5x that of HBM3)
  • Stack height: 12 or 16 layers
  • Pin speed: 8~10Gbps
  • I/O count: 2,048 (double the 1,024 of HBM3e)

With this evolution, HBM4 delivers twice the bandwidth at the same per-pin data rate, dramatically boosting the performance of AI GPUs and accelerators. NVIDIA’s next-generation “Rubin” and AMD’s “MI400” series are both slated to use HBM4, and the new memory is expected to be adopted across major AI chips by 2026.
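As a quick sanity check on that claim, the sketch below works out per-stack bandwidth as I/O count times per-pin data rate. It is purely illustrative: the 1,024-I/O baseline reflects the HBM3/HBM3e interface width, and fixing the pin speed at 8Gbps for both generations is an assumption chosen to isolate the effect of doubling the I/O count.

```python
# Back-of-the-envelope per-stack bandwidth: I/O count x per-pin rate (Gbit/s) / 8 bits per byte.
def stack_bandwidth_gb_per_s(io_count: int, pin_rate_gbps: float) -> float:
    return io_count * pin_rate_gbps / 8

hbm3e_class = stack_bandwidth_gb_per_s(io_count=1024, pin_rate_gbps=8.0)  # ~1,024 GB/s (~1 TB/s)
hbm4 = stack_bandwidth_gb_per_s(io_count=2048, pin_rate_gbps=8.0)         # ~2,048 GB/s (~2 TB/s)

print(f"HBM3e-class stack: {hbm3e_class:.0f} GB/s")
print(f"HBM4 stack:        {hbm4:.0f} GB/s ({hbm4 / hbm3e_class:.1f}x at the same pin speed)")
```

At the upper end of the listed range (10Gbps across 2,048 I/Os), the same arithmetic gives roughly 2.5 TB/s per stack.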

🟦 Why HBM4 is Required: The Close Relationship Between AI and Memory

The rapid growth of generative AI is demanding unprecedented amounts of data processing from AI servers. In that environment, conventional DRAM and HBM3 increasingly fall short on bandwidth. Especially for LLMs (large language models), how quickly and reliably the GPU can access memory is the key to performance.
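To make that bandwidth dependence concrete, the sketch below estimates an upper bound on token-by-token decode throughput in the memory-bound regime, where each generated token requires streaming roughly all model weights from HBM once. Every number in it (a hypothetical 70B-parameter model in FP16, 1 TB/s versus 2 TB/s of aggregate HBM bandwidth) is an illustrative assumption, not a figure from the article.

```python
# Rough upper bound on LLM decode speed when weight reads dominate (memory-bound decoding).
def decode_tokens_per_second(params_billions: float,
                             bytes_per_param: float,
                             hbm_bandwidth_tb_per_s: float) -> float:
    weight_bytes = params_billions * 1e9 * bytes_per_param   # bytes streamed per generated token
    bandwidth_bytes = hbm_bandwidth_tb_per_s * 1e12          # bytes the HBM can deliver per second
    return bandwidth_bytes / weight_bytes

# Hypothetical 70B-parameter model stored in FP16 (2 bytes per parameter)
print(decode_tokens_per_second(70, 2, 1.0))  # ~7 tokens/s at 1 TB/s
print(decode_tokens_per_second(70, 2, 2.0))  # ~14 tokens/s at 2 TB/s
```

Doubling the available memory bandwidth roughly doubles this bound, which is why HBM4’s wider interface matters so directly for LLM serving.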

In addition to doubling the I/O count, HBM4 adopts a logic-process base die, which optimizes the interface to the SoC and significantly reduces latency and errors. Total HBM demand is expected to exceed 30 billion gigabits by 2026, with HBM4 projected to overtake HBM3e as the flagship product in the second half of that year. SK Hynix, the leading supplier, is expected to maintain a market share of more than 50% in HBM4.

🟦 Summary

HBM4 is becoming the core of next-generation AI infrastructure because it far exceeds conventional technologies in performance, structure, and bandwidth, and because it pairs so well with AI chips. In the second half of 2026, HBM4 is set to become the mainstay of the market, and the competitive battleground has already shifted to the next generation.

For manufacturers that long focused on general-purpose DRAM such as DDR, it would once have been hard to imagine that memory with ultra-high I/O counts and high-density designs like HBM would be required to this extent. Against that backdrop, SK Hynix’s stance of leading HBM development ahead of the industry deserves high marks for both strategic vision and technological capability.
