SEOUL, January 26 (AJP) - Samsung Electronics has surged ahead in the high-bandwidth memory (HBM) race as it readies to roll out sixth-generation HBM4 next month, bound for U.S.-based fabless chip leaders Nvidia and AMD.
The Korean tech giant has completed final qualification processes with major AI accelerator customers, clearing the transition from sampling to mass production ahead of next-generation chip launches expected later this year.
Industry sources said all major memory makers, including SK hynix and Micron, recently resubmitted HBM4 samples in response to Nvidia’s tightened specifications for its upcoming Rubin platform. Samsung is understood to be the first supplier approved for an HBM4 prototype fabricated using its 1c-nanometer DRAM process.
An official familiar with the matter said the process has moved beyond initial sampling, noting that samples had already been shipped to customers toward the end of last year.
“Because this involves customer-specific products, it is difficult to confirm detailed internal processes,” the official said. “However, it is accurate to view the current phase as the step following sample shipments.”
The official added that mass supply timing is closely linked to customers’ product readiness and launch schedules, but said it would be reasonable to view shipments as occurring within the first half of the year.
According to TrendForce, HBM4 is expected to cost at least 30 percent more than current HBM3E products due to the complexity of its new architecture and packaging requirements.
Despite the higher price, demand is accelerating. The global HBM market is estimated at around $35 billion in 2025 and is projected to expand to between $52 billion and $61 billion in 2026. Some industry forecasts point to annual sales exceeding $85 billion by 2027 as AI workloads scale rapidly across data centers.
The expansion is being driven by surging memory demand from advanced AI accelerators. Nvidia’s Rubin platform is expected to integrate eight stacks of HBM4, delivering total memory capacity of about 288 gigabytes — more than triple that of earlier-generation products. AMD’s next-generation Instinct MI450 accelerator is also expected to adopt HBM4, with memory capacity projected to reach up to 432 gigabytes.
Analysts expect the trend to accelerate further, with higher-end Rubin variants likely to adopt 16-high stacking, pushing total memory capacity per processor toward the one-terabyte range.
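The capacity figures follow from multiplying the number of stacks per package by the capacity of each stack. The sketch below walks through that arithmetic using assumed per-stack capacities (roughly 36 gigabytes for a 12-high HBM4 stack and 64 gigabytes for a denser 16-high stack); these per-stack figures are illustrative assumptions, not numbers from this report.

```python
# Illustrative HBM capacity arithmetic; per-stack capacities are assumptions,
# not figures confirmed in this report.
def total_capacity_gb(stacks: int, gb_per_stack: int) -> int:
    """Total package memory = number of HBM stacks x capacity per stack."""
    return stacks * gb_per_stack

# Eight stacks at an assumed ~36 GB each (12-high) matches the ~288 GB
# cited for Nvidia's Rubin platform.
print(total_capacity_gb(stacks=8, gb_per_stack=36))    # 288
# Taller 16-high stacks with denser dies (assumed ~64 GB per stack), combined
# with more stacks per package, are what push totals toward one terabyte.
print(total_capacity_gb(stacks=16, gb_per_stack=64))   # 1024
```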
The technological leap from HBM3E to HBM4 represents more than incremental performance gains. HBM4 doubles the input-output interface from 1,024 bits to 2,048 bits, significantly widening the data pathway between processors and memory and addressing what the industry describes as the “memory wall.”
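To give a rough sense of what the wider interface means, per-stack bandwidth scales with interface width multiplied by per-pin data rate. The sketch below uses illustrative per-pin rates, which are assumptions rather than figures from this report.

```python
# Rough per-stack bandwidth: interface width (bits) x per-pin rate (Gb/s) / 8 bits per byte.
# The per-pin data rates below are illustrative assumptions, not figures from this report.
def stack_bandwidth_gbs(width_bits: int, pin_rate_gbps: float) -> float:
    """Approximate peak bandwidth per HBM stack, in gigabytes per second."""
    return width_bits * pin_rate_gbps / 8

hbm3e = stack_bandwidth_gbs(1024, 9.6)  # ~1,229 GB/s per stack
hbm4 = stack_bandwidth_gbs(2048, 8.0)   # ~2,048 GB/s per stack
print(f"HBM3E ~{hbm3e:.0f} GB/s vs HBM4 ~{hbm4:.0f} GB/s per stack")
```

Even at a lower assumed per-pin rate, the doubled interface roughly doubles per-stack throughput, which is why the wider bus is viewed as a direct answer to the memory wall.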
While GPU compute performance has continued to scale rapidly, memory bandwidth constraints have increasingly reduced effective utilization rates, particularly during large-scale AI training and inference tasks.
Kim Deok-gi, a professor of electronic engineering at Sejong University, said this bottleneck has become more pronounced as AI models require the handling of increasingly large and diverse data sets.
“Even though GPU and CPU performance has improved significantly, the time required to move data back and forth has emerged as a limiting factor,” Kim said. “High-bandwidth memory effectively widens the data highway, allowing large volumes of information to be transferred simultaneously.”
As AI applications expand toward agent-based systems and physical AI such as autonomous driving, the role of memory has become more central, he added.
“In AI systems, computation remains important, but data must first be stored, retrieved and delivered at high speed,” Kim said. “That is why HBM has become a critical component in modern AI servers.”
HBM4 also introduces architectural changes at the base-die level. Whereas earlier generations used a base die produced largely with memory processes, the logic die at the bottom of HBM4 stacks is increasingly manufactured on advanced foundry nodes, enabling improved power management, error correction and internal data handling.
Industry observers say the shift reinforces a broader transition toward memory-centric AI systems, in which data movement between GPUs and high-bandwidth memory plays a more decisive role than traditional server CPUs.
Samsung’s early HBM4 rollout is underpinned by its vertically integrated manufacturing structure as an integrated device manufacturer (IDM). The company controls DRAM production, logic-die manufacturing through its foundry operations, and advanced packaging technologies within a single supply chain.
Such integration allows tighter performance optimization and shorter development cycles, particularly as AI chipmakers increasingly request customized memory specifications aligned with their accelerator roadmaps.
Competition in the HBM market is expected to intensify as rivals prepare their own next-generation products.
SK hynix said it is already operating HBM4 production in line with customer schedules.
“With regard to HBM4, you may consider that we are already in mass production,” the company said, adding that specific shipment timing ultimately depends on customer roadmaps.
Industry officials note that early commercial availability is becoming a critical differentiator as AI accelerator launch schedules grow more tightly synchronized with memory supply.
The HBM4 timeline is expected to be closely watched during earnings conference calls by Samsung Electronics and SK hynix, scheduled an hour apart Thursday following their fourth-quarter and full-year 2025 results.
Copyright ⓒ Aju Press All rights reserved.