Samsung Electronics is reportedly in discussions to supply Nvidia with its next-generation HBM4 chips, a deal that could significantly strengthen Samsung’s position in the competitive AI chip landscape.
Samsung Electronics appears to be on the verge of a significant partnership with Nvidia. The South Korean tech giant announced on Friday that it is engaged in “close discussions” to supply its next-generation high-bandwidth memory (HBM) chips, known as HBM4, to Nvidia. This move comes as Samsung strives to catch up with its competitors in the rapidly evolving AI chip market.
High Bandwidth Memory (HBM) chips are a specialized type of high-performance RAM designed to deliver exceptionally fast data transfer rates while consuming less power and occupying less physical space than traditional memory types such as DDR. Unlike standard DRAM chips, which are laid out side by side on a module, HBM dies are stacked vertically in multiple layers and interconnected with through-silicon vias (TSVs). This architecture enables rapid data transfer between layers and to the processor, making HBM an attractive option for high-performance applications.
HBM is widely utilized in graphics cards, AI accelerators, supercomputers, and data centers, where high bandwidth is essential for demanding tasks such as machine learning, 3D rendering, and scientific simulations. For instance, HBM2 and HBM3 can provide hundreds of gigabytes per second of bandwidth per stack, a significant improvement over the tens of gigabytes per second that a single conventional GDDR chip offers.
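To make those figures concrete, the arithmetic behind them is simple: peak bandwidth is interface width multiplied by per-pin data rate. The sketch below uses commonly cited nominal widths and rates rather than the specifications of any particular shipping part.

```python
# Back-of-the-envelope peak-bandwidth arithmetic for common memory types.
# Widths and per-pin rates are commonly cited nominal values; actual parts
# vary by vendor and speed grade.

def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbit_s: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits x per-pin Gbit/s) / 8."""
    return bus_width_bits * pin_rate_gbit_s / 8

memories = {
    # name: (interface width in bits, per-pin data rate in Gbit/s)
    "HBM2 stack": (1024, 2.0),   # ~256 GB/s per stack
    "HBM3 stack": (1024, 6.4),   # ~819 GB/s per stack
    "GDDR6 chip": (32, 16.0),    # ~64 GB/s per chip
}

for name, (width, rate) in memories.items():
    print(f"{name}: {peak_bandwidth_gb_s(width, rate):.1f} GB/s")
```

The takeaway is that HBM’s advantage comes from its very wide 1024-bit per-stack interface, made practical by the TSV stacking described above, rather than from faster individual pins.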
Samsung’s potential partnership with Nvidia comes at a time when local rival SK Hynix, currently Nvidia’s primary HBM supplier, has announced plans to begin shipping its latest HBM4 chips in the fourth quarter of this year, with an expansion of sales anticipated in 2026.
Nvidia’s reliance on HBM is particularly pronounced for its high-end GPUs, which are predominantly used in AI and data-center workloads. Thanks to its extremely wide interface, HBM delivers far more total bandwidth per package, and better bandwidth per watt, than traditional GDDR memory, allowing Nvidia GPUs to feed large AI models efficiently while keeping power consumption in check. However, Nvidia does not manufacture HBM chips in-house; it sources these critical components from suppliers such as SK Hynix and Micron. This dependency gives those suppliers considerable leverage over Nvidia’s operations, although the company is actively working to regain some control by planning to influence the logic-die design of HBM starting around 2027.
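Why bandwidth matters so much for large models can be seen with a rough estimate: when generating text one token at a time, a GPU must stream essentially all of a model’s weights from memory for each token, so memory bandwidth caps token throughput. The model size and bandwidth figures below are hypothetical round numbers chosen for illustration, not measurements of any specific GPU.

```python
# Back-of-the-envelope ceiling on decode speed for a memory-bound model.
# Hypothetical round numbers; ignores KV-cache traffic, compute time,
# batching, and any overlap of memory access with computation.

params = 70e9            # assumed 70B-parameter model
bytes_per_param = 2      # FP16 weights
bandwidth = 3.0e12       # assumed ~3 TB/s aggregate HBM bandwidth

weight_bytes = params * bytes_per_param       # ~140 GB of weights
seconds_per_token = weight_bytes / bandwidth  # time to stream them once
print(f"~{seconds_per_token * 1e3:.0f} ms per token, "
      f"~{1 / seconds_per_token:.0f} tokens/s upper bound")
```

Under these assumptions the ceiling is roughly 21 tokens per second; doubling memory bandwidth roughly doubles it, which is why each HBM generation translates so directly into AI performance and why Nvidia treats its HBM supply as strategic.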
While Samsung has not disclosed a specific timeline for shipping its new HBM4 chips, it plans to market them next year. To mitigate potential supply risks, Nvidia has urged its suppliers to expedite the delivery of next-generation HBM4 chips, underscoring the urgency of securing high-bandwidth memory for AI advancements. As of 2025, HBM4 is in the sampling or early production stages, with mass production anticipated later in the year. Although HBM significantly enhances performance, its production is both costly and complex. Some industry analysts speculate that Nvidia may consider hybrid memory solutions that combine HBM with more affordable memory types like GDDR7, although this has yet to be officially confirmed.
Jeff Kim, head of research at KB Securities, noted that while HBM4 may require further testing, Samsung is generally viewed as being in a favorable position due to its production capabilities. “If Samsung supplies HBM4 chips to Nvidia, it could secure a significant market share that it was unable to achieve with previous HBM series products,” Kim stated.
The ongoing developments surrounding HBM4 supply for Nvidia highlight the increasing strategic importance of high-bandwidth memory in the AI and data-center GPU markets. As Nvidia continues to rely heavily on HBM for efficiently processing large AI models, securing a stable supply of next-generation memory is critical for maintaining its competitive edge. While SK Hynix remains a key supplier, a potential partnership with Samsung could introduce greater supply diversity, mitigate risks, and intensify competition among memory vendors.
In summary, while HBM offers substantial performance advantages, its production complexities and costs make supply management a vital aspect of Nvidia’s strategy. The involvement of multiple suppliers may also impact pricing, delivery schedules, and the broader AI chip ecosystem. Ultimately, the push for HBM4 underscores the pivotal role that high-performance memory plays in advancing AI hardware, shaping market dynamics, and determining which companies can sustain leadership in this fast-evolving sector.

