Inside Korea's HBM dominance: why SK Hynix and Samsung define the AI memory era
A three-year capex cycle, a tightly held patent lattice, and an 18-month lead on 12-Hi packaging have left the rest of the world racing to catch Korea's two memory giants.

When NVIDIA's H200 began shipping in volume in 2024, the hidden story was not the GPU die. It was the stack of HBM3e memory bonded beside it — and the fact that more than 90% of that memory, by value, was produced by two companies headquartered within forty kilometres of each other in the Korean province of Gyeonggi.
Nathan Research Group has tracked the high-bandwidth memory supply chain since 2019. What began as a niche product for networking silicon has become the single highest-margin category in DRAM. The question is no longer whether SK Hynix and Samsung will dominate HBM — it is how long that dominance can hold.
Korean share of HBM revenue, Q4 2025: 91% (SK Hynix 52% · Samsung 39% · Micron 9%)
Why the lead widened, not narrowed
In 2021 Nathan's semiconductor practice briefed a sovereign-wealth client that HBM would become a duopoly by 2024. The view was contrarian at the time. Micron had just announced HBM3 ambitions; Chinese fabs were reported to be six quarters behind. The thesis rested on three proprietary observations.
- SK Hynix had front-loaded a $14B capex cycle into TSV and MR-MUF packaging capacity — tooling that cannot be rerouted from commodity DRAM lines.
- Samsung's 1a-node DRAM yield curve, tracked weekly via fab-exit shipments, was climbing faster than public filings implied.
- Korea's domestic glass-substrate and molding-compound suppliers — LX Semicon, Soulbrain, Hana Materials — had locked multi-year allocations with both giants, starving Micron of a comparable ecosystem.
“HBM is not a chip, it is a packaging problem. The people who solved packaging first bought themselves a decade.”
— Former VP, SK Hynix DRAM R&D, interviewed March 2026
The 2027 window
The lead is not permanent. Micron's Taichung expansion, once stabilised, could close the gap on HBM3e by mid-2027. Chinese memory — CXMT in particular — is reportedly sampling a four-layer HBM2 equivalent, two full generations behind but moving. What matters for Korean incumbents is whether HBM4, scheduled for 2026 production, can ship with 16-Hi stacks at the yields both firms have privately committed to hyperscaler customers.
Our base case has SK Hynix and Samsung retaining >80% revenue share through 2028. The bear case — a 12-month HBM4 yield stumble coinciding with a Micron packaging breakthrough — sits at roughly 18% probability in our latest scenario model. Either way, the AI memory era, for now, is a Korean story.
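The bear-case figure can be read as the joint probability of two roughly independent events: an HBM4 yield stumble and a Micron packaging breakthrough landing in the same window. A minimal sketch of that arithmetic, with purely illustrative component probabilities — the source states only the roughly-18% result, not the inputs behind it:

```python
# Illustrative only: how a ~18% bear case can arise from two events
# treated as independent. Both component probabilities below are
# hypothetical assumptions, not inputs from the actual scenario model.
p_hbm4_yield_stumble = 0.30    # assumed: 12-month HBM4 yield stumble
p_micron_breakthrough = 0.60   # assumed: Micron packaging breakthrough in that window

p_bear_case = p_hbm4_yield_stumble * p_micron_breakthrough
print(f"Bear-case probability: {p_bear_case:.0%}")  # → 18%
```

Independence is itself an assumption here; if a Korean yield stumble made a competitor breakthrough more likely (say, by freeing up hyperscaler qualification slots), the joint probability would exceed the simple product.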