- Samsung’s chip division posted a 49-fold profit jump in Q1 2026—its best quarter in years.
- Every AI company wants more HBM than exists, and the supply gap is getting worse.
- Samsung, long an also-ran in advanced memory behind SK Hynix, now finds itself holding all the cards.
Samsung Electronics on Thursday reported operating profit of 8.3 trillion won (roughly $6.4 billion) for Q1 2026, an 8.3-fold increase from the same period last year. The number that matters most, though, is the 49-fold jump in chip income specifically, a record for the Korean giant's semiconductor division. That's not a typo. The AI boom has turned Samsung's memory business into a cash machine at precisely the moment the company needed it most.
The driver is HBM, or High Bandwidth Memory: the specialized stacked memory packaged inside every AI accelerator, from Nvidia's H100 to AMD's MI300X. HBM is the "picks and shovels" of the AI gold rush, and right now the picks-and-shovels supply is nowhere close to meeting demand. Samsung reported it has sold out its entire HBM allocation for the foreseeable future, even as AI hyperscalers place orders 12 to 18 months in advance. The company expects the supply crunch to deepen through 2027.
"The memory market is in a structural undersupply situation that we're not going to solve in the next 18 months," one Samsung executive told CNET, speaking on condition of anonymity because the comment wasn't authorized for public disclosure. That aligns with research firm Omdia's move last week to revise its 2026 semiconductor forecast sharply upward; the firm now projects 62.7% revenue growth for the year, a historic jump driven almost entirely by AI-related DRAM and NAND demand. In other words: the shortage isn't a glitch. It's the new normal.
Why HBM Is the New USB-C of AI Infrastructure
HBM sounds arcane, but the concept is straightforward. Conventional graphics memory, like GDDR, sits in discrete packages spread around the circuit board, a comparatively long trip from the processor. HBM instead stacks DRAM dies vertically and places the stack millimeters from the GPU die on a shared silicon interposer, dramatically shortening the distance data has to travel while widening the path it travels on. For a chip running billions of calculations per second to train an AI model, that matters enormously. Per package, the bandwidth advantage over conventional GDDR memory is roughly 10x, which is why no serious AI training cluster uses anything else.
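A back-of-the-envelope calculation shows where that advantage comes from: peak bandwidth is just interface width times per-pin data rate. The sketch below plugs in published spec figures for one HBM3 stack and one GDDR6X package; real products ship at a range of speed grades, so treat the exact numbers as illustrative rather than definitive.

```python
# Peak memory bandwidth = (interface width in bits) x (per-pin rate in Gb/s) / 8.
# Spec figures are published numbers for HBM3 and GDDR6X; actual products vary
# by speed grade, so this is an illustrative comparison, not a benchmark.

def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of a memory interface, in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

# One HBM3 stack exposes a 1024-bit interface at up to 6.4 Gb/s per pin.
hbm3_stack = peak_bandwidth_gbs(1024, 6.4)   # ~819 GB/s

# One GDDR6X package exposes a 32-bit interface at ~21 Gb/s per pin.
gddr6x_chip = peak_bandwidth_gbs(32, 21.0)   # ~84 GB/s

print(f"HBM3 stack:  {hbm3_stack:6.1f} GB/s")
print(f"GDDR6X chip: {gddr6x_chip:6.1f} GB/s")
print(f"Per-package advantage: ~{hbm3_stack / gddr6x_chip:.0f}x")

# Accelerators then aggregate several stacks. Nvidia's H100 SXM runs five
# HBM3 stacks at a ~5.2 Gb/s pin rate, landing near its quoted 3.35 TB/s.
h100 = peak_bandwidth_gbs(5 * 1024, 5.2)
print(f"H100 SXM (5 stacks): {h100 / 1000:.2f} TB/s")
```

The per-package gap works out to roughly 10x, and stacking several HBM packages on one interposer multiplies it further, which is why GPU designers absorb the packaging cost.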
Three companies make HBM: Samsung, SK Hynix, and Micron. SK Hynix has traditionally led in AI-grade HBM, including the latest HBM4 generation, but Samsung is now the only manufacturer with a full product range across every HBM tier, meaning it can serve cutting-edge AI training chips and the slightly less demanding inference market simultaneously. Elsewhere in the earnings report, Samsung shipped 65.4 million smartphones in Q1, up 8% year over year, but the real story is that its chip business has finally found a category that gives it pricing power it hasn't had in years.
The competitive dynamics are shifting in Samsung's favor in another way, too: TSMC, which fabricates chips for Nvidia and Apple, has been grappling with its own capacity constraints in advanced packaging, particularly CoWoS, the technology that bonds HBM stacks and compute dies onto a shared interposer. Samsung, which runs its own foundry and advanced-packaging lines alongside its memory fabs, can pitch itself as a one-stop alternative. Yahoo Finance reported that Big Tech's combined AI capex topped $650 billion in Q1 2026, with Microsoft, Meta, and Amazon all raising their full-year spending guidance. All that money flows, in part, through Samsung's memory division. When a company commits $145 billion to AI infrastructure, as Meta just announced it would, a meaningful chunk of it ends up paying for DRAM and HBM, and Samsung is one of only three places to get it.
The Turnaround Samsung Needed
Samsung’s semiconductor division has spent three years in the wilderness. While Nvidia’s data center revenue exploded from $3.8 billion to over $80 billion, Samsung’s memory business was stuck fighting commodity price wars in a market where HBM was a rounding error. The company lost share in premium memory to SK Hynix, struggled with yields on its advanced nodes, and watched investors pile into TSMC instead. Thursday’s numbers mark the sharpest reversal in that trend since the AI era began.
The risk is that this looks too good. When a company posts a 49x jump in chip income, every competitor, investor, and customer starts paying attention. SK Hynix announced plans in February to expand its HBM capacity by 40%. Micron is funneling every spare dollar into HBM4 R&D. And the AI hyperscalers (Amazon, Google, Microsoft) are all developing proprietary memory solutions to reduce their dependence on any single supplier. Samsung's window of leverage may be wide open, but it's not going to stay that way forever. Supply constraints that persist for 18 months tend to attract capital that closes them in 12.
Samsung's Q1 earnings also landed just days after Omdia's revised semiconductor sector forecast, which now projects AI-driven revenue growth of 62.7% for the full year, a figure that would have seemed fantastical 18 months ago. Whether that forecast holds depends entirely on whether the AI buildout continues at its current pace. If it does, Samsung just posted the first evidence that its turnaround is real, not a temporary bounce from cyclically low memory prices. If it doesn't, the company has quietly locked in enough long-term supply deals to survive a pullback. Samsung, for once, is playing it safe on both sides.
