
We’re Funding AI By Taxing Memory

AI runs on memory, and right now memory is running us. I argue that high-bandwidth memory has shifted the balance of power in tech. Consumers are footing the bill while one supplier sits at the center of the boom.

At the heart of this story is SK Hynix. The company nearly failed in 2012. Today it anchors the supply of the most prized memory on Earth: HBM. This is not just market momentum. It is market control.

The Bet That Rewired AI

The old playbook for memory was simple: shrink cells, pack more bits, go faster. That hit a wall once AI training began to move data at extreme rates. As one engineer put it:

“The bottleneck isn’t compute anymore. It’s the memory that feeds it… DDR5 hits that wall catastrophically.”

SK Hynix saw the problem early. With AMD, it moved memory next to the processor and stacked it sky-high. That design, high-bandwidth memory, trades easy bits for extreme throughput. It is elegant on paper and brutal in a fab.
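
To put "extreme throughput" in rough numbers, here is a hedged, illustrative Python sketch. The per-stack HBM3E figure matches the one cited later in this piece; the stack count and the DDR5 desktop configuration are assumptions for illustration, not vendor specifications:

    # Illustrative bandwidth comparison; assumed figures, not vendor specs.
    HBM3E_PER_STACK_TBPS = 1.2    # ~1.2 TB/s per HBM3E stack (figure cited below)
    STACKS_PER_ACCELERATOR = 8    # assumed stack count for a high-end accelerator

    DDR5_CHANNEL_GBPS = 5600 * 8 / 1000   # 64-bit channel at 5600 MT/s -> ~44.8 GB/s
    DESKTOP_CHANNELS = 2                  # typical dual-channel desktop

    hbm_tbps = HBM3E_PER_STACK_TBPS * STACKS_PER_ACCELERATOR
    ddr5_tbps = DDR5_CHANNEL_GBPS * DESKTOP_CHANNELS / 1000

    print(f"Assumed HBM3E accelerator: ~{hbm_tbps:.1f} TB/s")
    print(f"Dual-channel DDR5-5600 desktop: ~{ddr5_tbps:.2f} TB/s")
    print(f"Rough ratio: ~{hbm_tbps / ddr5_tbps:.0f}x")

On those assumptions, the accelerator has roughly a hundred times the bandwidth of the desktop, which is the gap behind the "wall" in the quote above.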

HBM stacks up to 12 dies today and 16 with HBM4. Each extra layer slashes yield, because losses compound: even at 97% per-die yield, a 12-high stack falls below 70%, and at 90% it collapses below 30%. HBM eats silicon and time, then asks for more.
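
To make the compounding concrete, here is a minimal back-of-the-envelope sketch. It assumes each layer's yield is independent, a simplification that ignores additional losses from bonding defects and final test:

    # Compound yield of an n-high die stack, assuming independent per-die yield.
    def stack_yield(per_die_yield: float, layers: int) -> float:
        """Probability that every die in an n-high stack is good."""
        return per_die_yield ** layers

    for y in (0.99, 0.97, 0.90):
        for layers in (12, 16):
            print(f"per-die yield {y:.0%}, {layers}-high stack -> "
                  f"{stack_yield(y, layers):.1%}")

At 97% per-die yield the 12-high stack already lands near 69%, and at 90% it drops below 30%, which is why each extra layer is so expensive.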

The Cost You Feel, The Control You Don’t See

Here is the rub. The same suppliers make HBM and standard DRAM on the same tools. Each shift to HBM pulls capacity away from laptops and phones. That is why prices have spiked. One engineer put a hard number on it:

“Memory prices are up 638% year over year.”

A 638% jump means prices are more than seven times what they were a year ago. Every NVIDIA data-center GPU ships with several HBM stacks. One Rubin-class rack can hold memory equal to 30,000 smartphones. Demand is not just high. It is devouring supply.

  • HBM3E hits about 1.2 TB/s per stack; HBM4 aims above 2 TB/s.
  • NVIDIA’s Rubin and AMD’s MI400 both depend on HBM4.
  • SK Hynix reportedly locked ~70% of NVIDIA’s HBM4 orders.
  • One megaproject could draw up to 40% of global DRAM output.
  • New Hynix capacity targets: 60,000 wafers per month at M15X.

These figures explain the “invisible tax” on your next device. We are subsidizing AI infrastructure every time we upgrade.

How One Company Took the Wheel

When NVIDIA and AMD went shopping, Samsung struggled with yields. Micron was not ready. SK Hynix picked up the phone with working HBM at scale. Since then, it has poured money into massive sites: M15X in Cheongju and a $410 billion complex in Yongin with four fabs. That is a bet on staying essential, not just competitive.

Inside, tools like TC bonders stack and fuse 16 layers with hair-thin tolerances. They cost tens of millions each, and fabs need them by the hundred. Supply will rise, but never cheaply.

The Counterpoint—and Why It Falls Short

Some argue the crunch will fade as capacity ramps. Yes, Samsung rebuilt its HBM process and looks ready for HBM4. Micron is gaining ground, backed by U.S. support. And every major supplier is racing to new nodes. SK Hynix and Samsung are shifting from 1β to 1γ, promising about 30% more bits per wafer.

But capacity takes years, and node shifts mean months of weak yields before recovery. New fabs will not hit meaningful output until late 2027. The demand curve is not leveling. It is steepening as thousands of AI data centers break ground. Relief will come late and likely at a higher price floor.

Who Wins While We Pay

SK Hynix is the clear winner on HBM. But there is an ironic second winner: Samsung. While its rivals shifted capacity to HBM, Samsung kept churning out DDR5. With DRAM tight, its margins there ballooned. Two winners, for different reasons. Everyone else pays more for memory.


I do not see this ending before 2028. Even then, prices may not snap back. The upfront spend, the tooling limits, and the AI buildout set a new normal.

What We Should Demand

We need transparency on supply constraints and pricing. Large buyers should publish capacity commitments. Policymakers should support diversified memory supply, not just domestic fabs that arrive late. Cloud buyers must weigh HBM use against consumer access. And as users, we should delay upgrades that only add small gains, vote with our wallets, and press vendors on memory costs.

The AI era should not price out the devices that let us use it. If we keep ignoring the memory math, we will keep paying this silent tax—without a say in how high it goes.


Frequently Asked Questions

Q: What is HBM and why does it matter?

High-bandwidth memory stacks multiple DRAM dies vertically and places them next to the processor. It moves data far faster than conventional memory, giving AI training the throughput it needs to run efficiently.

Q: Why are consumer memory prices rising so fast?

Suppliers are shifting tools and wafers to HBM. That cuts output for laptops and phones. With demand spiking, prices rise across both HBM and standard DRAM.

Q: Can Samsung or Micron break SK Hynix’s lead?

Yes, especially with HBM4. Samsung’s revamped process looks competitive, and Micron is pushing hard. A yield or qualification win could narrow the gap quickly.

Q: When could supply catch up with demand?

New fabs will likely deliver meaningful output in late 2027. Node migrations help sooner, but yields dip before they improve. A full reset may not arrive before 2028.


Q: What can buyers do right now?

Delay non-urgent upgrades, choose devices with better memory value, and push vendors for clarity on pricing. Large cloud buyers should publish capacity plans to reduce shocks.
