An Investor’s Guide to HBM (High-Bandwidth Memory) For AI


Highlights

There are many types of memory chip products, but HBM is currently highly favored for AI applications.
HBM requires a complex manufacturing process that pulls in many different companies beyond the primary memory chip IDMs like SK hynix, Micron, and Samsung.
With accelerated computing system demand from Nvidia, AMD, and others on the rise, HBM sales will grow — but there are a lot of ways investors can approach investing in these chips.

Related video: How to Invest in Semiconductor Stocks 2025 – Memory Chip Manufacturers (IDMs)

Micron (MU) has an earnings update this week. Ahead of the report – let’s call it an early kick-off to Q2 2025 earnings season – let’s review what HBM is, and why it matters.

The memory chip market in 2025

In 2024, memory sales notched a rapid recovery from the bear market of 2022-23. Memory chip IDMs (integrated device manufacturers, companies like Micron that both design and manufacture semiconductors) grew their combined global sales roughly 80% in 2024 versus 2023 as demand rebounded from that downturn.

In 2025, memory chip sales are expected to slow to a more modest low- to mid-teens percentage growth rate versus 2024, more in line with average industry growth expectations. 2024 looked like a banner year, but bear in mind much of that surge was simply the cyclical recovery from the memory downturn that began in 2022. Still, at an expected ~$185 billion in sales this year, memory chips should make up about one-quarter of total semiconductor end-market revenue.
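To see how those figures hang together, here is a quick back-of-envelope sketch in Python. The 2023 base and the total-semiconductor figure are our own rough assumptions chosen to be consistent with the percentages above, not official forecasts:

```python
# Back-of-envelope check on the memory market math above.
# The 2023 base and the total-semiconductor figure are rough assumptions,
# not official forecasts.

memory_2023 = 90e9        # assumed ~$90B memory revenue at the 2023 trough
growth_2024 = 0.80        # ~80% rebound in 2024
growth_2025 = 0.14        # low- to mid-teens growth expected in 2025
total_semis_2025 = 700e9  # assumed ~$700B total semiconductor revenue in 2025

memory_2024 = memory_2023 * (1 + growth_2024)
memory_2025 = memory_2024 * (1 + growth_2025)

print(f"2024 memory revenue: ~${memory_2024 / 1e9:.0f}B")              # ~$162B
print(f"2025 memory revenue: ~${memory_2025 / 1e9:.0f}B")              # ~$185B
print(f"Share of total semis: ~{memory_2025 / total_semis_2025:.0%}")  # ~26%
```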

Chip Stock Investor's "Semiconductor Industry Flow" chart illustrating the electronics manufacturing supply chain.

Early indications from these memory IDMs are that the sales growth party could continue into 2026. The driving force behind it all? AI, of course. Device makers of everything from PCs to smartphones to data center equipment want in on the “AI” thing. And AI requires a lot more memory, so memory bit demand is expected to keep growing at a robust pace for years (note that bit growth does not translate one-for-one into memory chipmaker revenue, since the cost per bit is expected to keep falling).

Nowhere is this growth more dramatic than in the AI (read: accelerated computing) data center market. And for this, the investing world has become obsessed with HBM, or high-bandwidth memory.

Chip Stock Investor's end market sales chart for broad semiconductor categories.

But what is HBM? And what’s the best way to invest in it? To understand, let’s start with the basics: what memory is (in layperson’s terms), and then what DRAM is.

Memory chips – just a commodity?

All on their own, memory chips aren’t so useful. They are a mere ingredient in a total computing system, and in that sense memory chips can be called a commodity. In this context, “commodity” refers to whether the manufactured good is the final product or just one part of a final product. So despite the incredible complexity of manufacturing these devices, memory chips are indeed a commodity: a part for a computing system, and one that can be sourced from multiple suppliers (albeit suppliers with close, but varying, technological capabilities).

A table showing the three basic parts of a computer: logic, networking, and data storage.

Memory chips can further be divided up into broad categories, based on intended use within a computing system (from PCs to smartphones and wearable devices to data centers):

A table showing three basic ways to categorize memory: primary vs. secondary storage, volatile vs. non-volatile memory, and RAM vs. ROM.

Memory chips can also be divided up by product type, for example DRAM vs. NAND vs. hard-disk drives. For a full breakdown, see our video on memory chipmakers here: How to Invest in Semiconductor Stocks 2025 – Memory Chip Manufacturers (IDMs)

But for now, we’ll focus on DRAM, which quite literally provides the basic building blocks of the HBM used in many of the new AI applications under development.

DRAM (dynamic random-access memory)

DRAM is a type of volatile memory (see the definitions in the slide above). DRAM chips are most frequently used for short-term data storage, like data held in queue for the processor (a logic chip such as a CPU or GPU) and storage for software command instructions.

DRAM providers have generally consolidated into just a few large players, although a number of other diversified IDMs (like Infineon) provide some niche DRAM types. A few of China’s state-backed IDMs, like CXMT (not publicly traded), have begun making inroads into the market as well.

The DRAM chip market is dominated by SK hynix, Micron, Samsung, CXMT and other upstarts in China, and other IDMs like Infineon.

Within DRAM, there are further specific types depending on the application (like mobile versus enterprise server or data center). Each of these DRAM products is manufactured differently depending on the memory performance needs of the application. But for our purposes here, we’ll stop and leave it at DRAM as a general memory chip type.

HBM (high-bandwidth memory)

As alluded to earlier, HBM is an advanced type of DRAM product, made famous by the data center and accelerated computing boom for AI. Given the complexity and expense involved in producing these chips, the HBM market is even more concentrated than the broader DRAM market, dominated by the most advanced players: SK hynix, Micron, and Samsung (the latter of which has had difficulty ramping up its HBM3 production). China’s IDMs, like CXMT, have also begun sampling the first couple of generations of HBM technology; CXMT is estimated to hold roughly 5% of the HBM market as of 2025.

A table showing the even more concentrated HBM market, made up primarily of SK hynix, Samsung, and Micron.

Where did HBM come from in the first place? Not from a memory IDM at all. In the midst of the Great Financial Crisis of 2008-09, while also transforming itself from an IDM into an “asset light” semiconductor designer, AMD began developing a new type of memory to help its products keep scaling in performance. Later, it got some help on HBM from its supplier SK hynix. A brief timeline is in the slide below.

A brief timeline showing AMD’s start of HBM development in 2008, SK hynix’s first production in 2013, and first-generation HBM being adopted as an industry standard by JEDEC in 2013.

AMD’s first widely available product featuring an HBM module was announced in 2015: https://ir.amd.com/news-events/press-releases/detail/619/amd-ushers-in-a-new-era-of-pc-gaming-with-radeontm-r9-and-r7-300-series-graphics-line-up-including-worlds-first-graphics-family-with-revolutionary-hbm-technology 

But why HBM at all? 

It’s about cramming more data into the memory chip, and then moving that data to the logic chip (the GPU) more efficiently. Originally used in an AMD graphics card, HBM is generally too expensive and power hungry for most consumer applications. But for data center AI, it shines. As you may know, new AI models need enormous amounts of data, both for training and later during operation (inference), and HBM helps meet those data-intensive requirements.
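To put “data-intensive” in rough numbers, here is an illustrative sketch. The model size, precision, and per-stack capacity are our assumptions for illustration, not figures tied to any specific product:

```python
# Illustrative only: how quickly AI model weights eat up memory capacity.
# Model size, precision, and HBM stack capacity are assumptions, not
# specs for any particular product.
import math

params = 70e9          # a hypothetical 70-billion-parameter model
bytes_per_param = 2    # 16-bit (FP16/BF16) weights

weights_gb = params * bytes_per_param / 1e9
stack_capacity_gb = 24  # assumed capacity of a single HBM stack

stacks_needed = math.ceil(weights_gb / stack_capacity_gb)
print(f"Weights alone: ~{weights_gb:.0f} GB")               # ~140 GB
print(f"HBM stacks just to hold the weights: {stacks_needed}")  # 6
# This ignores activations, KV cache, and optimizer state, which add more.
```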

But even with HBM scaling, AI system performance (as demonstrated by Nvidia at GTC in March 2025) is increasing at a much faster pace. How can HBM keep up with the rapid scale-up of these systems from Nvidia (and AMD, and Broadcom, and others, too)? More on that topic another time.

In simple terms, HBM is made by stacking DRAM chips into a “cube,” connecting the stacked chips with vertical columns of copper (called through-silicon vias, or TSVs), and then co-packaging the HBM cubes close to the logic chip (usually a GPU) on what’s called an “interposer” (a piece of silicon substrate with data interconnects running between the HBM and the GPU). Below are simple schematics of what this looks like from SK hynix and Micron:

A side-view schematic of an HBM cube packaged next to a logic chip from SK hynix.
A side-view schematic of an HBM cube packaged next to a logic chip from Micron.
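The “bandwidth” in high-bandwidth memory comes from the very wide interface each stack presents to the logic chip across that interposer. Here is a rough sketch of the per-stack math; the per-pin data rates are approximate generational figures, not any vendor’s exact spec:

```python
# Rough peak-bandwidth math for a single HBM stack. The 1024-bit interface
# width is standard across HBM generations; per-pin data rates are
# approximate and vary by vendor and speed grade.

def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

print(f"HBM2E (~3.6 Gb/s/pin): ~{stack_bandwidth_gbs(1024, 3.6):.0f} GB/s")  # ~461
print(f"HBM3  (~6.4 Gb/s/pin): ~{stack_bandwidth_gbs(1024, 6.4):.0f} GB/s")  # ~819
print(f"HBM3E (~9.6 Gb/s/pin): ~{stack_bandwidth_gbs(1024, 9.6):.0f} GB/s")  # ~1229
# A GPU package with several stacks side by side multiplies these figures.
```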

SK hynix, Micron, and Samsung all expect rapid double-digit growth in HBM products over the next few years. Projections from the “big three” in HBM are that this AI memory product could account for upwards of half of their DRAM revenue by the end of 2025, compared to a negligible amount just a couple of years ago. Since the post-2022-23 recovery began, HBM has been the primary driver of the memory IDMs’ overall growth, with expectations implying a doubling of HBM revenue from 2024 to 2025.
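For a sense of how a doubling feeds into that mix-shift projection, here is a purely illustrative calculation; the dollar figures and mix assumptions are placeholders, not company guidance:

```python
# Purely illustrative: how a doubling of HBM revenue shifts the DRAM mix.
# All inputs are placeholder assumptions, not reported or guided figures.

dram_2024 = 100e9         # assumed combined big-three DRAM revenue, 2024
hbm_share_2024 = 0.20     # assumed HBM portion of that mix

hbm_2025 = dram_2024 * hbm_share_2024 * 2   # expectations imply a rough doubling
dram_2025 = dram_2024 * 1.15                # assume mid-teens DRAM growth overall

print(f"Illustrative 2025 HBM share of DRAM: {hbm_2025 / dram_2025:.0%}")  # ~35%
# A faster HBM ramp (or flatter non-HBM DRAM pricing) pushes the exit-rate
# mix closer to the "upwards of half" projection cited above.
```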

We’ll have a deeper dive into this topic later this week ahead of Micron’s latest earnings update. Check out https://chipstockinvestor.com/semi-insider-sign-up/ to join us over on our Discord server.

For now, let’s get a bit more into the microscopic “nuts and bolts” of how HBM is made, which can help when weighing possible places to invest.

High-level view of HBM manufacturing

Below is another slightly more detailed (but still simplified) diagram of a logic chip (GPU) packaged with an HBM cube.

An even more detailed HBM schematic from SK hynix showing the DRAM stacks connected by TSVs, a logic base chip, interposer, and logic chip, all mounted on a substrate.

Note the co-packaging of the chips on a silicon “interposer,” a device often made by the likes of TSMC for its accelerated computing system designers (Nvidia, AMD). TSMC has also been taking on more of the advanced packaging steps, represented here by the solder balls, bumps, and micro bumps that mount the various chips to the interposer, and the interposer onto the substrate (and eventually onto the computing system’s circuit board). See our blog article from last week for more on advanced packaging: https://chipstockinvestor.com/ai-makes-a-new-leader-in-fab-manufacturing-equipment-be-semi-besi-investor-day-update/

Then there are the TSVs that make the DRAM chips stackable into cubes. As we’ve covered in numerous past videos, those manufacturing process steps are largely enabled by the “Fab 5” equipment providers, including Lam Research and Applied Materials.

DRAM stacking manufacturing techniques illustrated by Applied Materials.
DRAM stacking manufacturing techniques illustrated by Lam Research.

And a final note: at the base of each HBM cube sits a controller, or HBM interface. This is a special type of logic chip that handles data clocking, timing, and routing as data moves out of memory and over to the GPU for crunching. The memory IDMs do a fair amount of this custom logic controller work themselves (Micron CEO Sanjay Mehrotra has noted they also manufacture these custom base logic dies internally). But we need not spend too much time here, as much of the design IP and patents (including the physical layer, or PHY, circuitry designs and layouts) are owned by EDA platform companies like Synopsys and Cadence Design Systems, as well as IP management and development specialists like Rambus.

EDA platform companies like Synopsys and Cadence own patents and IP for the HBM logic base die, as does IP developer Rambus.

We’ll delve more into this market later in the week after Micron earnings. And in the meantime, the discussion continues over on Semiconductor Insider.

Nicholas Rossolillo has been investing in individual stocks since 2005. He started a Registered Investment Advisor firm, Concinnus Financial, in 2014 and was a contributor for The Motley Fool from 2015-2024.

sign up for our free newsletter
