
Micron’s fastest memory for modern data centers: HBM2E

Data centers are constantly evolving to solve the challenges of storing, moving, and analyzing data quickly and efficiently. To a large extent, its continuous development is driven by the four high-performance application trends shown below.

Author: Bill Randolph, Director of Micron’s Global Graphics Memory Business


Traditional gaming and professional visualization applications have largely been confined to personal computers, where they use fast GDDR memory. But with the rise of artificial intelligence (AI) training and inference and of high-performance computing, data centers increasingly rely on the fastest memory available: high-bandwidth memory (HBM). Application architects in these fields seek the highest possible bandwidth.

Let’s take a moment to consider why. The answer is data: massive amounts of data!


Data is accelerating. According to IDC’s “Global DataSphere 2021” report, the world generated 36 zettabytes of data in 2018; that figure rose to 64 zettabytes in 2020 and is projected to reach 146 zettabytes in 2024.
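The IDC figures quoted above imply a steep compound annual growth rate. A minimal sketch of that arithmetic (the function name and the ~23% result are derived here, not stated by IDC):

```python
# Implied compound annual growth rate (CAGR) of the global datasphere,
# using the IDC figures quoted above (values in zettabytes).
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

# 64 ZB in 2020 growing to 146 ZB in 2024 -> roughly 23% per year.
rate = cagr(64, 146, 2024 - 2020)
print(f"{rate:.1%}")  # 22.9%
```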

AI, deep learning, and machine learning all depend on massive amounts of data. As a path to intelligence, AI is a game changer, yielding insights beyond what human-led discovery can reach. Where we once programmed our computing infrastructure explicitly, we can now train it on massive datasets to serve fields as varied as smart agriculture, personalized education, and precision medicine.

Micron provides ultra-bandwidth solutions such as GDDR and HBM to power the analysis engines that train the next wave of AI algorithms. For example, an AI model with more than 1.3 billion parameters cannot fit on a single GPU, even one with 32 GB of memory. Expanding memory capacity therefore lets larger models, with more parameters, stay closer to the compute cores. By increasing bandwidth and capacity while reducing the latency introduced by memory and storage, we accelerate time to insight and help customers gain a greater competitive advantage.
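A back-of-the-envelope calculation shows why a 1.3-billion-parameter model strains a 32 GB GPU during training. The per-parameter byte counts below are common rules of thumb for fp32 training with an Adam-style optimizer, not Micron figures:

```python
# Rough training-memory sketch: fp32 training with Adam typically needs
# ~16 bytes per parameter (4 for weights, 4 for gradients, 8 for the two
# optimizer moments), before counting activations. These are assumptions.
def training_bytes_per_param(weight: int = 4, grad: int = 4, optimizer: int = 8) -> int:
    return weight + grad + optimizer

params = 1.3e9  # 1.3 billion parameters, as in the example above
gib = training_bytes_per_param() * params / 2**30
print(f"~{gib:.1f} GiB for model and optimizer state alone")  # ~19.4 GiB
```

Activations, framework overhead, and batch data come on top of that ~19.4 GiB, which is how a 1.3B-parameter model overruns a 32 GB device.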

An accelerator-based, multi-component heterogeneous approach

Data center infrastructure is being redefined to cope with the growth of data-intensive workloads and applications, as well as evolving use cases and new business opportunities. Data centers have traditionally been CPU-centric, processing data with memory (such as DDR4) and storage (such as SSDs). With the emergence of modern workloads such as AI, this traditional architecture has become a bottleneck that keeps the CPU from running at the speed high-performance applications demand.

To close this widening performance gap, the current approach offloads certain functions from the CPU onto dedicated hardware. This accelerator-based computing model is becoming central to the heterogeneous data center. The modern data center architecture deploys a variety of components, each focused on providing a specific function or on processing a particular type and format of data, significantly improving the speed and performance of the system as a whole.

Alongside traditional CPUs with DDR4 (and soon DDR5) main memory, we now add GPUs for acceleration, plus FPGAs and ASICs for other functions. Modern data centers can draw on these different compute capabilities, and pairing them with different types of memory is equally essential to deliver the high performance that varied workloads require.

This is where HBM comes in: it pushes performance even higher. Micron HBM2E is the latest generation of HBM and our fastest DRAM, enabling us to bring better insights to our customers.


HBM2E: the star of ultra-bandwidth solutions

Micron leads the market with continuously evolving memory solutions, changing the world by meeting the shifting needs of emerging applications.

HBM2E builds vertically stacked DRAM connected by through-silicon via (TSV) channels. (For more information, see Micron’s technical brief “Integrating and Using HBM2E Memory.”) Micron has been developing stacked DRAM for 20 years and has earned thousands of patents along the way. (Akshay Singh, Micron’s head of R&D, gives a more detailed account in his article “Stacked Silicon Miracle.”) As our stacked-DRAM research continues, we plan to develop new products that meet the high-performance, low-power requirements of ever more data-intensive workloads.

[Figure: Micron builds HBM2E from vertically stacked DRAM, connecting the layers with through-silicon via (TSV) channels]

The high-bandwidth memory architecture was designed to meet the industry’s requirements for bandwidth, power, and form factor. It is now the AI industry’s standard memory solution, widely deployed in data centers. HBM2E is the third standard in the HBM series, after HBM1 and HBM2. HBM2E provides a very wide multi-channel I/O interface (1,024 bits wide) over very short physical channels. Crucially, it achieves very high memory density in a very small footprint.
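The 1,024-bit-wide interface mentioned above is what gives an HBM2E stack its aggregate bandwidth. A minimal sketch of the arithmetic, assuming a 3.2 Gb/s per-pin data rate (a rate Micron has shipped for HBM2E; treat the exact figure as an assumption for this illustration):

```python
# Aggregate bandwidth of one HBM stack: bus width (bits) times per-pin
# data rate (Gb/s), divided by 8 to convert bits to bytes.
def stack_bandwidth_gbps(bus_width_bits: int = 1024, pin_rate_gbps: float = 3.2) -> float:
    """Per-stack bandwidth in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

print(stack_bandwidth_gbps())  # 409.6 GB/s per stack at 3.2 Gb/s/pin
```

By contrast, a narrow-but-fast GDDR interface reaches high bandwidth through per-pin speed; HBM gets there through sheer width, which is why it can run at lower per-pin rates and better energy efficiency.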

HBM2E sits on an interposer very close to the GPU or CPU, usually within the same package or thermal enclosure. Its very wide I/O bus and higher density deliver the high performance and energy efficiency that the accelerator-based computing model of modern data centers requires.

For an item-by-item comparison of the data center accelerator memory landscape with Micron’s high-performance memory, see Table 1 in the white paper “Requirements for Ultra-Bandwidth Solutions.” HBM2E’s superior I/O performance, bandwidth, and energy efficiency make it the cornerstone of Micron’s ultra-bandwidth solutions portfolio.


As high-performance application demands drive the development of next-generation system architectures and heterogeneous data centers, Micron’s HBM2E memory and ultra-bandwidth solutions provide the critical memory and advanced system performance needed to help transform data into insight.

Bill Randolph

Bill Randolph is the director of Micron’s global graphics memory business, responsible for business development. His work includes growing Micron’s partner business, with a focus on Micron’s high-speed memory portfolio for the game console and high-end graphics markets.

Bill has been with Micron for more than 10 years, holding various roles in ecosystem development and in consumer/graphics business development. He entered the DRAM industry in 1990 and previously worked in design, development center management, and sales at Mitsubishi and Qimonda. Bill holds a bachelor’s degree in electrical engineering from Florida State University and a master’s degree in electrical engineering from Georgia Tech.

