Industry’s fastest, highest-capacity HBM to advance generative AI innovation





Generative AI opens a world of new forms of creativity and expression, like the image above, by using large language models (LLMs) for training and inference. Utilization of compute and memory resources makes the difference in time to deploy and response time. Micron HBM3E provides higher memory capacity that improves performance and reduces CPU offload for faster training and more responsive queries when inferencing LLMs such as ChatGPT.



AI unlocks new possibilities for businesses, IT, engineering, science, medicine and more. As larger AI models are deployed to accelerate deep learning, maintaining compute and memory efficiency is important to address performance, cost and power to deliver benefits for all. Micron HBM3E improves memory performance while focusing on energy efficiency, increasing performance per watt and reducing the time to train LLMs such as GPT-4 and beyond.



Scientists, researchers and engineers are challenged to discover solutions for climate modeling, curing cancer, and renewable and sustainable energy resources. High-performance computing (HPC) propels time to discovery by executing very complex algorithms and advanced simulations that use large datasets. Micron HBM3E provides higher memory capacity and improves performance by reducing the need to distribute data across multiple nodes, accelerating the pace of innovation.

1 Data rate testing estimates based on shmoo plot of pin speed performed in manufacturing test environment
2 50% more capacity for the same stack height
3 Power and performance estimates based on simulation results of workload use cases

4 Based on internal Micron model referencing an ACM publication, compared to the current shipping platform (H100)
5 Based on internal Micron model referencing Bernstein’s research report, NVIDIA (NVDA): A bottoms-up approach to sizing the ChatGPT opportunity, February 27, 2023, compared to the current shipping platform (H100)
6 Based on system measurements using commercially available H100 platform and linear extrapolation


Micron delivers industry’s fastest, highest-capacity HBM3E to advance generative AI innovations

With 1.2 TB/s of bandwidth, the 8-high 24GB HBM3E from Micron delivers superior power efficiency enabled by the advanced 1β process node.

Read HBM3E press release >


Micron's Girish Cherussery, Sr. Director, High-Performance Memory, sits down with Patrick Moorhead and Daniel Newman from Six Five to discuss high bandwidth memory (HBM) and Micron's newest HBM3E product.

View video on HBM3E >


We are in the dawn of the era of artificial intelligence (AI), where AI is expected to be a central part of our everyday lives. This is driven by advances in compute and memory technology. High bandwidth memory (HBM) is at the forefront of AI innovations.
Read HBM3E technical brief >

1β DRAM technology

Micron is shipping the industry’s first DRAM manufactured on next-generation 1β (1-beta) process technology. It represents state-of-the-art innovation from Micron’s continued investment in R&D and process technology advancement. Micron’s 1β process technology allows development of memory products with increased performance, greater capacity, higher density and lower relative power consumption than prior generations.

Learn more >


There is no difference; it is simply a name change.
Micron’s HBM3E delivers an industry-leading pin speed of >9.2 Gb/s and can support data rates backward compatible with first-generation HBM2 devices.
Micron’s HBM3E delivers an industry-leading bandwidth of >1.2 TB/s per placement. HBM3E has 1024 IO pins, and Micron’s pin speed of >9.2 Gb/s achieves >1.2 TB/s.
Micron’s industry-leading HBM3E provides 24GB capacity per placement with an 8-high stack. Micron plans to announce a 36GB 12-high HBM3E device in the future.
HBM2 offers 8 independent channels running at 3.6 Gb/s per pin, providing up to 410 GB/s of bandwidth, and is available in 4GB, 8GB and 16GB capacities. HBM3E offers 16 independent channels and 32 pseudo channels. Micron’s HBM3E delivers a pin speed of >9.2 Gb/s at an industry-leading bandwidth of >1.2 TB/s per placement. Micron’s HBM3E provides 24GB using an 8-high stack, with plans for a 36GB 12-high stack in the future.
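The quoted per-placement bandwidth follows directly from the interface width and pin speed. A minimal sketch of the arithmetic (the function name is illustrative, not a Micron API):

```python
def hbm_bandwidth_gb_per_s(io_pins: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth in GB/s: IO pins × per-pin data rate (Gb/s) ÷ 8 bits per byte."""
    return io_pins * pin_speed_gbps / 8

# HBM3E interface: 1024 IO pins
print(hbm_bandwidth_gb_per_s(1024, 9.2))  # 1177.6 GB/s at exactly 9.2 Gb/s
print(hbm_bandwidth_gb_per_s(1024, 9.6))  # 1228.8 GB/s
```

Pin speeds above roughly 9.4 Gb/s are what push the 1024-pin interface past the >1.2 TB/s figure quoted above.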
Please see our technical brief.