Advancing the rate of AI innovation
HBM3E is built for AI and supercomputing with industry-leading process technology
Frequently asked questions
Micron's HBM3E 8-high 24GB and HBM3E 12-high 36GB deliver industry-leading performance, with bandwidth greater than 1.2 TB/s and 30% lower power consumption than any other competitor in the market.
Micron's HBM3E 8-high 24GB will begin shipping in NVIDIA H200 Tensor Core GPUs in the second quarter of 2024. Micron HBM3E 12-high 36GB samples are available now.
Micron's HBM3E 8-high and 12-high modules deliver industry-leading pin speeds greater than 9.2 Gbps and can support backward-compatible data rates of first-generation HBM2 devices.
Micron's HBM3E 8-high and 12-high solutions deliver industry-leading bandwidth of more than 1.2 TB/s per placement. HBM3E has 1024 IO pins, and a pin speed greater than 9.2 Gbps achieves a rate above 1.2 TB/s.
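As a quick back-of-envelope check of how the bandwidth figure follows from the pin count and pin speed quoted above, the sketch below assumes a pin speed of exactly 9.2 Gbps and decimal units (1 TB = 1000 GB); speeds above 9.2 Gbps push the result past 1.2 TB/s per placement.

```python
# Per-placement bandwidth estimate from the HBM3E figures quoted above.
IO_PINS = 1024          # HBM3E IO pins per placement
PIN_SPEED_GBPS = 9.2    # gigabits per second per pin (floor of the quoted ">9.2 Gbps")

total_gbps = IO_PINS * PIN_SPEED_GBPS   # aggregate gigabits per second
bandwidth_gbs = total_gbps / 8          # bits -> bytes: GB/s
bandwidth_tbs = bandwidth_gbs / 1000    # decimal GB/s -> TB/s

print(f"{bandwidth_gbs:.1f} GB/s ≈ {bandwidth_tbs:.2f} TB/s")
# At exactly 9.2 Gbps this lands just under 1.2 TB/s; the quoted
# ">9.2 Gbps" pin speed is what carries the product above 1.2 TB/s.
```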
Micron's industry-leading HBM3E 8-high provides 24GB capacity per placement. The recently announced Micron HBM3E 12-high cube will offer a striking 36GB of capacity.
HBM2 offers 8 independent channels running at 3.6 Gbps per pin, providing up to 410 GB/s of bandwidth in 4GB, 8GB, and 16GB capacities. HBM3E offers 16 independent channels and 32 pseudo channels. Micron's HBM3E delivers pin speed greater than 9.2 Gbps at an industry-leading bandwidth of more than 1.2 TB/s per placement. Micron's HBM3E provides 24GB of capacity using an 8-high stack and 36GB using a 12-high stack. Micron's HBM3E delivers 30% lower power consumption than competitors.
Please see our Product Brief.
Featured resources
1. Data rate testing estimates based on shmoo plot of pin speed performed in a manufacturing test environment.
2. 50% more capacity for same stack height.
3. Power and performance estimates based on simulation results of workload use cases.
4. Based on internal Micron model referencing an ACM Publication, as compared to the current shipping platform (H100).
5. Based on internal Micron model referencing Bernstein’s research report, NVIDIA (NVDA): A bottoms-up approach to sizing the ChatGPT opportunity, February 27, 2023, as compared to the current shipping platform (H100).
6. Based on system measurements of a commercial H100 platform and linear extrapolation.