https://tinyurl.com/ywfs2xpy
Samsung breaks through the memory bottleneck, announcing HBM-PIM and LPDDR-PIM research results
https://tinyurl.com/ymux929s
Samsung breaks through memory bottleneck; announces research results on
HBM-PIM and LPDDR-PIM
At the 2023 Hot Chips forum, alongside Intel's data center chip announcements, Korean outlet TheElec reported that Samsung Electronics has announced research results on high-bandwidth memory (HBM) processing-in-memory (PIM) and low-power DDR (LPDDR) PIM as part of its push into the AI sector.
Samsung announced results for high-bandwidth-memory in-memory computing (HBM-PIM) and low-power-DRAM in-memory computing (LPDDR-PIM).
Previously, Samsung and AMD began a collaboration related to PIM technology.
Samsung equipped HBM-PIM memory onto AMD's commercial GPU accelerator card,
the MI-100. According to Samsung's research results, applying HBM-PIM to generative AI more than doubles the accelerator's performance and energy efficiency compared with existing HBM.
Samsung and AMD previously began collaborating on processing-in-memory (PIM); Samsung integrated HBM-PIM into AMD's MI-100 GPU. According to Samsung's research, at the same power draw, HBM-PIM can double compute performance.
To solve the memory bottleneck that has begun to appear in the AI
semiconductor sector in recent years, next-gen memory technologies like
HBM-PIM have received significant attention. Using PIM technology, HBM-PIM performs computation inside the memory itself, simplifying data movement and thereby improving performance and power efficiency.
To address the memory bottleneck in AI computing, in-memory computing has drawn considerable attention. HBM-PIM performs computation directly inside the memory, simplifying data-transfer steps and thereby improving performance and power efficiency.
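As a rough illustration of why computing inside the memory helps, the following toy Python model counts the bytes that cross the memory bus for a repeated matrix-vector multiply, with and without PIM. This is purely illustrative: the matrix sizes, pass count, and 16-bit data type are assumptions, not details of Samsung's design.

```python
# Toy model (illustrative only, not Samsung's implementation): count bytes
# crossing the memory bus for repeated matrix-vector multiplies.
def bytes_moved(rows, cols, passes, pim, dtype_size=2):
    weight = rows * cols * dtype_size   # weight matrix resident in DRAM
    vec_in = cols * dtype_size          # input vector per pass
    vec_out = rows * dtype_size         # output vector per pass
    if pim:
        # PIM: weights never leave the memory die; only vectors cross the bus.
        return passes * (vec_in + vec_out)
    # Conventional: weights are streamed to the processor on every pass.
    return passes * (weight + vec_in + vec_out)

conv = bytes_moved(4096, 4096, passes=100, pim=False)
pim = bytes_moved(4096, 4096, passes=100, pim=True)
print(f"conventional: {conv/1e6:.1f} MB, PIM: {pim/1e6:.1f} MB, "
      f"reduction: {conv/pim:.0f}x")
```

Because the weight matrix dominates the traffic and, under PIM, never leaves the memory die, the bytes moved drop by orders of magnitude; this is the data-movement saving the article describes.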
Furthermore, to verify a Mixture of Experts (MoE) model, Samsung used 96 HBM-PIM-equipped MI-100 units to build an HBM-PIM cluster. On the MoE model, the HBM-PIM accelerator doubled performance and tripled power efficiency compared to standard HBM.
On the MoE model, Samsung's HBM-PIM delivered 2x the performance and 3x the power efficiency.
Industry sources explained that the speed of memory development has been
slower compared to the advancements in AI accelerator technology. To
alleviate this memory bottleneck, it's necessary to expand the application of
next-gen semiconductors like HBM-PIM. Additionally, in areas such as large language models (LLMs), many data sets are reused frequently, so computing within HBM-PIM can also reduce data movement.
Memory speed has improved far more slowly than AI accelerators have advanced; relieving the memory bottleneck makes adopting HBM-PIM necessary. In addition, LLMs reuse much of their data repeatedly, so HBM-PIM can greatly reduce data movement.
On the other hand, Samsung also introduced the "LPDDR-PIM," which combines
mobile DRAM with PIM to enable direct processing and computing within edge
devices. Notably, because LPDDR-PIM is designed for edge devices, it offers lower bandwidth (102.4 GB/s) but consumes 72% less power than conventional DRAM.
Separately, for edge devices, Samsung combined mobile DRAM with PIM to create LPDDR-PIM, usable inside edge devices; although its bandwidth is lower, power consumption drops by 72%.
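The article does not say how the 102.4 GB/s figure is reached. One hypothetical configuration that reproduces exactly that number is a 128-bit LPDDR5-class interface at 6400 MT/s; the sketch below is a back-of-envelope check under that assumption, not a disclosed specification.

```python
# Back-of-envelope bandwidth check. The 128-bit width and 6400 MT/s rate
# are assumptions chosen to reproduce the quoted figure, not Samsung specs.
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    # MT/s * bits / 8 -> MB/s; / 1000 -> GB/s
    return transfer_rate_mts * bus_width_bits / 8 / 1000

print(peak_bandwidth_gbs(6400, 128))  # 102.4
```

For comparison, this is an order of magnitude below HBM-class stacks, consistent with the article's point that LPDDR-PIM trades bandwidth for edge-friendly power consumption.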
Previously, Samsung revealed its AI memory plans during its 2Q23 earnings
call. It not only mentioned that the HBM3 supply was undergoing customer
verification but also stated that it's actively developing new edge AI memory
products and PIM technology. Looking ahead, both HBM-PIM and LPDDR-PIM are
still some time away from commercialization, mainly because PIM remains considerably more expensive than existing HBM.
Samsung previously said its HBM3 is undergoing customer qualification, and that it is also developing new edge-AI memory products and PIM technology. For now, HBM-PIM and LPDDR-PIM remain some distance from commercialization, mainly because PIM is still very expensive.
The Hot Chips forum is a prominent academic event in the semiconductor
industry. It's typically held in late August. Apart from Samsung, other major
companies like SK Hynix, Intel, AMD, and Nvidia also participated in this
event.
Thoughts:
If PIM actually gets built, advanced packaging would no longer be needed at all...