Beyond the Memory Wall: A Case for Memory-centric HPC System for Deep Learning

Abstract
As the models and datasets used to train deep learning (DL) workloads scale, system architects face new challenges, one of which is the memory capacity bottleneck: the limited physical memory inside the accelerator device constrains the algorithms that can be studied. We propose a memory-centric deep learning system that can transparently expand the memory capacity available to the accelerators while also providing fast inter-device communication for parallel training. Our proposal aggregates a pool of memory modules locally within the device-side interconnect; these modules are decoupled from the host interface and serve as a vehicle for transparent memory capacity expansion. Compared to conventional systems, our proposal achieves an average 2.8x speedup on eight DL applications and increases the system-wide memory capacity to tens of TBs.
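The capacity bottleneck described above stems from the fixed physical memory on each accelerator; the paper's remedy is a hardware-level memory pool attached to the device-side interconnect. As a rough software-only analogue (not the paper's mechanism), CUDA's existing unified memory already lets a single allocation exceed device capacity by migrating pages on demand. The minimal sketch below only probes the device's physical capacity and then requests an oversubscribed managed allocation; the 1.5x request size is an arbitrary illustrative choice.

// Minimal CUDA sketch of the memory capacity problem. Unified memory
// (cudaMallocManaged) is used here as an existing software analogue of
// transparent capacity expansion; it is NOT the hardware pooling mechanism
// proposed in the paper.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);            // physical capacity of device 0
    printf("device memory: %zu MiB total, %zu MiB free\n",
           total_b >> 20, free_b >> 20);

    // Request more memory than the device physically holds (illustrative 1.5x).
    size_t want = total_b + total_b / 2;
    float* buf = nullptr;
    cudaError_t err = cudaMallocManaged(&buf, want);   // oversubscribed allocation
    printf("oversubscribed alloc of %zu MiB: %s\n",
           want >> 20, cudaGetErrorString(err));

    cudaFree(buf);
    return 0;
}

With a plain cudaMalloc the same request would simply fail, which is exactly the constraint the abstract calls the memory capacity bottleneck; managed memory succeeds but pays page-migration overhead over the host interface, the cost the proposed device-side memory pool is meant to avoid.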
Publisher
IEEE Computer Society
Issue Date
2018-10-22
Language
English
Citation
51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 2018), pp. 148-161
DOI
10.1109/MICRO.2018.00021
URI
http://hdl.handle.net/10203/247305
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.