T-PIM: A 2.21-to-161.08TOPS/W Processing-In-Memory Accelerator for End-to-End On-Device Training

As the number of edge devices grows to tens of billions, the focus of intelligent computing has shifted from cloud datacenters to edge devices. On-device training, which enables the personalization of a machine learning (ML) model for each user, is crucial to the success of edge intelligence. However, battery-powered edge devices cannot afford the huge computation and memory access costs involved in training. Processing-in-Memory (PIM) is a promising technology to overcome the memory bandwidth and energy problems by combining processing logic with the memory. Many PIM chips [1]-[5] have accelerated ML inference using analog or digital logic with sparsity handling. Two-way transpose PIM [6] supports backpropagation, but it lacks the gradient calculation and weight update required for end-to-end ML training. © 2022 IEEE.
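To clarify the distinction the abstract draws, the following sketch (not T-PIM's implementation; names and the single-layer setup are illustrative assumptions) shows the compute phases an end-to-end training accelerator must support for one fully connected layer: the forward pass, backpropagation through the transposed weights (which [6] supports), and the gradient calculation and weight update (which [6] lacks).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3)) * 0.1   # weights: 4 inputs -> 3 outputs
x = rng.standard_normal(4)              # input activation
t = np.zeros(3); t[1] = 1.0             # training target
lr = 0.1                                # learning rate

# 1) Forward pass (inference): y = W^T x
y = W.T @ x

# 2) Backpropagation: propagate the output error through the
#    transposed weights (the phase two-way transpose PIM [6] covers).
dy = y - t            # output error (squared-error loss gradient)
dx = W @ dy           # error w.r.t. the input, passed to earlier layers

# 3) Gradient calculation: outer product of input and output error.
dW = np.outer(x, dy)

# 4) Weight update: the final phase required for end-to-end training.
W -= lr * dW
```

Each phase stresses memory differently: the forward and backward passes stream the same weight matrix in opposite orientations, while the gradient and update phases read and rewrite every weight, which is why handling them in-memory matters for energy.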
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2022-04
Language
English
Citation

43rd Annual IEEE Custom Integrated Circuits Conference, CICC 2022

ISSN
0886-5930
DOI
10.1109/CICC53496.2022.9772808
URI
http://hdl.handle.net/10203/299749
Appears in Collection
EE-Conference Papers (Conference Papers)