Energy-Efficient DNN Training Processors on Micro-AI Systems

DC Field | Value | Language
dc.contributor.author | Han, Donghyeon | ko
dc.contributor.author | Kang, Sanghoon | ko
dc.contributor.author | Kim, Sangyeob | ko
dc.contributor.author | Lee, Juhyoung | ko
dc.contributor.author | Yoo, Hoi-Jun | ko
dc.date.accessioned | 2023-01-09T05:00:28Z | -
dc.date.available | 2023-01-09T05:00:28Z | -
dc.date.created | 2023-01-09 | -
dc.date.issued | 2022-11 | -
dc.identifier.citation | IEEE Open Journal of the Solid-State Circuits Society, v.2, pp.259 - 275 | -
dc.identifier.uri | http://hdl.handle.net/10203/304143 | -
dc.description.abstract | Many edge/mobile devices can now utilize deep neural networks (DNNs) thanks to the development of mobile DNN accelerators, which overcame the constraints of limited computing resources and battery capacity by realizing energy-efficient inference. However, inference-only operation is passive, making it difficult for a DNN to actively adapt to individual users or its service environment. On-chip training is therefore becoming increasingly important to enable active interaction between DNN processors and ever-changing surroundings or conditions. Despite these advantages, DNN training imposes more constraints than inference, so it was long considered impractical on mobile/edge devices. Recently, many attempts have been made to realize mobile DNN training, and this article summarizes a number of prior works. First, it lays out the new challenges that training functionality introduces to DNN accelerators and discusses the hardware features that address them. Second, it explains algorithm-hardware co-optimization methods and why they have become mainstream in mobile DNN training research. Third, it compares the main differences between conventional inference accelerators and recent training processors. Finally, it concludes by proposing future directions for DNN training processors in micro-AI systems. | -
dc.language | English | -
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | -
dc.title | Energy-Efficient DNN Training Processors on Micro-AI Systems | -
dc.type | Article | -
dc.type.rims | ART | -
dc.citation.volume | 2 | -
dc.citation.beginningpage | 259 | -
dc.citation.endingpage | 275 | -
dc.citation.publicationname | IEEE Open Journal of the Solid-State Circuits Society | -
dc.identifier.doi | 10.1109/OJSSCS.2022.3219034 | -
dc.contributor.localauthor | Yoo, Hoi-Jun | -
dc.contributor.nonIdAuthor | Kang, Sanghoon | -
dc.description.isOpenAccess | N | -
Appears in Collection: EE-Journal Papers (저널논문)
Files in This Item: There are no files associated with this item.
