A Framework for Accelerating Transformer-based Language Model on ReRAM-based Architecture

Cited 5 times in Web of Science; cited 0 times in Scopus.
DC Field: Value (Language)
dc.contributor.author: Kang, Myeonggu (ko)
dc.contributor.author: Shin, Hyein (ko)
dc.contributor.author: Kim, Lee-Sup (ko)
dc.date.accessioned: 2022-09-06T05:01:03Z
dc.date.available: 2022-09-06T05:01:03Z
dc.date.created: 2021-11-25
dc.date.issued: 2022-09
dc.identifier.citation: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, v.41, no.9, pp.3026 - 3039
dc.identifier.issn: 0278-0070
dc.identifier.uri: http://hdl.handle.net/10203/298378
dc.description.abstract: Transformer-based language models have become the de facto standard for various NLP applications owing to their superior algorithmic performance. Processing a transformer-based language model on a conventional accelerator runs into the memory-wall problem, and the ReRAM-based accelerator is a promising solution to this problem. However, due to the characteristics of the self-attention mechanism and of the ReRAM-based accelerator, pipeline hazards arise when a transformer-based language model is processed on a ReRAM-based accelerator, greatly increasing the overall execution time. In this paper, we propose a framework to resolve this hazard issue. First, we propose the concept of window self-attention, which reduces the scope of the attention computation, based on an analysis of the properties of the self-attention mechanism. We then present a window-size search algorithm that finds an optimal set of window sizes for the target application and algorithmic-performance requirement. We also propose a hardware design that exploits the advantages of the algorithmic optimization on a general ReRAM-based accelerator. The proposed work successfully alleviates the hazard issue while maintaining algorithmic performance, leading to a 5.8× speedup over the provisioned baseline. It also delivers up to a 39.2× speedup and 643.2× higher energy efficiency over a GPU. (A minimal sketch of the window self-attention idea appears after the metadata fields below.)
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: A Framework for Accelerating Transformer-based Language Model on ReRAM-based Architecture
dc.type: Article
dc.identifier.wosid: 000842062100023
dc.identifier.scopusid: 2-s2.0-85118231815
dc.type.rims: ART
dc.citation.volume: 41
dc.citation.issue: 9
dc.citation.beginningpage: 3026
dc.citation.endingpage: 3039
dc.citation.publicationname: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS
dc.identifier.doi: 10.1109/TCAD.2021.3121264
dc.contributor.localauthor: Kim, Lee-Sup
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: BERT
dc.subject.keywordAuthor: deep learning
dc.subject.keywordAuthor: ReRAM-based accelerator
dc.subject.keywordAuthor: self-attention
dc.subject.keywordAuthor: transformer-based language model
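The abstract describes window self-attention, in which each token attends only to a fixed-size window of nearby tokens rather than the full sequence. The NumPy sketch below is a minimal illustration of that general idea under assumed tensor shapes and a hypothetical window size w; it is not the paper's window-size search algorithm or its ReRAM hardware mapping.

```python
import numpy as np

def windowed_self_attention(Q, K, V, w):
    """Q, K, V: (seq_len, d) arrays; w: number of most recent tokens each query may attend to."""
    seq_len, d = Q.shape
    out = np.zeros_like(V)
    for i in range(seq_len):
        lo = max(0, i - w + 1)                      # restrict keys/values to the last w tokens
        scores = Q[i] @ K[lo:i + 1].T / np.sqrt(d)  # scaled dot-product scores within the window
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                    # softmax over the window only
        out[i] = weights @ V[lo:i + 1]              # weighted sum of the windowed values
    return out

# Toy usage: 16 tokens, hidden size 8, window of 4 (all sizes are illustrative assumptions)
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((16, 8)) for _ in range(3))
print(windowed_self_attention(Q, K, V, w=4).shape)  # -> (16, 8)
```

Restricting attention to a window shrinks the per-token computation from the full sequence length to w, which is the property the paper exploits to reduce pipeline hazards on a ReRAM-based accelerator.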
Appears in Collection
EE-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.
