DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kang, Myeonggu | ko |
dc.contributor.author | Shin, Hyein | ko |
dc.contributor.author | Kim, Lee-Sup | ko |
dc.date.accessioned | 2022-09-06T05:01:03Z | - |
dc.date.available | 2022-09-06T05:01:03Z | - |
dc.date.created | 2021-11-25 | - |
dc.date.issued | 2022-09 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, v.41, no.9, pp.3026 - 3039 | - |
dc.identifier.issn | 0278-0070 | - |
dc.identifier.uri | http://hdl.handle.net/10203/298378 | - |
dc.description.abstract | Transformer-based language models have become the de facto standard for various NLP applications owing to their superior algorithmic performance. Processing a transformer-based language model on a conventional accelerator induces the memory wall problem, and the ReRAM-based accelerator is a promising solution to this problem. However, due to the characteristics of the self-attention mechanism and of the ReRAM-based accelerator, a pipeline hazard arises when processing the transformer-based language model on the ReRAM-based accelerator, which greatly increases the overall execution time. In this paper, we propose a framework to resolve this hazard. First, we introduce the concept of window self-attention, which reduces the attention computation scope, based on an analysis of the properties of the self-attention mechanism. We then present a window-size search algorithm that finds an optimal set of window sizes for the target application and algorithmic performance. We also propose a hardware design that exploits the advantages of the algorithmic optimization on a general ReRAM-based accelerator. The proposed work successfully alleviates the hazard issue while maintaining algorithmic performance, leading to a 5.8× speedup over the provisioned baseline. It also delivers up to a 39.2× speedup and up to 643.2× higher energy efficiency compared with a GPU. (A minimal sketch of the windowed-attention idea follows this record.) | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | A Framework for Accelerating Transformer-based Language Model on ReRAM-based Architecture | - |
dc.type | Article | - |
dc.identifier.wosid | 000842062100023 | - |
dc.identifier.scopusid | 2-s2.0-85118231815 | - |
dc.type.rims | ART | - |
dc.citation.volume | 41 | - |
dc.citation.issue | 9 | - |
dc.citation.beginningpage | 3026 | - |
dc.citation.endingpage | 3039 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS | - |
dc.identifier.doi | 10.1109/TCAD.2021.3121264 | - |
dc.contributor.localauthor | Kim, Lee-Sup | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | BERT | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | ReRAM-based accelerator | - |
dc.subject.keywordAuthor | self-attention | - |
dc.subject.keywordAuthor | transformer-based language model | - |
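The abstract above describes window self-attention as a way to reduce the attention computation scope. The snippet below is only a minimal NumPy sketch of that general idea, assuming a simple scheme in which each token attends to a fixed window of preceding tokens; the paper's actual windowing definition, per-layer window-size search, and ReRAM mapping are not reproduced here, and the function name and parameters are illustrative.

```python
import numpy as np

def window_self_attention(Q, K, V, window):
    """Scaled dot-product self-attention restricted to a local window.

    Q, K, V: (seq_len, d) arrays. Each position i attends only to the
    `window` most recent positions (including itself), which shrinks the
    attention computation scope compared with full self-attention.
    """
    seq_len, d = Q.shape
    out = np.zeros_like(V)
    for i in range(seq_len):
        lo = max(0, i - window + 1)                 # start of the local window
        scores = Q[i] @ K[lo:i + 1].T / np.sqrt(d)  # scores over the window only
        weights = np.exp(scores - scores.max())     # numerically stable softmax
        weights = weights / weights.sum()
        out[i] = weights @ V[lo:i + 1]              # weighted sum of windowed values
    return out

# Illustrative usage with random activations.
rng = np.random.default_rng(0)
Q = rng.standard_normal((8, 16))
K = rng.standard_normal((8, 16))
V = rng.standard_normal((8, 16))
print(window_self_attention(Q, K, V, window=4).shape)  # (8, 16)
```

Restricting each query to a small window bounds how many previously produced key/value entries it must wait for, which is the property the paper exploits to reduce the pipeline hazard on the ReRAM-based accelerator.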