A low-latency multi-FPGA appliance for accelerating transformer-based text generation

Transformer is a deep learning language model widely used for natural language processing (NLP) services in datacenters. Among transformer models, the Generative Pre-trained Transformer (GPT) has achieved remarkable performance in text generation, or natural language generation (NLG), which requires processing a large input context in the summarization stage, followed by a generation stage that produces one word at a time. Conventional platforms such as GPUs are specialized for the parallel processing of large inputs in the summarization stage, but their performance degrades significantly in the generation stage because of its sequential nature. An efficient hardware platform is therefore needed to address the high latency caused by this sequential characteristic of text generation. In this paper, we present DFX, a multi-FPGA acceleration appliance that executes GPT-2 model inference end-to-end with low latency and high throughput in both the summarization and generation stages. It uses model parallelism and an optimized, model-and-hardware-aware dataflow for fast simultaneous workload execution across devices. Its compute cores operate on custom instructions and support the full set of GPT-2 operations end-to-end. We implement the proposed hardware architecture on four Xilinx Alveo U280 FPGAs, utilizing all channels of the high-bandwidth memory (HBM) and the maximum number of compute resources for high hardware efficiency. DFX achieves a 5.58x speedup and 3.99x higher energy efficiency over four NVIDIA V100 GPUs on the modern GPT-2 model. DFX is also 8.21x more cost-effective than the GPU appliance, making it a promising solution for text generation workloads in cloud datacenters.
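The two-stage structure described above is what makes generation latency-bound: the summarization (prefill) stage sees the whole input at once and parallelizes well, while the generation stage emits one token per step, each step depending on the previous output. A minimal sketch of that control flow, with a toy stand-in for the transformer forward pass (this is an illustration of the workload shape, not the DFX or GPT-2 implementation):

```python
# Toy illustration of the two inference stages described in the abstract.
# `run_model` is a hypothetical stand-in for one transformer forward pass
# over the full token context; the real model would run attention here.

def run_model(tokens):
    # Placeholder "next token" computed from the whole context.
    return sum(tokens) % 100

def generate(context, n_new):
    # Summarization (prefill) stage: the entire input context is processed
    # in one pass -- a large, parallel-friendly workload where GPUs excel.
    tokens = list(context)
    next_tok = run_model(tokens)

    # Generation stage: one token per step, and each step needs the token
    # produced by the previous step. This loop cannot be parallelized
    # across steps, so per-step latency dominates end-to-end latency --
    # the bottleneck the thesis targets.
    for _ in range(n_new):
        tokens.append(next_tok)
        next_tok = run_model(tokens)
    return tokens[len(context):]

new_tokens = generate([1, 2, 3], 4)
```

The point of the sketch is the dependency structure: the prefill pass touches `len(context)` tokens at once, while the loop performs `n_new` strictly serial model invocations, which is why a low-latency-per-step accelerator pays off in the generation stage.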
Advisors
김주영
Description
Korea Advanced Institute of Science and Technology: School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2023
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2023.8, [iv, 39 p.]

Keywords

Machine learning; Natural language processing; Generative pre-trained transformer; Multi-FPGA system; Hardware inference accelerator

URI
http://hdl.handle.net/10203/320703
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045934&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
