Hardware and algorithm co-optimization for graph neural network acceleration

As Graph Neural Networks (GNNs) have emerged as a mainstream algorithm in many research areas, designing a GNN accelerator has become a new challenge. Compared to conventional DNNs, GNNs combine the characteristics of both graph processing and neural networks. In this paper, we analyze the workload of GNNs executed on existing hardware platforms and identify intensive DRAM access as the main bottleneck for accelerating GNNs. Targeting this problem, we propose a series of schemes, on both the hardware and algorithm sides, to reduce DRAM access. Our work achieves average speedups of 3.67x, 5.36x, and 3.49x over the GPU on GCN, SAGE, and GAT, respectively.
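To make the two-sided workload concrete, the following minimal sketch (not taken from the thesis; all names are illustrative) shows one GCN-style layer split into its two phases: a graph-processing aggregation phase, whose irregular neighbor gathers drive the intensive DRAM traffic the abstract identifies, and a neural-network combination phase, which is a dense, cache-friendly matrix multiply.

```python
import numpy as np

def gcn_layer(edges, features, weight):
    """One simplified GCN layer (sum aggregation with self-loop, no normalization).

    edges:    list of (src, dst) index pairs
    features: [N, F] node feature matrix
    weight:   [F, F_out] dense weight matrix
    """
    agg = features.copy()  # self-loop: start from each node's own feature
    # Aggregation (graph-processing phase): irregular, data-dependent
    # gathers/scatters over node features -- the DRAM-intensive part.
    for src, dst in edges:
        agg[dst] += features[src]
    # Combination (neural-network phase): regular dense GEMM + ReLU.
    return np.maximum(agg @ weight, 0.0)

# Toy 3-node graph: 0 -> 1, 1 -> 2
edges = [(0, 1), (1, 2)]
feats = np.eye(3, dtype=np.float32)    # one-hot features, F = 3
w = np.ones((3, 2), dtype=np.float32)  # F_out = 2
out = gcn_layer(edges, feats, w)
print(out)  # node 1 has aggregated node 0's feature, node 2 has node 1's
```

Because the aggregation loop's memory addresses depend on graph connectivity, it exhibits the poor locality on conventional platforms that motivates the thesis's DRAM-access-reduction schemes.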
Advisors
김이섭
Description
Korea Advanced Institute of Science and Technology: School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2021
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2021.2, [vi, 33 p.]

Keywords

Graph Neural Network; Hardware accelerator; AI accelerator design; Hardware and Algorithm co-optimization; Parallel processing

URI
http://hdl.handle.net/10203/320449
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1044994&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
