Hardware and algorithm co-optimization for graph neural network acceleration

DC Field / Value
dc.contributor.advisor: 김이섭
dc.contributor.author: Han, Yunki
dc.contributor.author: 한윤기
dc.date.accessioned: 2024-07-25T19:30:27Z
dc.date.available: 2024-07-25T19:30:27Z
dc.date.issued: 2021
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1044994&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/320449
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering, 2021.2, [vi, 33 p.]
dc.description.abstract: As Graph Neural Networks (GNNs) have emerged as a mainstream algorithm in many research areas, designing a GNN accelerator has become a new challenge. Compared to previous DNNs, GNNs combine the characteristics of both graph processing and neural networks. In this thesis, we analyze the workload of GNNs executed on existing hardware platforms and identify intensive DRAM access as the main bottleneck for accelerating GNNs. Focusing on this problem, we propose a series of schemes, on both the hardware and the algorithm side, to reduce DRAM access. Our work achieves average speedups of 3.67x, 5.36x, and 3.49x over the GPU on GCN, SAGE, and GAT, respectively.
dc.language: eng
dc.publisher: 한국과학기술원 (Korea Advanced Institute of Science and Technology)
dc.subject: Graph Neural Network; Hardware accelerator; AI accelerator design; Hardware and Algorithm co-optimization; Parallel processing
dc.title: Hardware and algorithm co-optimization for graph neural network acceleration
dc.title.alternative: 그래프 신경망 가속을 위한 하드웨어 및 알고리즘 최적화
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering
dc.contributor.alternativeauthor: Kim, Lee-Sup
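
Note on the abstract above: the DRAM-access bottleneck it identifies is commonly associated with the neighbor-aggregation phase of a GNN layer, whose memory accesses follow the graph structure rather than a dense, regular pattern. The NumPy sketch below is purely illustrative and is not taken from the thesis (no files are attached to this record); all sizes and names (num_nodes, feat_dim, edges, W) are assumptions chosen for the example.

    import numpy as np

    # Illustrative GCN-style layer, not the thesis's accelerator design.
    num_nodes, feat_dim, out_dim = 1000, 64, 32
    rng = np.random.default_rng(0)

    # Node features, a random edge list (src -> dst), and a weight matrix.
    H = rng.standard_normal((num_nodes, feat_dim)).astype(np.float32)
    edges = rng.integers(0, num_nodes, size=(10_000, 2))
    W = rng.standard_normal((feat_dim, out_dim)).astype(np.float32)

    # Aggregation: each destination node sums its neighbors' feature vectors.
    # The gather H[src] and the scatter-add into agg follow the irregular
    # graph structure, so data reuse is poor -- this phase is DRAM-intensive.
    agg = np.zeros_like(H)
    src, dst = edges[:, 0], edges[:, 1]
    np.add.at(agg, dst, H[src])

    # Combination: a dense matrix multiply plus ReLU, which is compute-bound
    # and maps well onto conventional DNN accelerators.
    out = np.maximum(agg @ W, 0.0)
    print(out.shape)  # (1000, 32)
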
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
