DRAM-Based In-Memory Computing for High-Density and High-Energy-Efficiency AI Accelerators

This thesis covers research on DRAM-based in-memory computing (IMC) for higher density and energy efficiency in artificial intelligence (AI) accelerators. IMC has recently been used to push the energy efficiency and throughput of AI workloads beyond what digital implementations achieve, but most prior work relies on SRAM-based IMC, which limits density. We propose a DRAM-based IMC method, realized in two AI accelerator chips, that achieves higher density and efficiency than existing digital accelerators and SRAM-IMC.

The first chip, DynaPlasia, introduces solutions at the memory-cell, cell-array, and architecture levels. At the memory-cell level, a new computation method reduces the impact of leakage current, improving both efficiency and accuracy. At the cell-array level, a numerical representation that reduces computing-logic switching and a hierarchical in-memory analog-to-digital converter (ADC) further improve computation efficiency. At the architecture level, the processor is dynamically reconfigured so that it runs with higher efficiency and throughput, without wasting resources, across the varied structures of real AI tasks.

The second chip, Scaling-CIM, builds on DynaPlasia's cell and adds cell-array- and algorithm-level optimizations that reduce the analog-to-digital conversion burden. By exploiting the characteristics of the partial-sum distribution in multi-bit operations, it reduces the number of ADC bits and conversions required for analog computing; to this end, we propose a cell-array hardware structure whose conversion scale is adjustable. At the algorithm level, the conversion scale is controlled according to the characteristics of each layer in the AI model.

On real deep neural network benchmarks, the two proposed DRAM-based IMC accelerators achieve higher throughput and energy efficiency than existing AI accelerators.
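The conversion-scale idea attributed to Scaling-CIM above can be sketched numerically. The sketch below is a simplified behavioral model, not the thesis's circuit: all parameter values (256 cells per bitline, binary operands, a 5-bit ADC) and the function names are illustrative assumptions. It shows why partial sums of binary dot products occupy only a narrow slice of the worst-case range, so an ADC whose scale is matched to a layer's observed distribution quantizes with much lower error than one sized for the worst case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the thesis): 256 cells share
# one bitline, activations and weights are binary, and a 5-bit ADC
# digitizes each bitline's analog partial sum.
N_CELLS = 256
ADC_BITS = 5
LEVELS = 2 ** ADC_BITS  # 32 ADC codes

def adc_quantize(psum, scale, offset=0.0):
    """Model an ADC with an adjustable conversion scale (and offset).

    A larger `scale` maps more sum levels onto each ADC code: wider range,
    coarser precision. This is the knob the abstract describes being tuned
    per layer (a behavioral model only).
    """
    code = np.clip(np.round((psum - offset) / scale), 0, LEVELS - 1)
    return code * scale + offset

# Partial sums of binary dot products cluster tightly: for i.i.d. 0/1
# inputs and weights, each product is Bernoulli(1/4), so the sum is
# Binomial(256, 1/4) with mean 64 and std ~6.9 -- most of the worst-case
# [0, 256] range never occurs.
acts = rng.integers(0, 2, size=(10_000, N_CELLS))
wts = rng.integers(0, 2, size=N_CELLS)
psums = (acts @ wts).astype(float)

# Fixed scale sized for the worst-case range vs. a scale matched to the
# observed mean +/- 4-sigma spread of this layer's partial sums.
full_scale = N_CELLS / LEVELS                 # covers [0, 256]
matched_scale = 8 * psums.std() / LEVELS      # covers mean +/- 4 sigma
matched_offset = psums.mean() - (LEVELS // 2) * matched_scale

err_full = np.sqrt(np.mean((adc_quantize(psums, full_scale) - psums) ** 2))
err_matched = np.sqrt(np.mean(
    (adc_quantize(psums, matched_scale, matched_offset) - psums) ** 2))
print(f"RMS error, worst-case scale: {err_full:.2f}")
print(f"RMS error, matched scale:    {err_matched:.2f}")
```

With the same 5-bit ADC, the matched scale yields several times lower RMS quantization error, which is the motivation for adjusting the conversion scale per layer instead of sizing it for the worst case.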
Advisors
유회준 (Hoi-Jun Yoo)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2024
Identifier
325007
Language
eng
Description

Doctoral thesis (Ph.D.) - Korea Advanced Institute of Science and Technology: School of Electrical Engineering, 2024.2, [v, 119 p.]

Keywords

In-memory computing; Processing-in-memory; DRAM; AI accelerator; Neural network

URI
http://hdl.handle.net/10203/322168
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1100074&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Doctoral theses)
Files in This Item
There are no files associated with this item.
