DRAM-based In-memory-computing for high-density and high energy-efficiency AI accelerator

DC Field / Value
dc.contributor.advisor: 유회준
dc.contributor.author: Kim, Sangjin
dc.contributor.author: 김상진
dc.date.accessioned: 2024-08-08T19:31:39Z
dc.date.available: 2024-08-08T19:31:39Z
dc.date.issued: 2024
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1100074&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/322168
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering, 2024.2, [v, 119 p.]
dc.description.abstract: This thesis presents research on DRAM-based in-memory computing (IMC) for achieving higher density and efficiency in artificial intelligence (AI) accelerators. IMC has recently been adopted to reach energy efficiency and throughput beyond what digital implementations offer, but most prior work relies on SRAM-based IMC, which limits density. We propose DRAM-based IMC methods, realized in two AI accelerator chips, that achieve higher density and efficiency than existing digital accelerators and SRAM-IMC. The first chip, DynaPlasia, introduces new solutions at the memory-cell, cell-array, and architecture levels. At the memory-cell level, a new computation method reduces the impact of leakage current, improving efficiency and accuracy. At the cell-array level, a numerical representation that reduces computing-logic switching and a hierarchical in-memory analog-to-digital converter (ADC) further improve computation efficiency. At the architecture level, the processor is dynamically reconfigured to operate with higher efficiency and throughput, without wasting resources, across the varied structures of real AI workloads. The second chip, Scaling-CIM, adds cell-array and algorithm-level optimizations on top of DynaPlasia's cell to reduce the analog-to-digital conversion burden. By exploiting the characteristics of the partial-sum distribution in multi-bit operations, it reduces the number of ADC bits and conversions required for analog computing; to this end, we propose a cell-array hardware structure that can adjust the conversion scale. At the algorithm level, the conversion scale is controlled according to the characteristics of each layer of the AI model. In real deep-neural-network benchmarks, the two proposed DRAM-based IMC accelerators achieved higher throughput and energy efficiency than existing AI accelerators.
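The conversion-scale idea summarized in the abstract can be illustrated with a small numerical sketch (an illustrative model, not the thesis implementation; all names and parameters here are hypothetical): because partial sums of multi-bit in-memory MAC operations concentrate in a narrow band of the worst-case range, an ADC whose scale is matched to the observed distribution can use fewer bits at similar quantization error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical in-memory MAC column: 256 binary cells, so the worst-case
# partial sum spans [0, 256], but actual sums cluster near the mean.
n_cells = 256
partial_sums = rng.binomial(n_cells, 0.5, size=10000).astype(float)

def adc(values, n_bits, full_scale):
    """Uniform ADC model: quantize onto 2**n_bits levels over
    [0, full_scale], clipping anything outside that range."""
    levels = 2 ** n_bits - 1
    step = full_scale / levels
    return np.clip(np.round(values / step), 0, levels) * step

# Naive design: cover the full worst-case range with an 8-bit ADC.
naive = adc(partial_sums, n_bits=8, full_scale=n_cells)

# Scaled design: cover only the likely range (mean +/- 4 sigma) with a
# 6-bit ADC -- two fewer bits per conversion at comparable error.
lo = partial_sums.mean() - 4 * partial_sums.std()
scaled = adc(partial_sums - lo, n_bits=6,
             full_scale=8 * partial_sums.std()) + lo

err_naive = np.abs(naive - partial_sums).mean()
err_scaled = np.abs(scaled - partial_sums).mean()
print(f"8-bit full-range ADC mean error:   {err_naive:.3f}")
print(f"6-bit scaled-range ADC mean error: {err_scaled:.3f}")
```

In this toy model the scaled 6-bit conversion matches the 8-bit full-range error closely, since the quantization step sizes are nearly equal over the occupied range; the thesis additionally adapts the scale per layer, which this sketch does not model.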
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: 인-메모리 컴퓨팅; 프로세싱-인-메모리; DRAM; 인공지능 가속기; 심층신경망
dc.subject: In-memory computing; Processing-in-memory; DRAM; AI accelerator; Neural network
dc.title: DRAM-based In-memory-computing for high-density and high energy-efficiency AI accelerator
dc.title.alternative: 고집적도 및 고효율의 인공지능 가속기를 위한 DRAM 기반 인-메모리 컴퓨팅
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering
dc.contributor.alternativeauthor: Yoo, Hoi-Jun
Appears in Collection
EE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
