Redesigning hardware and software stacks for terabyte-scale memory systems = 테라바이트급 메모리 시스템 구축을 위한 하드웨어 및 소프트웨어 재설계 연구

The emergence of large-scale machine learning models and recommendation systems motivates the need for greater memory capacity. However, DRAM scaling is not keeping pace with this growing demand, so designing a memory system that overcomes DRAM's capacity limit has become a critical problem. This dissertation defines such a system as a terabyte-scale memory system and proposes software and hardware solutions for building a secure and efficient terabyte-scale memory system, focusing on performance, memory utilization, and security.

The first chapter concentrates on page migration policies in tiered memory systems. A tiered memory system is one in which multiple types of memory coexist, for example DRAM and NVM. Because NVMs offer larger capacity with high-density memory cells, a tiered memory system can overcome the capacity limit of DRAM, but NVMs suffer from longer access latency than DRAM. A page migration policy can mitigate this performance penalty by migrating performance-critical pages to DRAM. Migration policies differ in how they identify the performance importance of a page, and this study finds that workloads have diverse preferences among the policies. It attributes these preferences to the memory access patterns of the workloads. Finally, an adaptive page migration policy is proposed that selects a policy based on features representing those preferences.

The second chapter reduces wasted memory in a transparent memory compression architecture in which compression is performed by the memory controller. Such a system has a hardware address space in addition to the physical address space: the operating system transparently stores data in the physical address space, and the memory controller compresses that data at block granularity and stores the compressed blocks in the hardware address space. This study finds that block-unit compression and its mappings waste memory through internal fragmentation and metadata overhead. It proposes \lowmeta, which reduces the wasted memory with a novel data layout that limits the addressable range of translation entries.

The third chapter proposes a secure memory disaggregation engine that enhances the security of disaggregated memory systems. Memory disaggregation has been studied for decades as a way to expand memory capacity. While it increases the memory capacity available to a node, it has a critical limitation: every node must trust all participating nodes, which is a strong assumption. When that assumption does not hold, a security risk on one node can propagate to the other nodes and jeopardize the whole system. This study identifies the large trusted domain as the root cause and proposes narrowing the trusted domain down to a small set of hardware. The proposed secure memory disaggregation engine is built on secure FPGAs.
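The adaptive policy described in the first chapter selects among candidate migration policies using features of a workload's memory access pattern. The sketch below is a minimal, hypothetical illustration of that idea; the feature names, thresholds, and candidate policy names are assumptions for illustration, not the dissertation's actual design.

```python
# Hypothetical sketch: pick a page migration policy from access-pattern
# features. The features, thresholds, and policy names are illustrative only.
from dataclasses import dataclass

@dataclass
class AccessPatternFeatures:
    reuse_distance: float   # average reuse distance of hot pages (assumed feature)
    hot_page_ratio: float   # fraction of accesses hitting a small hot set (assumed feature)

def choose_policy(f: AccessPatternFeatures) -> str:
    """Return the name of the migration policy expected to fit this workload."""
    if f.hot_page_ratio > 0.8 and f.reuse_distance < 1e4:
        # A small, frequently reused hot set: frequency-based promotion works well.
        return "frequency-based"
    if f.reuse_distance < 1e5:
        # Moderate reuse: recency-based promotion catches the working set.
        return "recency-based"
    # Little reuse: aggressive migration mostly adds overhead, so stay conservative.
    return "conservative"

print(choose_policy(AccessPatternFeatures(reuse_distance=5e3, hot_page_ratio=0.9)))
```

Similarly, as a rough illustration of the internal fragmentation and metadata overhead targeted in the second chapter, the following sketch models a transparent block-unit compression scheme. The page size, block size, and translation-entry size are hypothetical values chosen for illustration, not figures from the dissertation.

```python
# Hypothetical model of block-unit transparent memory compression: pages in
# the physical address space are compressed and stored in the hardware
# address space at a fixed block granularity, and each page needs a
# translation entry mapping it to its compressed blocks.

PAGE_SIZE = 4096   # bytes per OS page (assumed)
BLOCK_SIZE = 512   # compression/storage granularity (assumed)
ENTRY_SIZE = 16    # bytes of translation metadata per page (assumed)

def hardware_footprint(compressed_bytes: int) -> dict:
    """Return the hardware-address-space cost of storing one compressed page."""
    # Compressed data occupies whole blocks, so the tail of the last block is
    # wasted: this is the internal fragmentation the study points out.
    blocks = -(-compressed_bytes // BLOCK_SIZE)   # ceiling division
    stored = blocks * BLOCK_SIZE
    return {
        "stored_bytes": stored,
        "fragmentation": stored - compressed_bytes,
        "metadata": ENTRY_SIZE,
        "total_overhead": (stored - compressed_bytes) + ENTRY_SIZE,
    }

if __name__ == "__main__":
    # A page that compresses to about 1.1 KB still occupies three 512 B blocks.
    for size in (1126, 2048, 4000):
        cost = hardware_footprint(size)
        saved = PAGE_SIZE - cost["stored_bytes"] - cost["metadata"]
        print(f"compressed to {size:>4} B -> stores {cost['stored_bytes']} B "
              f"(+{cost['metadata']} B metadata), wastes {cost['total_overhead']} B, "
              f"net saving vs. 4 KB page: {saved} B")
```

These numbers only illustrate why block-unit mapping wastes memory; a layout such as \lowmeta, which limits the addressable range of translation entries, aims to shrink both the fragmentation and metadata terms.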
Advisors
Huh, Jaehyuk (허재혁)
Description
Korea Advanced Institute of Science and Technology: School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2022
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology: School of Computing, 2022.2, [vi, 68 p.]

URI
http://hdl.handle.net/10203/309275
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=996361&flag=dissertation
Appears in Collection
CS-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
