Self-managed DRAM architecture to optimize capacity and energy efficiency

Main memory is an essential component for maximizing performance in computing environments such as data centers, cloud systems, and mobile devices. DRAM in particular is widely used as main memory because of its lower cost compared to SRAM and its higher performance compared to HDDs or NAND flash memories. As demand for memory capacity grows with programs such as big data analytics and in-memory databases, predicting a server's peak memory usage has become difficult. At the same time, average memory utilization is around 50%, so unused server memory consumes unnecessary power. Deciding between expanding capacity and saving power has therefore become a predicament for server systems. In this dissertation, we propose self-managed memory mechanisms that utilize compression components: when additional capacity is needed, the compression components expand capacity; when it is not, the components reduce energy instead.

This dissertation first proposes a capacity expansion mechanism based on memory compression. Unlike prior work, which primarily focuses on either capacity or decompression latency, the proposed mechanism increases capacity without increasing latency. However, expanding capacity by compressing data regardless of the actual need for more capacity incurs energy consumption. We therefore also propose a memory energy saving mechanism that reduces the number of refresh operations. Refresh operations consume a growing portion of DRAM power as system DRAM capacity increases. To reduce this power, the dissertation proposes a novel value-aware refresh reduction technique that exploits the abundance of zero values in memory contents. The proposed refresh architecture transforms the values and the mapping of DRAM data to increase the number of consecutive zero values, and skips refresh operations entirely for rows that contain only zero values. The mechanism converts memory blocks into base and delta values, inspired by a prior compression technique. Once the values are converted, the bits are transposed to place consecutive zeros at the refresh granularity. By reducing the number of refresh operations, we achieve both energy savings and performance gains.
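The abstract describes the value-aware refresh transformation only at a high level. The following is a minimal illustrative sketch in Python, not the dissertation's actual hardware design: it assumes 64-byte blocks of eight 64-bit words, a simple base+delta encoding with the first word as the base, and a hypothetical refresh granularity of one bit-transposed row per block; storing and refreshing the base value itself is omitted.

# Illustrative sketch of the value-aware refresh idea (assumptions noted above).
BLOCK_WORDS = 8      # 64-byte block = eight 64-bit words (assumption)
WORD_BITS = 64
MASK = (1 << WORD_BITS) - 1

def to_base_delta(words):
    # Base+delta conversion inspired by base-delta-style compression:
    # keep the first word as the base and encode the others as deltas.
    base = words[0]
    deltas = [(w - base) & MASK for w in words]
    return base, deltas

def transpose_bits(values):
    # Bit transposition: transposed row i collects bit i of every delta,
    # so the shared high-order zero bits of small deltas become rows that
    # are zero from end to end.
    rows = [0] * WORD_BITS
    for col, v in enumerate(values):
        for bit in range(WORD_BITS):
            if (v >> bit) & 1:
                rows[bit] |= 1 << col
    return rows

def rows_needing_refresh(block_words):
    # A transposed row that is entirely zero can skip refresh.
    _, deltas = to_base_delta(block_words)
    return [i for i, row in enumerate(transpose_bits(deltas)) if row != 0]

if __name__ == "__main__":
    # Pointer-like values with a large shared base and small offsets.
    block = [0x00007FFD00001000 + 16 * i for i in range(BLOCK_WORDS)]
    live = rows_needing_refresh(block)
    print(f"{len(live)} of {WORD_BITS} transposed rows still need refresh")

In this toy example only 3 of the 64 transposed rows carry non-zero bits; this kind of skew toward all-zero rows is what the proposed refresh-skipping exploits.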
Advisors
Huh, Jaehyuk
Description
Korea Advanced Institute of Science and Technology : School of Computing
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2018
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology : School of Computing, 2018.8, [vi, 76 p.]

Keywords

Memory compression; dual compression technique; OS transparency; locality awareness; memory refresh management; value-based refresh

URI
http://hdl.handle.net/10203/265356
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=828233&flag=dissertation
Appears in Collection
CS-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
