Efficient low-power cache architectures for embedded systems

In order to tackle the memory wall problem, on-chip cache memories play an important role in resource-constrained embedded systems by filtering out most off-chip memory accesses. However, they occupy a large fraction of the processor area and consume up to 45% of processor power. As silicon process technology continues to scale, on-chip caches keep growing and will consume even more energy. To address this increasingly important problem, this dissertation studies low-power cache architectures that can significantly reduce the overall energy consumption of embedded processors.

The first design focuses on the L0 data cache, whose small capacity makes miss costs a central concern. We introduce the filter data cache, which is added to the cache memory hierarchy as an L0 data cache. It enhances three aspects of cache operation: cache miss prediction for bypassing, selective cache block allocation, and elimination of tag comparisons for write-back operations. If a memory request is predicted to miss in the filter data cache, the filter data cache is bypassed and the L1 data cache is accessed directly. Data read from the L1 data cache is not allocated in the filter data cache when allocation would not be beneficial. Write-back energy to the L1 data cache is reduced by eliminating tag comparisons: the filter data cache stores the L1 way number of each block. We demonstrate that the filter data cache significantly reduces the energy consumption of data caches compared with competitive L0 caches, with small area and leakage-power overheads and no performance loss.

Next, this dissertation exploits the locality of write operations and introduces a cache design, the write buffer-oriented cache, to achieve energy efficiency. We observe that, under a write-through policy, write operations are very likely to be merged in the write buffer because of their high locality.
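Before turning to the write buffer, the filter data cache's miss-prediction bypass can be sketched as follows. This is a minimal illustrative model, not the dissertation's implementation: all class, method, and parameter names are ours, and the predictor is deliberately naive (PC-indexed, predict-miss-after-one-miss) to keep the sketch short.

```python
# Minimal model of a filter (L0) data cache with miss prediction and bypass.
# Direct-mapped L0 in front of an L1; sizes and the predictor are assumptions.

class FilterDataCache:
    def __init__(self, num_lines=16, line_size=32):
        self.line_size = line_size
        self.num_lines = num_lines
        self.tags = [None] * num_lines    # L0 tag array (direct-mapped)
        self.l1_way = [None] * num_lines  # remembered L1 way, so write-backs
                                          # to the L1 need no tag comparison
        self.predict_miss = set()         # PCs whose accesses are predicted
                                          # to miss in the filter cache

    def _split(self, addr):
        line = addr // self.line_size
        return line % self.num_lines, line // self.num_lines

    def access(self, pc, addr, l1_hit_way=0):
        """Return 'L0' on a filter-cache hit, 'L1' when bypassed or missed."""
        if pc in self.predict_miss:
            return 'L1'                   # bypass: access the L1 directly
        idx, tag = self._split(addr)
        if self.tags[idx] == tag:
            return 'L0'                   # filter-cache hit: L1 untouched
        # Miss: train the predictor and allocate, recording the L1 way number
        # (a selective allocation policy would skip this when not beneficial).
        self.predict_miss.add(pc)
        self.tags[idx] = tag
        self.l1_way[idx] = l1_hit_way
        return 'L1'
```

With this sketch, a second access to the same line from a different PC hits in the L0, while a PC that has missed once is bypassed thereafter; a real design would use saturating counters rather than this one-shot set.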
The proposed write buffer-oriented cache architecture employs two schemes. First, write operations update the write buffer but not the L1 data cache; the L1 data cache is updated later by the write buffer, after the write operations have been merged. Write merging significantly reduces write accesses to the data cache and, consequently, energy consumption. Second, we further reduce energy consumption in the write buffer by filtering out unnecessary read accesses to it with a read hit predictor. We also show that the proposed write buffer-oriented cache architecture is applicable to conventional embedded processors that support both write-through and write-back policies.

This dissertation also studies tag comparison. Conventional cache tag matching identifies requested data by address, but address-based tagging is inefficient because it uses unnecessarily many tag bits. Previous studies show that TLB index-based tagging (TLBIT) can be used in instruction caches: thanks to spatial locality, only a few distinct tags are live at any moment, and those tags are already captured by the TLB. A TLB index is added to each TLB entry and is used as the tag in the cache to identify requested data. TLBIT reduces the number of required tag bits (i.e., the tag array size), thereby reducing cache energy consumption and area. However, directly adopting TLBIT is not effective for data caches, because the cache line searches and invalidations it triggers incur large performance and energy overheads. To realize the true potential of TLBIT, we propose three novel techniques: search zone, c-LRU, and TLB buffer. The search zone reduces unnecessary cache line searches, c-LRU reduces cache line invalidations, and the TLB buffer prevents immediate cache line invalidations on TLB misses.
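The core idea of TLBIT, replacing a long address tag with a small TLB entry index, can be sketched as below. The sizes and names are our assumptions, and the search zone, c-LRU, and TLB buffer refinements are omitted; the comment in `tlb_index` marks where their overheads arise.

```python
# Illustrative sketch of TLB index-based tagging (TLBIT): cache lines store a
# small TLB entry index instead of a long address tag. Parameters are ours.
from math import log2

TLB_ENTRIES = 16
PAGE_SIZE = 4096
LINE_SIZE = 32
NUM_SETS = 256

# Tag width comparison for a 32-bit address with the geometry above:
conventional_tag_bits = 32 - int(log2(NUM_SETS)) - int(log2(LINE_SIZE))  # 19
tlbit_tag_bits = int(log2(TLB_ENTRIES))                                  # 4

class TLBITCache:
    def __init__(self):
        self.tlb = {}    # virtual page number -> TLB entry index
        self.lines = {}  # cache set index -> TLB index stored as the tag

    def tlb_index(self, vaddr):
        vpn = vaddr // PAGE_SIZE
        if vpn not in self.tlb:
            # On a real TLB miss, lines tagged with the evicted index must be
            # searched for and invalidated; this is the overhead the search
            # zone and TLB buffer techniques target.
            self.tlb[vpn] = len(self.tlb) % TLB_ENTRIES
        return self.tlb[vpn]

    def fill(self, vaddr):
        self.lines[(vaddr // LINE_SIZE) % NUM_SETS] = self.tlb_index(vaddr)

    def lookup(self, vaddr):
        set_idx = (vaddr // LINE_SIZE) % NUM_SETS
        return self.lines.get(set_idx) == self.tlb_index(vaddr)
```

A tag match is now a 4-bit comparison against the requester's TLB index rather than a 19-bit address comparison, which is where the tag-array energy and area savings come from.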
Moreover, we propose an adaptive physical address fetch scheme to achieve energy efficiency in the TLB. The proposed techniques reduce the energy consumption of the TLB and data caches with only a small impact on performance.
Advisors
Kim, Soontae (김순태)
Description
KAIST: Department of Computer Science
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2015
Identifier
325007
Language
eng
Description

Doctoral dissertation - KAIST: Department of Computer Science, 2015.2, [ix, 109 p.]

Keywords

Memory; Cache; Low-power

URI
http://hdl.handle.net/10203/222391
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=657597&flag=dissertation
Appears in Collection
CS-Theses_Ph.D. (Doctoral theses)
Files in This Item
There are no files associated with this item.
