Virtual cache architectures for reducing memory access latencies

A cache memory system reduces memory access latencies by storing frequently used data in a fast storage whose size is usually much smaller than that of the main memory. As processor speeds increase, it becomes more difficult to design a cache memory that can satisfy the memory requests generated by a fast processor. Addressing the cache memory with virtual addresses effectively reduces the cache access time by eliminating the time needed for address translation. The focus of this thesis is to propose virtual cache architectures that reduce memory access latencies under diverse system architectures, i.e., on fast single-processor systems and on shared-memory multiprocessors. To this end, we investigate the design issues and analyze the performance problems of virtual caches for those architectures, and we develop three virtual cache schemes to overcome the problems that constrain performance.

First, because of their fast access time, on-chip direct-mapped virtual caches are popular in fast single-processor systems. A direct-mapped cache takes less time to access data than a set-associative cache because no time is needed to select a cache line within a set. The hit ratio of a direct-mapped cache, however, is lower because of the conflict misses caused by mapping multiple addresses to the same cache line. To solve this problem, we propose a new virtual cache architecture whose access time is almost the same as that of a direct-mapped cache while its hit ratio matches that of a set-associative cache. The entire cache memory is divided into n banks, and each process is assigned to a bank. Each process then runs on its assigned bank, so the cache behaves like a direct-mapped cache. A victim for cache replacement is selected from the lines that belong to the process that is most remote from being scheduled. Trace-driven simulations confirm that the new scheme removes a...
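The bank-partitioned scheme sketched in the abstract can be illustrated with a small simulator. The C sketch below is only one interpretation of the description above, not the thesis's actual design: the cache is split into N_BANKS direct-mapped banks, each running process is given one bank so its lookups cost the same as a plain direct-mapped cache, and when a process needs a bank the one reclaimed belongs to the process farthest from being scheduled again. All identifiers (sched_distance, bank_of, the round-robin scheduling model, and the sizing constants) are assumptions made for this example.

```c
/* Illustrative sketch of a bank-partitioned virtual cache (assumptions
 * noted in the surrounding text; not the thesis's exact design). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define N_BANKS   4
#define N_SETS    256
#define LINE_BITS 6                            /* 64-byte cache lines     */

typedef struct { bool valid; uint32_t tag; } line_t;

static line_t cache[N_BANKS][N_SETS];
static int    bank_owner[N_BANKS] = { -1, -1, -1, -1 };

/* Assumed scheduler model: round-robin distance until `pid` runs again. */
static int sched_distance(int pid, int current, int nprocs)
{
    return (pid - current + nprocs) % nprocs;
}

/* Find (or reclaim) the bank assigned to `pid`. */
static int bank_of(int pid, int current, int nprocs)
{
    int victim = 0, worst = -1;
    for (int b = 0; b < N_BANKS; b++) {
        if (bank_owner[b] == pid)
            return b;                          /* already has a bank      */
        int d = (bank_owner[b] < 0) ? nprocs   /* prefer free banks       */
              : sched_distance(bank_owner[b], current, nprocs);
        if (d > worst) { worst = d; victim = b; }
    }
    /* Reclaim the bank of the process farthest from being scheduled.     */
    for (int s = 0; s < N_SETS; s++)
        cache[victim][s].valid = false;
    bank_owner[victim] = pid;
    return victim;
}

/* Direct-mapped lookup inside the process's own bank. */
static bool access_cache(uint32_t vaddr, int pid, int current, int nprocs)
{
    int      bank = bank_of(pid, current, nprocs);
    uint32_t idx  = (vaddr >> LINE_BITS) % N_SETS;
    uint32_t tag  =  vaddr >> LINE_BITS;
    line_t  *l    = &cache[bank][idx];

    if (l->valid && l->tag == tag)
        return true;                           /* fast direct-mapped hit  */
    l->valid = true;                           /* miss: fill the line     */
    l->tag   = tag;
    return false;
}

int main(void)
{
    /* Tiny trace: two processes whose addresses map to the same sets and
     * would conflict in one shared direct-mapped cache, but coexist here
     * because each process works in its own bank.                        */
    int hits = 0, total = 2000;
    for (int t = 0; t < total; t++) {
        int pid = t & 1;                       /* alternate two processes */
        uint32_t line = (uint32_t)((t / 2) % 128) + (uint32_t)pid * N_SETS;
        hits += access_cache(line << LINE_BITS, pid, pid, 2);
    }
    printf("hit ratio: %.3f\n", (double)hits / total);
    return 0;
}
```

Under these assumptions the hit path probes exactly one line, as in a direct-mapped cache, while cross-process conflict misses are avoided because replacement pressure falls on the bank of the process least likely to run soon.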
Advisors
Lee, Joon-Won (이준원)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
1998
Identifier
134784/325007 / 000949046
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology : Department of Computer Science, 1998.2, [ix, 118 p.]

Keywords

Process scheduling; Memory access latencies; Virtual cache; Multiprogramming

URI
http://hdl.handle.net/10203/33106
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=134784&flag=dissertation
Appears in Collection
CS-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
