A partitioned on-chip virtual cache for fast processors

In this paper, we propose a new virtual cache architecture that reduces memory latency by combining the merits of both direct-mapped and set-associative caches. The entire cache memory is divided into n banks, and the operating system assigns one of the banks to a process when it is created. Each process then runs on its assigned bank, which behaves like a direct-mapped cache. If a cache miss occurs in the active home bank, the data is fetched either from the other banks, as in a set-associative cache, or from main memory. A victim for cache replacement is selected from the blocks that belong to the process most remote from being scheduled. Trace-driven simulations confirm that the new scheme removes almost as many conflict misses as a set-associative cache, while its access time remains close to that of a direct-mapped cache.
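To make the lookup and replacement policy concrete, below is a minimal trace-driven sketch in Python of how such a partitioned virtual cache could be simulated. The class name, bank count, line geometry, and the per-process "scheduling distance" numbers used to pick a victim are illustrative assumptions, not parameters taken from the paper; the sketch also leaves the line in its remote bank on a remote hit rather than modeling any migration policy.

```python
# A minimal sketch of the partitioned virtual cache described in the abstract.
# Bank count, line geometry, and scheduling distances are assumed values.

class PartitionedVirtualCache:
    def __init__(self, n_banks=4, lines_per_bank=256, line_size=32):
        self.n_banks = n_banks
        self.lines_per_bank = lines_per_bank
        self.line_size = line_size
        # each bank is a direct-mapped array of (tag, pid) entries
        self.banks = [[None] * lines_per_bank for _ in range(n_banks)]
        self.home_bank = {}        # pid -> bank assigned at process creation
        self.sched_distance = {}   # pid -> assumed distance until next scheduled

    def create_process(self, pid, sched_distance):
        # the operating system assigns one bank to the process when it is created
        self.home_bank[pid] = len(self.home_bank) % self.n_banks
        self.sched_distance[pid] = sched_distance

    def access(self, pid, vaddr):
        index = (vaddr // self.line_size) % self.lines_per_bank
        tag = vaddr // (self.line_size * self.lines_per_bank)
        home = self.home_bank[pid]

        # 1. fast path: the home bank is probed like a direct-mapped cache
        if self.banks[home][index] == (tag, pid):
            return "home hit"

        # 2. slow path: probe the remaining banks like a set-associative search
        for b in range(self.n_banks):
            if b != home and self.banks[b][index] == (tag, pid):
                return "remote hit"

        # 3. miss: fetch from main memory; the victim at this index is the block
        #    owned by the process most remote from being scheduled (empty first)
        def victim_key(b):
            entry = self.banks[b][index]
            if entry is None:
                return float("inf")
            return self.sched_distance.get(entry[1], 0)

        victim = max(range(self.n_banks), key=victim_key)
        self.banks[victim][index] = (tag, pid)
        return "miss"


if __name__ == "__main__":
    cache = PartitionedVirtualCache()
    cache.create_process(pid=1, sched_distance=0)   # currently running
    cache.create_process(pid=2, sched_distance=3)   # scheduled far in the future
    print(cache.access(1, 0x1000))   # miss (cold)
    print(cache.access(1, 0x1000))   # home hit
```

The point of the sketch is the two-step probe: the common case costs only the single direct-mapped lookup in the home bank, and the wider set-associative-style search across banks is paid only after a home-bank miss.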
Publisher
ELSEVIER SCIENCE BV
Issue Date
1997-05
Language
English
Article Type
Article
Keywords
PERFORMANCE
Citation
JOURNAL OF SYSTEMS ARCHITECTURE, v.43, no.8, pp.519 - 531
ISSN
1383-7621
DOI
10.1016/S1383-7621(96)00123-3
URI
http://hdl.handle.net/10203/69002