(An) adaptive sequential prefetching scheme in shared-memory multiprocessors

Processor performance has increased dramatically over the past decade and has outpaced that of main memory. As a result, main memory access latency has become an obstacle to achieving high-performance computing. In large-scale multiprocessors with a general interconnection network, program execution time depends significantly on the shared-memory access latency, which consists of the memory access latency and the network latency. With the advent of very fast uniprocessors and massively parallel systems, the shared-memory access latency reaches tens to hundreds of processor cycles. Most of this latency comes from the large network latency incurred in traversing the processor-memory interconnect.

Caches are quite effective at reducing and hiding the main memory access latency in uniprocessor systems and the shared-memory access latency in shared-memory multiprocessors. However, the remaining cache miss penalty is still a serious bottleneck to high-performance computing. Prefetching is an attractive scheme that reduces the cache miss penalty by overlapping processor computation with data accesses. For multiprocessors in particular, the cache miss penalty can be decreased significantly by overlapping the network latency of a fetched block with those of prefetched blocks.

Many prefetching schemes based on software or hardware have been proposed. Software prefetching schemes perform static program analysis and explicitly insert prefetch instructions into the program code, which increases the program size. In contrast, hardware prefetching schemes control prefetch activity during program execution using hardware alone. Several hardware prefetching schemes prefetch blocks only when a regular access pattern is detected, which requires complex hardware. Prefetch on misses [29] is a simple hardware scheme, but it needs a miss to prefetch one block. Thus this scheme reduces the miss rate ...
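The title and keywords indicate that the scheme adjusts the sequential prefetching degree, i.e., the number of consecutive blocks prefetched on a miss, at run time. As a rough illustration of that general idea only, and not of the thesis's actual hardware design, the following C sketch raises or lowers a prefetch degree according to how many recently prefetched blocks were actually referenced; every identifier, counter, and threshold here (e.g. on_cache_miss, issue_prefetch, the 75%/50% cutoffs) is an assumption introduced for exposition.

#include <stdint.h>
#include <stdio.h>

/* Sketch of sequential prefetching with an adaptive degree.
 * All identifiers and thresholds are illustrative assumptions,
 * not the thesis's actual design. */

#define MIN_DEGREE 1
#define MAX_DEGREE 8
#define INTERVAL   64   /* prefetches issued between adaptations */

static unsigned prefetch_degree   = MIN_DEGREE; /* blocks prefetched per miss */
static unsigned issued_prefetches = 0;          /* prefetches issued this interval */
static unsigned useful_prefetches = 0;          /* prefetched blocks later referenced */

/* Stand-in for handing a prefetch request to the memory system. */
static void issue_prefetch(uint64_t block_addr)
{
    printf("prefetch block %llu\n", (unsigned long long)block_addr);
}

/* Called when the processor hits on a block that a prefetch brought in. */
void on_prefetched_block_hit(void)
{
    useful_prefetches++;
}

/* Called on a demand miss: prefetch the next prefetch_degree sequential
 * blocks, then periodically adapt the degree from recent usefulness. */
void on_cache_miss(uint64_t block_addr)
{
    for (unsigned i = 1; i <= prefetch_degree; i++) {
        issue_prefetch(block_addr + i);
        issued_prefetches++;
    }

    if (issued_prefetches >= INTERVAL) {
        if (4 * useful_prefetches >= 3 * issued_prefetches) {
            /* >= 75% of prefetches were used: prefetch more aggressively. */
            if (prefetch_degree < MAX_DEGREE) prefetch_degree *= 2;
        } else if (2 * useful_prefetches < issued_prefetches) {
            /* < 50% were used: back off to limit pollution and traffic. */
            if (prefetch_degree > MIN_DEGREE) prefetch_degree /= 2;
        }
        issued_prefetches = useful_prefetches = 0;
    }
}

int main(void)
{
    /* Toy usage: a purely sequential access stream keeps every prefetch
     * useful, so the degree ramps up toward MAX_DEGREE. */
    for (uint64_t b = 0; b < 256; b++) {
        on_cache_miss(b);
        for (unsigned i = 0; i < prefetch_degree; i++)
            on_prefetched_block_hit();
    }
    printf("final prefetch degree: %u\n", prefetch_degree);
    return 0;
}

In real hardware this adaptation would more plausibly use small saturating counters kept per cache or per sequential stream rather than global software counters; the sketch only conveys the feedback loop between prefetch usefulness and prefetch degree.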
Advisors
Maeng, Seung-Ryoul (맹승렬)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
1998
Identifier
134781/325007 / 000935363
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST) : Department of Computer Science, 1998.2, [ viii, 78 p. ]

Keywords

Prefetching degree; Sequential prefetching; Shared-memory multiprocessors; Sequential streams

URI
http://hdl.handle.net/10203/33103
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=134781&flag=dissertation
Appears in Collection
CS-Theses_Ph.D. (Doctoral theses)
Files in This Item
There are no files associated with this item.
