An effective cache scheduling scheme for improving the performance in multi-threaded processors

Bibliographic Details
Published in: Journal of Systems Architecture, Vol. 59, no. 4-5, pp. 271-278
Main Authors: Lo, Shi-Wu; Lam, Kam-Yiu; Huang, Wen-Yan; Qiu, Sheng-Feng
Format: Journal Article
Language: English
Published: Amsterdam, Elsevier B.V., 01.04.2013

Summary: Although a multi-threaded processor may execute more than one process simultaneously to maximize the overall throughput of the system, the executing processes may compete with each other for the processor's shared caches. This can seriously degrade the average performance of the processes, as the cache hit probability of each process may be lowered. In this paper, we propose a new algorithm, called the sharable cache partitioning algorithm (ShaParti), for scheduling the processor caches amongst co-running processes. In ShaParti, each executing process has its own cache partition, and a priority scheme allows processes to share the cache partitions belonging to other executing processes. The performance goals of ShaParti are to improve the cache hit rates of the processes while ensuring that the cache miss rates of other concurrent processes are not increased compared with the case in which each process has its own cache. Extensive experiments have been performed to illustrate the effectiveness of ShaParti in improving the performance of shared processor cache accesses.
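The partitioning idea in the abstract can be illustrated with a minimal sketch: each process owns a private LRU partition, and on a miss it may borrow only *idle* capacity in other processes' partitions, so co-runners' resident lines are never evicted on its behalf. All names and the exact borrowing policy below are illustrative assumptions, not the authors' actual ShaParti algorithm.

```python
from collections import OrderedDict

class SharableCache:
    """Hypothetical sketch of sharable cache partitioning: one private
    LRU partition per process, with borrowing of unused slots only."""

    def __init__(self, owners, ways_per_owner):
        self.parts = {p: OrderedDict() for p in owners}  # per-process partitions
        self.ways = ways_per_owner                       # capacity of each partition

    def access(self, proc, addr):
        """Return True on a hit; on a miss, insert addr without evicting
        any line that belongs to another process's partition."""
        for part in self.parts.values():
            if addr in part:
                part.move_to_end(addr)       # refresh LRU position
                return True
        own = self.parts[proc]
        if len(own) < self.ways:             # prefer the process's own partition
            own[addr] = proc
            return False
        for owner, part in self.parts.items():
            if owner != proc and len(part) < self.ways:
                part[addr] = proc            # borrow idle capacity only
                return False
        own.popitem(last=False)              # all partitions full: evict own LRU line
        own[addr] = proc
        return False
```

For example, with two processes `A` and `B` and two ways each, once `A`'s own partition is full its next fill lands in `B`'s unused slots, so `A` gains capacity while `B`'s resident lines stay untouched, matching the stated goal that co-runners' miss rates are not increased.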
ISSN: 1383-7621, 1873-6165
DOI: 10.1016/j.sysarc.2012.11.005