Using GPU to Accelerate Cache Simulation

Bibliographic Details
Published in: 2009 IEEE International Symposium on Parallel and Distributed Processing with Applications, pp. 565 - 570
Main Authors: Wan Han, Gao Xiaopeng, Wang Zhiqiang, Li Yi
Format: Conference Proceeding
Language: English
Published: IEEE, 01.08.2009

Summary: Caches play a major role in the performance of high-speed computer systems. Trace-driven simulation is the most widely used method for evaluating cache architectures. However, cache designs are moving toward more complicated architectures, and trace sizes keep growing, so traditional simulation methods are no longer practical because of their long simulation cycles. Several techniques have been proposed to reduce the time of sequential trace-driven simulation. This paper considers the use of a general-purpose GPU to accelerate cache simulation, exploiting set partitioning as the main source of parallelism. We develop more efficient parallel simulation techniques by introducing more domain knowledge into the implementation on the Compute Unified Device Architecture (CUDA). Our experimental results show that the new algorithm achieves a 2.76x performance improvement over the traditional CPU-based sequential algorithm.
ISBN: 0769537472, 9780769537474
ISSN: 2158-9178
DOI: 10.1109/ISPA.2009.51
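
The abstract's central idea, using set partitioning so that each GPU thread can simulate one cache set independently, can be illustrated with a minimal CUDA sketch. This is not the paper's implementation: the cache geometry (256 sets, 4-way, 64-byte lines, LRU replacement), the kernel name simulate_sets, and the synthetic trace are all assumptions made for the example.

```cuda
// Illustrative sketch: set-partitioned trace-driven cache simulation in CUDA.
// Each thread owns one cache set and counts misses for references mapping to it.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

#define NUM_SETS   256   // assumed cache geometry (not from the paper)
#define ASSOC      4
#define BLOCK_BITS 6     // 64-byte cache lines

__global__ void simulate_sets(const uint64_t *trace, int n_refs,
                              unsigned long long *misses)
{
    int set = blockIdx.x * blockDim.x + threadIdx.x;
    if (set >= NUM_SETS) return;

    uint64_t tags[ASSOC];           // per-set tag store, tags[0] is MRU
    int valid = 0;
    unsigned long long miss = 0;

    for (int i = 0; i < n_refs; ++i) {
        uint64_t line = trace[i] >> BLOCK_BITS;
        if ((int)(line % NUM_SETS) != set) continue;  // set partitioning: skip other sets
        uint64_t tag = line / NUM_SETS;

        int hit = -1;
        for (int w = 0; w < valid; ++w)
            if (tags[w] == tag) { hit = w; break; }

        if (hit < 0) {                                // miss: allocate (evict LRU if full)
            ++miss;
            if (valid < ASSOC) ++valid;
            hit = valid - 1;
        }
        for (int w = hit; w > 0; --w)                 // promote the line to MRU
            tags[w] = tags[w - 1];
        tags[0] = tag;
    }
    misses[set] = miss;
}

int main() {
    const int N = 1 << 20;
    uint64_t *trace;
    unsigned long long *misses;
    cudaMallocManaged(&trace, N * sizeof(uint64_t));
    cudaMallocManaged(&misses, NUM_SETS * sizeof(unsigned long long));

    // Synthetic linear-scan trace; a real run would load an address trace from disk.
    for (int i = 0; i < N; ++i)
        trace[i] = (uint64_t)((i * 64) % (1 << 22));

    simulate_sets<<<(NUM_SETS + 127) / 128, 128>>>(trace, N, misses);
    cudaDeviceSynchronize();

    unsigned long long total = 0;
    for (int s = 0; s < NUM_SETS; ++s) total += misses[s];
    printf("misses: %llu out of %d references\n", total, N);

    cudaFree(trace);
    cudaFree(misses);
    return 0;
}
```

In this sketch every thread scans the whole trace and discards references outside its set, which keeps the per-set simulation sequential and exact; a more refined scheme would pre-split the trace by set on the host to avoid redundant reads.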