Using simple page placement policies to reduce the cost of cache fills in coherent shared-memory systems
Published in: Proceedings of 9th International Parallel Processing Symposium, pp. 480-485
Main Authors:
Format: Conference Proceeding
Language: English
Published: IEEE Comput. Soc. Press, 1995
Summary: The cost of a cache miss depends heavily on the location of the main memory that backs the missing line. For certain applications, this cost is a major factor in overall performance. We report on the utility of OS-based page placement as a mechanism to increase the frequency with which cache fills access local memory in distributed shared memory multiprocessors. Even with the very simple policy of first-use placement, we find significant improvements over round-robin placement for many applications on both hardware- and software-coherent systems. For most of our applications, first-use placement allows 35 to 75 percent of cache fills to be performed locally, resulting in performance improvements of up to 40 percent with respect to round-robin placement. We were surprised to find no performance advantage in more sophisticated policies, including page migration and page replication. In fact, in many cases the performance of our applications suffered under these policies.
ISBN: 9780818670749; 0818670746
DOI: 10.1109/IPPS.1995.395974
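The first-use policy described in the summary is simple enough to sketch: a page is placed in the memory of the first node that touches it, whereas round-robin placement assigns pages to nodes in rotation regardless of which node uses them. The following minimal C sketch is illustrative only and not from the paper; the node count, page count, and first-touch pattern are assumptions chosen to show how the two policies differ in the fraction of cache fills that stay local.

```c
/*
 * Illustrative sketch (not the paper's code): counts how many pages end up
 * local to their first-touching node under round-robin vs. first-use placement.
 */
#include <stdio.h>

#define NUM_NODES  4
#define NUM_PAGES  16

/* Round-robin: page p lives on node p mod NUM_NODES, regardless of who uses it. */
static int place_round_robin(int page) {
    return page % NUM_NODES;
}

/* First-use: the page is placed on whichever node first faults on it. */
static int place_first_use(int first_touch_node) {
    return first_touch_node;
}

int main(void) {
    /* Assumed trace: each node first touches its own quarter of the pages,
     * as when every node initializes its own portion of the data. */
    int first_touch[NUM_PAGES];
    for (int p = 0; p < NUM_PAGES; p++)
        first_touch[p] = p / (NUM_PAGES / NUM_NODES);

    int local_rr = 0, local_fu = 0;
    for (int p = 0; p < NUM_PAGES; p++) {
        /* A cache fill is local when the faulting node owns the page's frame. */
        if (place_round_robin(p) == first_touch[p]) local_rr++;
        if (place_first_use(first_touch[p]) == first_touch[p]) local_fu++;
    }

    printf("local fills, round-robin: %d/%d\n", local_rr, NUM_PAGES);
    printf("local fills, first-use:   %d/%d\n", local_fu, NUM_PAGES);
    return 0;
}
```

On such a trace, first-use placement makes every fill local while round-robin placement is local only by coincidence, which is the intuition behind the higher local-fill rates the summary reports for first-use placement.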