DYNAMIC SHARED CACHE PARTITION FOR WORKLOAD WITH LARGE CODE FOOTPRINT


Bibliographic Details
Main Authors: Subramoney, Sreenivas; Kallurkar, Prathmesh; Nori, Anant Vithal
Format: Patent
Language: English
Published: 23.06.2022

Summary: An embodiment of an integrated circuit may comprise a core; a first level core cache memory coupled to the core; a shared core cache memory coupled to the core; a first cache controller coupled to the core and communicatively coupled to the first level core cache memory; a second cache controller coupled to the core and communicatively coupled to the shared core cache memory; and circuitry coupled to the core and communicatively coupled to the first and second cache controllers to determine whether a workload has a large code footprint and, if so, partition the N ways of the shared core cache memory into two chunks: a first chunk of M ways reserved for code cache lines from the workload, and a second chunk of N minus M ways reserved for data cache lines from the workload, where N and M are positive integers and N minus M is greater than zero. Other embodiments are disclosed and claimed.
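The partitioning decision described in the summary can be illustrated with a minimal sketch. All names, thresholds, and the footprint heuristic below are illustrative assumptions and are not taken from the patent; the actual circuitry operates on hardware cache ways, not Python values.

```python
# Hypothetical sketch of the way-partitioning decision from the abstract.
# Assumptions (not from the patent): a 16-way shared cache, 4 ways reserved
# for code when the footprint is "large", and a simple line-count threshold
# as the large-code-footprint test.

def partition_ways(code_footprint_lines: int,
                   n_ways: int = 16,
                   code_ways: int = 4,
                   large_footprint_threshold: int = 1 << 15):
    """Return (M, N - M): ways reserved for code vs. data cache lines.

    If the workload's code footprint is not deemed large, no partition is
    made and all N ways remain shared (M = 0).
    """
    if code_footprint_lines < large_footprint_threshold:
        return 0, n_ways  # no partition: cache stays unified
    m = code_ways
    # The abstract requires N and M positive with N - M > 0.
    assert 0 < m < n_ways
    return m, n_ways - m
```

For example, a workload touching 2^16 code lines would exceed the assumed threshold and receive 4 code ways and 12 data ways, while a small workload would leave all 16 ways unpartitioned.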
Bibliography: Application Number: US202017130698