Coherent caching of data for high bandwidth scaling
Format | Patent |
Language | Chinese; English |
Published | 24.03.2020 |
Summary: | A method, computer readable medium, and system are disclosed for a distributed cache that provides multiple processing units with fast access to a portion of data stored in local memory. The distributed cache is composed of multiple smaller caches, and each of the smaller caches is associated with at least one processing unit. In addition to a shared crossbar network through which data is transferred between processing units and the smaller caches, a dedicated connection is provided between two or more smaller caches that form a partner cache set. Transferring data through the dedicated connections reduces congestion on the shared crossbar network. Reducing congestion increases the available bandwidth and allows the number of processing units to scale up. A coherence protocol is defined for accessing data stored in the distributed cache and for transferring data between the smaller caches of a partner cache set. |
Bibliography: | Application Number: CN201910463512 |
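The read path described in the summary (check the local small cache first, then the partner cache over its dedicated link, and only then fetch over the shared crossbar) can be sketched as a toy model. All class, method, and field names below are illustrative assumptions; the patent does not disclose an implementation, and the static address interleaving used here is a simplification.

```python
# Toy model of a distributed cache with partner cache sets.
# Names and structure are illustrative, not taken from the patent.

class SmallCache:
    def __init__(self, cache_id):
        self.cache_id = cache_id
        self.lines = {}        # address -> cached data
        self.partners = []     # caches reachable over a dedicated link

class DistributedCache:
    def __init__(self, num_caches, partner_set_size=2):
        self.caches = [SmallCache(i) for i in range(num_caches)]
        # Group caches into partner sets joined by dedicated connections.
        for i in range(0, num_caches, partner_set_size):
            group = self.caches[i:i + partner_set_size]
            for c in group:
                c.partners = [p for p in group if p is not c]
        self.crossbar_transfers = 0   # traffic on the shared crossbar
        self.dedicated_transfers = 0  # traffic on dedicated links

    def home_cache(self, address):
        # Assumption: addresses are statically interleaved across caches.
        return self.caches[address % len(self.caches)]

    def read(self, requester_id, address):
        local = self.caches[requester_id]
        # 1. Local hit: no network traffic at all.
        if address in local.lines:
            return local.lines[address]
        # 2. Partner hit: served over the dedicated link,
        #    keeping the request off the shared crossbar.
        for p in local.partners:
            if address in p.lines:
                self.dedicated_transfers += 1
                local.lines[address] = p.lines[address]
                return local.lines[address]
        # 3. Miss: fetch from the home cache over the shared crossbar.
        self.crossbar_transfers += 1
        home = self.home_cache(address)
        data = home.lines.setdefault(address, f"mem[{address}]")
        local.lines[address] = data
        return data
```

With four caches and partner pairs (0, 1) and (2, 3), a first read by cache 2 goes over the crossbar, while a subsequent read of the same address by its partner, cache 3, is served over the dedicated link, which is the congestion-reduction effect the abstract claims.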