CIC-PIM: Trading spare computing power for memory space in graph processing
Published in | Journal of Parallel and Distributed Computing, Vol. 147, pp. 152–165 |
---|---|
Main Authors | , , , , , , , |
Format | Journal Article |
Language | English |
Published | Elsevier Inc, 01.01.2021 |
Subjects | |
Summary | Shared-memory graph processing is usually more efficient than cluster-based processing in terms of cost effectiveness, ease of programming, and runtime. However, the limited memory capacity of a single machine and the huge sizes of graphs restrain its applicability. Hence, it is imperative to reduce the memory footprint. We observe that index compression holds promise and propose CIC-PIM, a lightweight encoding with chunked index compression, to reduce the memory footprint and the runtime of graph algorithms. CIC-PIM aims for significant space savings, true random-access support, and high cache efficiency by exploiting the ubiquitous power-law and sparseness features of large-scale graphs. The basic idea is to divide index structures into chunks of appropriate size and compress the chunks with our lightweight fixed-length, byte-aligned encoding. After CIC-PIM compression, graphs twice as large can be processed with all data fitting in memory, resulting in speedups or fast in-memory processing unattainable previously. •Spare computing power can be traded for memory space in graph processing. •The memory footprint of graph processing is significantly reduced by index compression. •Parallel graph processing is faster when the graph is compressed with a lightweight encoding. |
ISSN | 0743-7315; 1096-0848 |
DOI | 10.1016/j.jpdc.2020.09.008 |
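
The abstract only sketches the mechanism: partition the graph's index structures (for example, CSR offsets) into chunks and compress each chunk with a fixed-length, byte-aligned encoding so that random access remains O(1). The C++ sketch below illustrates one way such a chunked encoding could be organized; the chunk size, layout, and all names are assumptions made for illustration and do not reproduce the CIC-PIM implementation described in the paper.

```cpp
// Illustrative sketch only: chunked, fixed-length, byte-aligned compression of a
// CSR offset (index) array. Chunk size, layout, and names are assumptions for
// this example, not the CIC-PIM paper's actual design.
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

constexpr std::size_t kChunk = 64;  // entries per chunk (assumed)

struct CompressedIndex {
    std::vector<uint64_t>    base;   // per-chunk minimum value
    std::vector<uint8_t>     width;  // per-chunk byte width of each stored delta (1..8)
    std::vector<std::size_t> start;  // per-chunk byte offset into `bytes`
    std::vector<uint8_t>     bytes;  // packed fixed-length deltas
    std::size_t n = 0;

    static CompressedIndex build(const std::vector<uint64_t>& idx) {
        CompressedIndex c;
        c.n = idx.size();
        for (std::size_t cb = 0; cb < idx.size(); cb += kChunk) {
            std::size_t ce = std::min(cb + kChunk, idx.size());
            uint64_t lo = idx[cb], hi = idx[cb];
            for (std::size_t i = cb; i < ce; ++i) {
                lo = std::min(lo, idx[i]);
                hi = std::max(hi, idx[i]);
            }
            uint64_t range = hi - lo;
            uint8_t w = 1;
            while (w < 8 && (range >> (8 * w))) ++w;  // smallest byte width holding the deltas
            c.base.push_back(lo);
            c.width.push_back(w);
            c.start.push_back(c.bytes.size());
            for (std::size_t i = cb; i < ce; ++i) {
                uint64_t d = idx[i] - lo;
                for (uint8_t b = 0; b < w; ++b) c.bytes.push_back(uint8_t(d >> (8 * b)));
            }
        }
        return c;
    }

    // O(1) random access: locate the chunk, then read one fixed-length delta.
    uint64_t operator[](std::size_t i) const {
        std::size_t ch = i / kChunk, off = i % kChunk;
        uint8_t w = width[ch];
        const uint8_t* p = bytes.data() + start[ch] + off * w;
        uint64_t d = 0;
        std::memcpy(&d, p, w);  // little-endian host assumed
        return base[ch] + d;
    }
};

int main() {
    std::vector<uint64_t> offsets;  // e.g., CSR row offsets of a graph
    for (uint64_t v = 0, i = 0; i < 1000; ++i) { offsets.push_back(v); v += i % 7; }
    CompressedIndex ci = CompressedIndex::build(offsets);
    std::cout << ci[0] << ' ' << ci[999] << '\n';  // matches offsets[0] and offsets[999]
}
```

Because every entry within a chunk occupies the same number of bytes, looking up entry i costs only a division, a multiplication, and a short copy, which is what keeps this style of compression compatible with the random accesses issued by typical graph algorithms.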