ExPregel: a new computational model for large-scale graph processing

Bibliographic Details
Published in: Concurrency and Computation, Vol. 27, No. 17, pp. 4954-4969
Main Authors: Sagharichian, M., Naderi, H., Haghjoo, M.
Format: Journal Article
Language: English
Published: Blackwell Publishing Ltd, 10.12.2015
Summary: Large-scale graph processing has become increasingly important. Pregel, inspired by the Bulk Synchronous Parallel model, is one of the most widely used systems for processing large-scale graph problems. In Pregel, each vertex executes a function and waits for a superstep to communicate its data to other vertices. The superstep is a time-consuming operation that Pregel uses to synchronize distributed computations across a cluster of computers, and it can become a bottleneck as the number of communications grows in a graph with millions of vertices. The superstep acts as a barrier in Pregel, amplifying the side effects of the skew problem in distributed computing environments. ExPregel is a Pregel-like model designed to reduce the number of messages exchanged between two vertices residing on different computational nodes. We have proven that ExPregel reduces both the number of exchanged messages and the number of supersteps for all graph topologies. Enhanced parallelism in the new computational model is another important feature that multiplies the speed of graph analysis programs. More interestingly, ExPregel uses the same programming model as Pregel. Our experiments on large-scale real-world graphs show that ExPregel reduces network traffic, as well as the number of supersteps, by 45% to 96%. Runtime speedup in the proposed model varies from 1.2x to 30x. Copyright © 2015 John Wiley & Sons, Ltd.
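To make the vertex-centric superstep model concrete, the following is a minimal sketch of a Pregel-style Bulk Synchronous Parallel loop. This is an illustration of the general model the abstract describes, not the authors' ExPregel implementation; the vertex program here (maximum-id propagation, a standard connected-components-style example) and all names are assumptions. Vertices compute in parallel within a superstep, but messages are delivered only at the superstep boundary, which is the synchronization barrier the abstract identifies as a bottleneck.

```python
def run_supersteps(edges, num_vertices):
    """Pregel-style BSP loop: each vertex propagates the largest id it has seen.

    Messages sent in superstep S are delivered in superstep S+1; a vertex that
    cannot improve its value votes to halt, and the run ends when all vertices
    are halted with no pending messages.
    """
    value = {v: v for v in range(num_vertices)}
    # Adjacency lists built from an undirected edge list.
    neighbors = {v: [] for v in range(num_vertices)}
    for u, w in edges:
        neighbors[u].append(w)
        neighbors[w].append(u)

    inbox = {v: [] for v in range(num_vertices)}
    active = set(range(num_vertices))
    supersteps = 0
    while active:
        outbox = {v: [] for v in range(num_vertices)}
        for v in range(num_vertices):
            msgs = inbox[v]
            if v not in active and not msgs:
                continue  # a halted vertex with no messages stays idle
            new_value = max([value[v]] + msgs)
            if supersteps == 0 or new_value > value[v]:
                value[v] = new_value
                for w in neighbors[v]:
                    outbox[w].append(new_value)  # delivered next superstep
                active.add(v)  # a message reactivates a halted vertex
            else:
                active.discard(v)  # no progress: vote to halt
        inbox = outbox  # the barrier: messages cross only between supersteps
        supersteps += 1
    return value, supersteps
```

The number of supersteps grows with the distance that information must travel through the graph, which is why reducing cross-node messages and supersteps, as ExPregel aims to do, directly shortens the synchronized phases of the computation.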
Bibliography: ArticleID: CPE3482
istex:1E3BD23A04EC7AF599264F3A2D076B42F6D92788
ark:/67375/WNG-QZD6BJB3-0
ISSN: 1532-0626, 1532-0634
DOI: 10.1002/cpe.3482