BSP cost and scalability analysis for MapReduce operations

Bibliographic Details
Published in: Concurrency and Computation, Vol. 28, no. 8, pp. 2503–2527
Main Authors: Senger, Hermes; Gil-Costa, Veronica; Arantes, Luciana; Marcondes, Cesar A. C.; Marín, Mauricio; Sato, Liria M.; da Silva, Fabrício A.B.
Format: Journal Article
Language: English
Published: Blackwell Publishing Ltd (Wiley), 10.06.2016
Summary: Data abundance creates the need for powerful, easy‐to‐use tools that support processing large amounts of data. MapReduce has been increasingly adopted for over a decade by many companies, and more recently it has attracted the attention of a growing number of researchers in several areas. One main advantage is that the complex details of parallel processing, such as network programming, task scheduling, data placement, and fault tolerance, are hidden behind a conceptually simple framework. MapReduce is supported by mature software technologies, such as Hadoop, for deployment in data centers. As MapReduce becomes popular for high‐performance applications, many questions arise concerning its performance and efficiency. In this paper, we formally demonstrate lower bounds on the isoefficiency function for MapReduce applications when these applications can be modeled as BSP jobs. We also demonstrate how communication and synchronization costs can dominate MapReduce computations and discuss the conditions under which such scalability limits hold. To our knowledge, this is the first study that demonstrates scalability bounds for MapReduce applications. We also discuss how some MapReduce implementations, such as Hadoop, can mitigate these costs to approach linear or near-linear speedups. Copyright © 2015 John Wiley & Sons, Ltd.
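As background to the BSP modeling the summary refers to, the cost of a BSP superstep is conventionally written as w + g·h + l, where w is the maximum local computation, h is the maximum number of words sent or received (the h-relation), g is the per-word communication cost, and l is the barrier synchronization cost. A minimal sketch of how a one-round MapReduce job might be priced under this model (the two-superstep decomposition and the even-distribution assumption are illustrative, not taken from the paper):

```python
# Hedged illustration of the standard BSP cost model; the MapReduce
# decomposition below (map+shuffle as one superstep, reduce as another)
# and all parameter values are hypothetical assumptions for exposition.

def superstep_cost(w, h, g, l):
    """Cost of one BSP superstep: local work w, h-relation h,
    machine parameters g (per-word comm cost) and l (barrier cost)."""
    return w + g * h + l

def mapreduce_bsp_cost(n, p, g, l):
    """Cost of a one-round MapReduce modeled as two BSP supersteps,
    assuming n input records spread evenly over p processors and a
    full shuffle of the mapped data (h = n/p in the first superstep)."""
    per_proc = n / p
    map_step = superstep_cost(w=per_proc, h=per_proc, g=g, l=l)
    reduce_step = superstep_cost(w=per_proc, h=0, g=g, l=l)
    return map_step + reduce_step

# Example: 1000 records on 10 processors with g=2, l=5
# map step: 100 + 2*100 + 5 = 305; reduce step: 100 + 0 + 5 = 105
print(mapreduce_bsp_cost(1000, 10, g=2, l=5))  # 410.0
```

Note how for fixed n the communication term g·(n/p) shrinks with p while the synchronization term l does not, which is the intuition behind the paper's claim that synchronization costs can dominate at scale.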
Bibliography:
istex: 54D90040619BA89A56B4CC6C07AFBA54E9CCF613
ark: ark:/67375/WNG-F7H4CT4M-F
ArticleID: CPE3628
ISSN: 1532-0626, 1532-0634
DOI: 10.1002/cpe.3628