The Hadoop Distributed File System

Bibliographic Details
Published in: 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST), pp. 1 - 10
Main Authors: Shvachko, K.; Kuang, Hairong; Radia, S.; Chansler, R.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.05.2010

Summary: The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications. In a large cluster, thousands of servers both host directly attached storage and execute user application tasks. By distributing storage and computation across many servers, the resource can grow with demand while remaining economical at every size. We describe the architecture of HDFS and report on experience using HDFS to manage 25 petabytes of enterprise data at Yahoo!.
ISBN: 1424471524, 9781424471522
ISSN: 2160-195X, 2160-1968
DOI: 10.1109/MSST.2010.5496972
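
The summary describes HDFS from the client's point of view: files are written once and then streamed back at high bandwidth, with storage spread across many servers. The following is a minimal sketch of that interaction using the public org.apache.hadoop.fs.FileSystem API; the NameNode address and file path are hypothetical placeholders, not values taken from the paper.

    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode address; in practice this comes from core-site.xml.
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

            try (FileSystem fs = FileSystem.get(conf)) {
                Path path = new Path("/user/demo/hello.txt");

                // Write: the client streams the data to DataNodes; the NameNode
                // records only the file's block metadata.
                try (FSDataOutputStream out = fs.create(path, true)) {
                    out.write("Hello, HDFS".getBytes(StandardCharsets.UTF_8));
                }

                // Read: the client looks up block locations via the NameNode,
                // then streams the bytes directly from a DataNode replica.
                try (FSDataInputStream in = fs.open(path)) {
                    IOUtils.copyBytes(in, System.out, 4096, false);
                }
            }
        }
    }

Note the separation this sketch relies on: the client contacts the central metadata server only to locate blocks, while the bulk data transfer goes directly to the storage servers, which is what lets aggregate bandwidth grow with the size of the cluster.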