Managing ATLAS data on a petabyte-scale with DQ2

Bibliographic Details
Published in: Journal of Physics: Conference Series, Vol. 119, no. 6, p. 062017
Main Authors: Branco, M; Cameron, D; Gaidioz, B; Garonne, V; Koblitz, B; Lassnig, M; Rocha, R; Salgado, P; Wenaus, T
Format: Journal Article
Language: English
Published: Bristol: IOP Publishing, 01.07.2008

Summary: The ATLAS detector at CERN's Large Hadron Collider presents data handling requirements on an unprecedented scale. From 2008 on, the ATLAS distributed data management system, Don Quijote2 (DQ2), must manage tens of petabytes of experiment data per year, distributed globally via the LCG, OSG and NDGF computing grids, now commonly known as the WLCG. Since its inception in 2005, DQ2 has continuously managed all experiment data for the ATLAS collaboration, which now comprises over 3000 scientists participating from more than 150 universities and laboratories in 34 countries. Fulfilling its primary requirement of providing a highly distributed, fault-tolerant and scalable architecture, DQ2 was successfully upgraded from managing data on a terabyte scale to managing data on a petabyte scale. We present improvements and enhancements to DQ2 based on the increasing demands for ATLAS data management. We describe performance issues, architectural changes and implementation decisions, the current state of deployment in test and production, as well as anticipated future improvements. Test results presented here show that DQ2 is capable of handling data up to and beyond the requirements of full-scale data-taking.
ISSN: 1742-6588; 1742-6596
DOI: 10.1088/1742-6596/119/6/062017