Parallel IO Libraries for Managing HEP Experimental Data

Bibliographic Details
Published in EPJ Web of Conferences Vol. 295; p. 10008
Main Authors Bashyal, Amit, Jones, Christopher, Knoepfel, Kyle, Van Gemmeren, Peter, Sehrish, Saba, Byna, Suren
Format Journal Article; Conference Proceeding
Language English
Published Les Ulis EDP Sciences 2024

More Information
Summary: The computing and storage requirements of the energy and intensity frontiers will grow significantly during Runs 4 & 5 and the HL-LHC era. Similarly, in the intensity frontier, with larger trigger readouts during supernova explosions, the Deep Underground Neutrino Experiment (DUNE) will have unique computing challenges that could be addressed by the use of parallel and accelerated data-processing capabilities. Most of the requirements of the energy and intensity frontier experiments rely on increasing the role of high performance computing (HPC) in the HEP community. In this presentation, we will describe our ongoing efforts focused on using HPC resources for the next generation of HEP experiments. The HEPCCE (High Energy Physics Center for Computational Excellence) IOS (Input/Output and Storage) group has been developing approaches to map HEP data to HDF5, an I/O library optimized for HPC platforms, to store intermediate HEP data. The complex HEP data products are serialized using ROOT to allow for experiment-independent, general approaches to mapping HEP data to the HDF5 format. These mapping approaches can be optimized for high-performance parallel I/O. Similarly, simpler data can be mapped directly into HDF5, which also makes them suitable for offloading directly to GPUs. We will present our work on both complex and simple data models.
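The summary describes mapping serialized HEP data products into HDF5 datasets. A minimal sketch of that general idea is below, with `pickle` standing in for ROOT serialization and `h5py` for the HDF5 layer; the function names and event structure are illustrative assumptions, not the authors' actual code, which uses ROOT-serialized buffers and can employ parallel HDF5 on HPC platforms.

```python
# Hedged sketch: store serialized event "blobs" as variable-length byte
# arrays in an HDF5 dataset. pickle stands in for ROOT serialization;
# the real HEPCCE workflow serializes complex data products with ROOT.
import pickle
import h5py
import numpy as np

def write_events(path, events):
    """Serialize each event object and store it as one variable-length
    uint8 blob per entry in the 'events' dataset."""
    blobs = [np.frombuffer(pickle.dumps(ev), dtype=np.uint8) for ev in events]
    with h5py.File(path, "w") as f:
        dt = h5py.vlen_dtype(np.uint8)          # variable-length byte buffers
        dset = f.create_dataset("events", (len(blobs),), dtype=dt)
        for i, blob in enumerate(blobs):
            dset[i] = blob

def read_events(path):
    """Read the blobs back and deserialize each one into an event object."""
    with h5py.File(path, "r") as f:
        return [pickle.loads(bytes(b)) for b in f["events"][:]]
```

Storing opaque serialized buffers keeps the mapping experiment-independent, as the abstract notes; simpler, flat data could instead be written as native HDF5 datasets (e.g. numeric arrays), which is the form suitable for direct GPU offload.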
ISSN: 2100-014X
2101-6275
DOI:10.1051/epjconf/202429510008