Kubernetes for the Deep Underground Neutrino Experiment Data Acquisition

Bibliographic Details
Published in: EPJ Web of Conferences, Vol. 295, Article 02017
Main Authors: Lasorak, Pierre; Alves, Tiago; Crone, Gordon; Gamberini, Enrico; Hancock, Jonathan; King, Bonnie; Riehecky, Patrick; Tapper, Alexander; Thea, Alessandro
Format: Journal Article; Conference Proceeding
Language: English
Published: Les Ulis: EDP Sciences, 2024

Summary: The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment based in the USA, which is expected to start taking data in 2029. DUNE aims to precisely measure neutrino oscillation parameters by detecting neutrinos from the LBNF beamline (Fermilab) at the Far Detector, 1,300 kilometres away, in South Dakota at the Sanford Underground Research Facility. The Far Detector will consist of four cryogenic Liquid Argon Time Projection Chamber detectors of 17 kT, each producing more than 1 TB/sec of data. The main requirements for the data acquisition system are the ability to run continuously for extended periods of time, with a 99% up-time requirement, and the functionality to record both beam neutrinos and low energy neutrinos from the explosion of a neighbouring supernova, should one occur during the lifetime of the experiment. The key challenges are the high data rates that the detectors generate and the deep underground environment, which places constraints on power and space. To overcome these challenges, DUNE plans to use a highly optimised C++ software suite and a server farm of about 110 nodes continuously running about two hundred multicore processes located close to the detector, 1.5 kilometres underground. Thirty nodes will be at the surface and will run around two hundred processes simultaneously. DUNE is studying the use of the Kubernetes framework to manage containerised workloads and take advantage of its resource definitions and high up-time services to run the DAQ system. Progress in deploying these systems at the CERN neutrino platform on the prototype DUNE experiments is reported.
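
To make the role of Kubernetes concrete, the sketch below shows how a single containerised DAQ-style process could be declared as a Deployment with explicit CPU and memory resource definitions, using the official Kubernetes Python client. This is only an illustrative sketch: the image name, command, namespace, replica count and resource figures are hypothetical placeholders and are not taken from the DUNE DAQ configuration described in the paper.

# Minimal sketch (not the DUNE DAQ configuration): declare one containerised
# DAQ-style process as a Kubernetes Deployment using the official Python client.
# Image name, command, namespace and resource figures are illustrative only.
from kubernetes import client, config


def make_daq_deployment() -> client.V1Deployment:
    # Container with explicit resource requests/limits, the kind of
    # "resource definition" the abstract refers to.
    container = client.V1Container(
        name="daq-readout",                                  # hypothetical process name
        image="example.org/dune/daq-readout:1.0",            # hypothetical image
        command=["daq_application", "--name", "readout0"],   # hypothetical command
        resources=client.V1ResourceRequirements(
            requests={"cpu": "8", "memory": "16Gi"},
            limits={"cpu": "16", "memory": "32Gi"},
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "daq-readout"}),
        spec=client.V1PodSpec(containers=[container], restart_policy="Always"),
    )
    # A Deployment keeps the requested number of replicas running and restarts
    # failed pods, one mechanism behind the high up-time services mentioned above.
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="daq-readout"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "daq-readout"}),
            template=template,
        ),
    )


if __name__ == "__main__":
    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
    client.AppsV1Api().create_namespaced_deployment(
        namespace="dune-daq",  # hypothetical namespace
        body=make_daq_deployment(),
    )

A real readout process would additionally need access to detector readout hardware and high-throughput networking (for example node pinning or host networking), details that are outside this sketch and part of what the paper evaluates at the CERN neutrino platform.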
ISSN: 2100-014X; 2101-6275
DOI: 10.1051/epjconf/202429502017