Experience with dynamic resource provisioning of the CMS online cluster using a cloud overlay

Bibliographic Details
Published in: EPJ Web of Conferences, Vol. 214; p. 7017
Main Authors: Andre, Jean-Marc; Behrens, Ulf; Branson, James; Brummer, Philipp; Chaze, Olivier; Cittolin, Sergio; da Silva Gomes, Diego; Darlea, Georgiana-Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Gladki, Maciej; Glege, Frank; Gomez-Ceballos, Guillelmo; Hegeman, Jeroen; Holzner, Andre; Lettrich, Michael; Mecionis, Audrius; Meijers, Frans; Meschi, Emilio; Mommsen, Remigius K.; Morovic, Srecko; O’Dell, Vivian; Orsini, Luciano; Papakrivopoulos, Ioannis; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Rabady, Dinyar; Racz, Attila; Rapsevicius, Valdas; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Stankevicius, Mantas; Vazquez Velez, Cristina; Wernet, Christian; Zejdl, Petr
Format: Journal Article; Conference Proceeding
Language: English
Published: Les Ulis: EDP Sciences, 01.01.2019
Summary: The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent a computing capacity similar to that of all the CMS Tier-1 Grid sites together. Moreover, the cluster is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and can therefore access the remote EOS-based storage with high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to make these resources available to the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT and must never interfere with its primary function as part of the DAQ system. It also abstracts away the different types of machines and their underlying segmented networks. During LHC technical stop periods, the HLT cloud is set to its static mode of operation, in which it acts like any other Grid facility. The online cloud was also extended to make dynamic use of resources during the periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically several hours, occurring once or more per day. To this end, the cloud dynamically follows the LHC beam states and hibernates virtual machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT decreases towards the end of a fill.
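The summary describes a control mechanism that tracks LHC beam states and hibernates or resumes cloud VMs on the HLT nodes between fills. The paper itself is not reproduced in this record, so the Python sketch below only illustrates that general idea under assumed names: the beam-mode list, the VM identifiers, and the hibernate/resume functions (stand-ins for the OpenStack compute actions the online cloud would actually issue) are hypothetical and not taken from the paper.

# Illustrative sketch (not from the paper) of beam-state-driven VM control.
# All names are assumptions made for this example only; the real mapping of
# LHC beam modes to cloud activity is defined by the CMS online cloud itself.

# Hypothetical subset of LHC machine modes during which the HLT must be
# fully available to the DAQ and the cloud therefore stays hibernated.
DATA_TAKING_MODES = {"RAMP", "FLAT TOP", "SQUEEZE", "ADJUST", "STABLE BEAMS"}


def hibernate_vm(name: str) -> None:
    """Hypothetical stand-in for an OpenStack 'suspend server' call."""
    print(f"  hibernating {name}")


def resume_vm(name: str) -> None:
    """Hypothetical stand-in for an OpenStack 'resume server' call."""
    print(f"  resuming {name}")


def apply_beam_mode(mode: str, vm_names: list[str], cloud_active: bool) -> bool:
    """Hibernate cloud VMs when the LHC enters a data-taking mode and resume
    them during inter-fill periods; returns the new cloud state."""
    if mode in DATA_TAKING_MODES and cloud_active:
        for name in vm_names:
            hibernate_vm(name)   # hand the HLT node back to the DAQ
        return False
    if mode not in DATA_TAKING_MODES and not cloud_active:
        for name in vm_names:
            resume_vm(name)      # reclaim idle HLT capacity for WLCG jobs
        return True
    return cloud_active          # no transition needed


if __name__ == "__main__":
    # Simulated sequence of beam modes over one fill cycle (for illustration).
    sequence = ["INJECTION", "RAMP", "STABLE BEAMS", "BEAM DUMP", "NO BEAM"]
    vms = ["hlt-vm-001", "hlt-vm-002"]
    active = False
    for mode in sequence:
        print(f"beam mode: {mode}")
        active = apply_beam_mode(mode, vms, active)

In the sketch, a single state flag per controller is enough because the transition happens collectively for the whole set of VMs; the dynamic ramp-up at the end of a fill mentioned in the summary would additionally take the DAQ load into account, which is not modelled here.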
Bibliography: FERMILAB-CONF-19-549-CMS; CMS-CR-2018-398
Sponsor: USDOE Office of Science (SC), High Energy Physics (HEP)
Contract: AC02-07CH11359
ISSN: 2100-014X, 2101-6275
DOI: 10.1051/epjconf/201921407017