Memory handling in the ATLAS submission system from job definition to sites limits

Bibliographic Details
Published in: Journal of Physics: Conference Series, Vol. 898, no. 5, pp. 52004-52011
Main Authors: Forti, A C; Walker, R; Maeno, T; Love, P; Rauschmayr, N; Filipcic, A; Di Girolamo, A
Format: Journal Article
Language: English
Published: Bristol: IOP Publishing, 01.10.2017
Summary: In the past few years, the increased luminosity of the LHC, changes in the Linux kernel, and a move to a 64-bit architecture have affected the memory usage of ATLAS jobs, and the ATLAS workload management system had to be adapted to become more flexible and to pass memory parameters to the batch systems, which in the past was not necessary. This paper describes the steps required to add the capability to better handle memory requirements, including a review of how each component's definition and parametrization of memory maps to the other components, and the changes that had to be applied to make the submission chain work. These changes range from the definition of tasks and the way task memory requirements are set using scout jobs, through the new memory tool developed for that purpose, to how these values are used by the submission component of the system and how the jobs are treated by the sites through the CEs, batch systems and, ultimately, the kernel.
ISSN: 1742-6588; 1742-6596
DOI: 10.1088/1742-6596/898/5/052004