Fast neural network inference on FPGAs for triggering on long-lived particles at colliders

Bibliographic Details
Published in: Machine Learning: Science and Technology, Vol. 4, No. 4, pp. 45040-45048
Main Authors: Coccaro, Andrea; Di Bello, Francesco Armando; Giagu, Stefano; Rambelli, Lucrezia; Stocchetti, Nicola
Format: Journal Article
Language: English
Published: Bristol: IOP Publishing, 01.12.2023

Abstract: Experimental particle physics demands a sophisticated trigger and data-acquisition system capable of efficiently retaining the collisions of interest for further investigation. Heterogeneous computing with FPGA cards may emerge as a key technology for the triggering strategy of the upcoming high-luminosity program of the Large Hadron Collider at CERN. In this context, we present two machine-learning algorithms for selecting events in which neutral long-lived particles decay within the detector volume, and we study their accuracy and inference time when accelerated on commercially available Xilinx FPGA accelerator cards. The inference time is also compared with that of a CPU- and GPU-based hardware setup. The proposed algorithms prove efficient for the considered benchmark physics scenario, and their accuracy does not degrade when accelerated on the FPGA cards. The results indicate that all tested architectures fit within the latency requirements of a second-level trigger farm and that exploiting accelerator technologies for real-time processing of particle-physics collisions is a promising research field that deserves further investigation, in particular with machine-learning models with a large number of trainable parameters.
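
The latency comparison described in the abstract can be illustrated with a short benchmarking sketch. The snippet below is a minimal example under stated assumptions, not the paper's actual models or FPGA toolchain: it uses a hypothetical small fully connected PyTorch classifier (ToyTriggerNet) as a stand-in and measures single-event inference time on CPU and, if available, GPU. The FPGA measurements in the paper rely on dedicated Xilinx acceleration tooling that is not reproduced here.

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in classifier; the paper's actual architectures are not reproduced here.
class ToyTriggerNet(nn.Module):
    def __init__(self, n_features: int = 64, n_hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, 1),  # binary score: long-lived-particle candidate or not
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))

def measure_latency(model: nn.Module, device: str, n_trials: int = 1000) -> float:
    """Return the mean single-event inference time in microseconds."""
    model = model.to(device).eval()
    x = torch.randn(1, 64, device=device)  # one event at a time, as in a trigger
    with torch.no_grad():
        for _ in range(100):  # warm-up iterations, excluded from timing
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()  # make sure queued kernels finish before timing
        start = time.perf_counter()
        for _ in range(n_trials):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return elapsed / n_trials * 1e6

model = ToyTriggerNet()
print(f"CPU latency: {measure_latency(model, 'cpu'):.1f} us/event")
if torch.cuda.is_available():
    print(f"GPU latency: {measure_latency(model, 'cuda'):.1f} us/event")
```

Warm-up iterations and explicit CUDA synchronization are included because, without them, one-off initialization costs and asynchronous kernel launches would distort per-event latency figures of the kind a trigger farm cares about.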
Bibliography: MLST-101367.R1
ISSN: 2632-2153
DOI: 10.1088/2632-2153/ad087a