Vision Calorimeter for Anti-neutron Reconstruction: A Baseline
Format: Journal Article
Language: English
Published: 20.08.2024
Summary: In high-energy physics, anti-neutrons ($\bar{n}$) are fundamental particles that frequently appear as final-state particles, and the reconstruction of their kinematic properties provides an important probe for understanding the governing principles. However, this task faces significant instrumental challenges: the electromagnetic calorimeter (EMC), a typical experimental sensor, recovers the information of incident $\bar{n}$ only insufficiently. In this study, we introduce Vision Calorimeter (ViC), a baseline method for anti-neutron reconstruction that leverages deep learning detectors to analyze the implicit relationships between EMC responses and incident $\bar{n}$ characteristics. Our motivation is that the energy distributions deposited by $\bar{n}$ samples in the EMC cell arrays embody rich contextual information. Converted to 2-D images, such contextual energy distributions can be used to predict the status of $\bar{n}$ (i.e., incident position and momentum) through a deep learning detector, along with pseudo bounding boxes and a specified training objective. Experimental results demonstrate that ViC substantially outperforms the conventional reconstruction approach, reducing the prediction error of incident position by 42.81% (from 17.31$^{\circ}$ to 9.90$^{\circ}$). More importantly, this study realizes, for the first time, the measurement of incident $\bar{n}$ momentum, underscoring the potential of deep learning detectors for particle reconstruction. Code is available at https://github.com/yuhongtian17/ViC.
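
The abstract describes rendering EMC cell-array energy deposits as 2-D images and attaching pseudo bounding boxes around the incident position so a standard detection network can be trained. The following is a minimal sketch of that idea, not the authors' implementation (which is available at the repository above); the grid size, box half-width, and normalization are illustrative assumptions.

```python
# Sketch: scatter per-cell EMC energy deposits into a 2-D "calorimeter image"
# and build a fixed-size pseudo bounding box around the true incident cell.
# GRID_H, GRID_W, and PSEUDO_BOX_HALF are assumed values, not taken from ViC.

import numpy as np

GRID_H, GRID_W = 64, 64      # assumed angular binning of the EMC cell array
PSEUDO_BOX_HALF = 4          # assumed half-width (in cells) of the pseudo box


def deposits_to_image(cell_idx, cell_energy):
    """Accumulate per-cell energy deposits into a 2-D image.

    cell_idx    : (N, 2) integer array of (row, col) cell indices
    cell_energy : (N,) array of deposited energies (e.g. in GeV)
    """
    img = np.zeros((GRID_H, GRID_W), dtype=np.float32)
    rows, cols = cell_idx[:, 0], cell_idx[:, 1]
    np.add.at(img, (rows, cols), cell_energy)   # accumulate repeated hits per cell
    if img.max() > 0:
        img /= img.max()                        # simple per-event normalization
    return img


def pseudo_bbox(incident_row, incident_col):
    """Fixed-size pseudo bounding box around the incident cell,
    clipped to the image, as (x_min, y_min, x_max, y_max)."""
    x_min = max(incident_col - PSEUDO_BOX_HALF, 0)
    y_min = max(incident_row - PSEUDO_BOX_HALF, 0)
    x_max = min(incident_col + PSEUDO_BOX_HALF, GRID_W - 1)
    y_max = min(incident_row + PSEUDO_BOX_HALF, GRID_H - 1)
    return (x_min, y_min, x_max, y_max)


if __name__ == "__main__":
    # Toy event: a few cells fire around an assumed incident cell at (20, 31).
    idx = np.array([[20, 31], [20, 32], [21, 31], [19, 30]])
    energy = np.array([0.50, 0.20, 0.15, 0.05])
    image = deposits_to_image(idx, energy)
    box = pseudo_bbox(20, 31)
    print(image.shape, box)   # (64, 64) (27, 16, 35, 24)
```

Image-and-box pairs produced this way could then be fed to an off-the-shelf detection network; regressing the incident momentum would require an additional prediction head and a corresponding training objective, as the abstract indicates.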
DOI: 10.48550/arxiv.2408.10599