Regularizers to the rescue: fighting overfitting in deep learning-based side-channel analysis

Bibliographic Details
Published in: Journal of Cryptographic Engineering, Vol. 14, No. 4, pp. 609–629
Main Authors: Rezaeezade, Azade; Batina, Lejla
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.11.2024
Summary: Despite considerable achievements of deep learning-based side-channel analysis, overfitting remains a significant obstacle to finding optimized neural network models. This issue is not unique to the side-channel domain. Regularization techniques are popular solutions to overfitting and have long been used in various domains. At the same time, work in the side-channel domain shows only sporadic use of regularization techniques, and no systematic study has investigated their effectiveness. In this paper, we investigate the effectiveness of regularization on randomly selected models by applying four powerful and easy-to-use regularization techniques to eight combinations of datasets, leakage models, and deep learning topologies. The investigated techniques are L1, L2, dropout, and early stopping. Our results show that while all of these techniques can improve performance in many cases, L1 and L2 are the most effective. Finally, if training time matters, early stopping is the best technique.
ISSN: 2190-8508; 2190-8516
DOI: 10.1007/s13389-024-00361-5
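The four regularization techniques named in the abstract can be illustrated in a few lines. The sketch below is not from the paper; all names, penalty weights, and the patience value are illustrative assumptions. It shows the L1 and L2 penalty terms added to a loss, inverted dropout applied to activations, and a patience-based early-stopping rule over validation losses.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10)  # stand-in for a layer's weight vector

# L1 and L2 regularization: penalty terms added to the training loss
# (lam is an illustrative regularization strength, not a value from the paper)
lam = 1e-3
l1_penalty = lam * np.sum(np.abs(w))   # encourages sparse weights
l2_penalty = lam * np.sum(w ** 2)      # encourages small weights

def dropout(x, p=0.5, train=True):
    """Inverted dropout: zero each activation with probability p at train
    time and rescale survivors by 1/(1-p), so inference needs no change."""
    if not train:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def early_stop_index(val_losses, patience=3):
    """Return the epoch with the best validation loss, stopping once the
    loss has failed to improve for `patience` consecutive epochs."""
    best, best_i, wait = float("inf"), 0, 0
    for i, v in enumerate(val_losses):
        if v < best:
            best, best_i, wait = v, i, 0
        else:
            wait += 1
            if wait >= patience:
                return best_i  # keep the weights from this epoch
    return best_i
```

For example, `early_stop_index([1.0, 0.8, 0.7, 0.75, 0.76, 0.9])` halts after three non-improving epochs and keeps epoch 2, the point with the lowest validation loss.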