Pitfalls in Machine Learning-based Adversary Modeling for Hardware Systems

Bibliographic Details
Published in: 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 514-519
Main Authors: Ganji, Fatemeh; Amir, Sarah; Tajik, Shahin; Forte, Domenic; Seifert, Jean-Pierre
Format: Conference Proceeding
Language: English
Published: EDAA, 01.03.2020

Summary: The concept of the adversary model has been widely applied in the context of cryptography. When designing a cryptographic scheme or protocol, the adversary model plays a crucial role in formalizing the capabilities and limitations of potential attackers. These models further enable the designer to verify the security of the scheme or protocol under investigation. Although well established for conventional cryptanalysis attacks, adversary models for attackers who can leverage machine learning techniques have not yet been developed thoroughly. In particular, for composed hardware, which is often security-critical, the lack of such models has become increasingly noticeable in the face of advanced, machine learning-enabled attacks. This paper explores adversary models from the machine learning perspective. In this regard, we provide examples of machine learning-based attacks against hardware primitives, e.g., obfuscation schemes and hardware roots-of-trust, for which such attacks were claimed to be infeasible. We demonstrate that this claim is invalid because inaccurate adversary models have been considered in the literature.
ISSN: 1558-1101
DOI: 10.23919/DATE48585.2020.9116316