A Deep Learning Method for the Security Vulnerability Study of Feed-Forward Physical Unclonable Functions

Bibliographic Details
Published in: Arabian Journal for Science and Engineering, Vol. 49, No. 9, pp. 12291–12303
Main Authors: Alkatheiri, Mohammed Saeed; Aseeri, Ahmad O.; Zhuang, Yu
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.09.2024

Summary: Authentication is critical for the Internet of Things. The traditional approach of using cryptographic keys is subject to invasive attacks. Being unclonable even by their manufacturers, physical unclonable functions (PUFs) leverage integrated circuits' manufacturing variations to produce responses unique to individual devices, and hence hold great potential as security primitives. While physically unclonable, many PUFs have been reported to be mathematically clonable by machine-learning-based modeling methods. Feed-forward arbiter PUFs (FF PUFs) are among the PUFs with strong resistance against machine learning attacks: existing studies had broken only a very small group of FF PUFs with special loop patterns, and the vast majority of FF PUFs remained secure against all machine learning attack methods tried so far. In this paper, we introduce a neural network that can successfully attack FF PUFs with any loop pattern, with training times orders of magnitude lower than those of existing methods that attack only PUFs with the restrictive loop patterns. Experimental results show that, on the one hand, FF PUFs are not secure against attacks even with a large number of complex feed-forward loops, and are hence susceptible to attacks by response-prediction-based malicious software. On the other hand, the new approach of designing problem-tailored attack methods points to a new way of identifying PUF security risks that might be difficult to discover with general-purpose machine learning methods.
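To make the kind of modeling attack the summary describes concrete, here is a minimal sketch, not the paper's tailored network: it simulates a feed-forward arbiter PUF with the commonly used additive delay model (one hypothetical loop whose internal arbiter taps the race after stage 16 and drives challenge bit 48) and fits a generic multilayer perceptron to observed challenge-response pairs. The stage count, loop placement, CRP count, and network size are all illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_STAGES = 64          # assumed PUF length
TAP, DRIVEN = 16, 48   # hypothetical feed-forward loop: tap after stage 16, drive bit 48

# Per-stage delay parameters stand in for manufacturing variation.
alpha = rng.normal(size=N_STAGES)
beta = rng.normal(size=N_STAGES)

def ff_puf_response(challenge):
    """Evaluate the simulated FF arbiter PUF one stage at a time."""
    c = challenge.copy()
    delta, ff_bit = 0.0, 0
    for i in range(N_STAGES):
        if i == TAP:
            ff_bit = int(delta > 0)  # internal arbiter decision
        if i == DRIVEN:
            c[i] = ff_bit            # loop output replaces this challenge bit
        phi = 1 - 2 * c[i]
        delta = phi * delta + phi * alpha[i] + beta[i]
    return int(delta > 0)

def parity_features(C):
    """Standard arbiter-PUF feature map: phi_i = prod_{j >= i} (1 - 2 c_j)."""
    signs = 1 - 2 * C
    return np.flip(np.cumprod(np.flip(signs, axis=1), axis=1), axis=1)

# Collect challenge-response pairs, as a response-prediction attacker would.
C = rng.integers(0, 2, size=(40000, N_STAGES))
y = np.array([ff_puf_response(c) for c in C])
X = parity_features(C)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X_tr, y_tr)
print(f"response-prediction accuracy on held-out CRPs: {model.score(X_te, y_te):.3f}")
```

Whether such a generic network succeeds, and how quickly, depends heavily on the number and placement of the feed-forward loops; that dependence is exactly the gap the paper's problem-tailored architecture is reported to close.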
ISSN: 2193-567X, 1319-8025
EISSN: 2191-4281
DOI: 10.1007/s13369-023-08643-6