Fingerprint Presentation Attack Detection Based on Local Features Encoding for Unknown Attacks

Bibliographic Details
Published in: IEEE Access, Vol. 9, pp. 5806-5820
Main Authors: Gonzalez-Soler, Lazaro Janier; Gomez-Barrero, Marta; Chang, Leonardo; Perez-Suarez, Airel; Busch, Christoph
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021

More Information
Summary: Fingerprint-based biometric systems have experienced considerable development in recent years. In spite of their many advantages, they are still vulnerable to attack presentations (APs). Therefore, the task of determining whether a sample stems from a live subject (i.e., bona fide) or from an artificial replica is a mandatory requirement, which has recently received considerable attention. Nowadays, Presentation Attack Instruments (PAIs) can be successfully identified in most cases, provided that the materials used for their fabrication were also used to train the Presentation Attack Detection (PAD) methods. However, current PAD methods still face difficulties detecting PAIs built from unknown materials and/or unknown recipes, or acquired using different capture devices. To tackle this issue, we propose a new PAD technique based on three image representation approaches combining local and global information of the fingerprint. By transforming these representations into a common feature space, we can correctly discriminate bona fide from attack presentations in the aforementioned scenarios. The experimental evaluation of our proposal over the LivDet 2011 to 2019 databases yielded error rates that outperform the top state-of-the-art results by up to 72% in the most challenging scenarios. In addition, the best representation achieved the best results in the LivDet 2019 competition (overall accuracy of 96.17%).
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3048756
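
The summary above describes the approach only at a high level: local descriptors are encoded into a common feature space in which bona fide and attack presentations can be separated. As a purely illustrative aid, the following Python sketch shows one generic way such a pipeline can be built (flattened image patches as toy local descriptors, a MiniBatchKMeans visual codebook as the common feature space, and a LinearSVC binary classifier). All of these choices are assumptions made for brevity; they are not the representations, encoding, or classifier proposed by the authors.

import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

def extract_patch_descriptors(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Flatten non-overlapping patches as toy local descriptors (illustrative only)."""
    h, w = image.shape
    descs = [
        image[y:y + patch, x:x + patch].ravel()
        for y in range(0, h - patch + 1, patch)
        for x in range(0, w - patch + 1, patch)
    ]
    return np.asarray(descs, dtype=np.float32)

def encode_bow(descs: np.ndarray, codebook: MiniBatchKMeans) -> np.ndarray:
    """Map local descriptors into a common feature space via a visual-word histogram."""
    words = codebook.predict(descs)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)  # L1-normalised global representation

# Toy usage: random arrays stand in for fingerprint captures;
# labels 0 = bona fide, 1 = attack presentation.
rng = np.random.default_rng(0)
train_images = rng.random((20, 128, 128)).astype(np.float32)
train_labels = np.array([0, 1] * 10)

# Build the codebook (the shared feature space) from all local descriptors.
all_descs = np.vstack([extract_patch_descriptors(im) for im in train_images])
codebook = MiniBatchKMeans(n_clusters=32, random_state=0).fit(all_descs)

# Encode each image into the common space and train a bona fide/attack classifier.
X = np.vstack([encode_bow(extract_patch_descriptors(im), codebook) for im in train_images])
clf = LinearSVC().fit(X, train_labels)

The design point this sketch is meant to convey is that, once every capture is mapped into the same encoded representation, a single classifier can be applied regardless of the capture device or PAI material that produced the sample.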