INF-GAN: Generative Adversarial Network for Illumination Normalization of Finger-Vein Images

Bibliographic Details
Published in: Mathematics (Basel), Vol. 9, No. 20, p. 2613
Main Authors: Hong, Jin Seong; Choi, Jiho; Kim, Seung Gu; Owais, Muhammad; Park, Kang Ryoung
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.10.2021
Summary: When images are acquired for finger-vein recognition, nonuniform illumination frequently occurs because of varying finger thickness or uneven output from the illumination elements. Recognition performance is consequently degraded, because the features to be recognized are deformed. To address this issue, previous studies used image preprocessing methods such as grayscale normalization, or score-level fusion of multiple recognition models; these can improve performance for images with a low degree of illumination nonuniformity. However, performance cannot be improved substantially when parts of an image are saturated by severely nonuniform illumination. To overcome these drawbacks, this study proposes a generative adversarial network for the illumination normalization of finger-vein images (INF-GAN). In the INF-GAN, a one-channel image containing texture information is generated through a residual image generation block, and finger-vein texture information deformed by severely nonuniform illumination is restored, thereby improving recognition performance. In experiments on two open databases, the Hong Kong Polytechnic University finger-image database version 1 and the Shandong University homologous multimodal traits finger-vein database, the proposed INF-GAN outperformed state-of-the-art methods.
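The summary contrasts INF-GAN with conventional preprocessing baselines such as grayscale normalization. As a minimal illustrative sketch (not code from the paper; the function name and sample values are hypothetical), a min-max grayscale normalization linearly stretches pixel intensities to the full 8-bit range:

```python
# Illustrative sketch of min-max grayscale normalization, one of the
# conventional preprocessing baselines mentioned in the summary.
# An image is represented here as a list of pixel rows (8-bit intensities).

def grayscale_normalize(image, out_min=0, out_max=255):
    """Linearly stretch pixel intensities to the [out_min, out_max] range."""
    pixels = [p for row in image for p in row]
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # flat image: nothing to stretch
        return [[out_min for _ in row] for row in image]
    scale = (out_max - out_min) / (hi - lo)
    return [[round(out_min + (p - lo) * scale) for p in row] for row in image]

# Example: a dim, low-contrast 2x3 patch (hypothetical values)
patch = [[60, 70, 80],
         [65, 75, 90]]
print(grayscale_normalize(patch))  # → [[0, 85, 170], [42, 128, 255]]
```

This also illustrates the limitation the summary points out: a linear intensity stretch cannot recover texture in regions that are already saturated (clipped at 0 or 255), since the vein information there has been lost before normalization, which is the case INF-GAN is designed to handle.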
ISSN: 2227-7390
DOI: 10.3390/math9202613