Deep Feature Fusion for Iris and Periocular Biometrics on Mobile Devices

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, Vol. 13, No. 11, pp. 2897-2912
Main Authors: Zhang, Qi; Li, Haiqing; Sun, Zhenan; Tan, Tieniu
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2018

Summary: The quality of iris images captured on mobile devices is significantly degraded by hardware limitations and less constrained acquisition environments. Traditional iris recognition methods cannot achieve high identification rates with such low-quality images. To improve mobile identification performance, we develop a deep feature fusion network that exploits the complementary information present in the iris and periocular regions. The proposed method first applies maxout units within convolutional neural networks (CNNs) to generate a compact representation for each modality, and then fuses the discriminative features of the two modalities through a weighted concatenation. The parameters of the convolutional filters and the fusion weights are learned simultaneously to optimize the joint representation of iris and periocular biometrics. To promote iris recognition research on mobile devices under near-infrared (NIR) illumination, we publicly release the CASIA-Iris-Mobile-V1.0 database, which includes 11,000 NIR iris images of both eyes from 630 Asian subjects; to the best of our knowledge, it is the largest NIR mobile iris database. On the newly built CASIA-Iris-M1-S3 data set, the proposed method achieves a 0.60% equal error rate (EER) and a 2.32% false non-match rate (FNMR) at a false match rate (FMR) of 10^-5, clearly outperforming both unimodal biometrics and traditional fusion methods. Moreover, the proposed model requires far less storage and computation than general CNNs.
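
The summary names two architectural ideas: maxout units inside each modality's CNN branch, and a weighted concatenation whose fusion weights are trained jointly with the convolutional filters. The following PyTorch sketch is an illustrative reconstruction of those two ideas only, not the authors' released model; all layer sizes (48/96 channels, 256-d embeddings, input resolutions) are assumptions.

    # Minimal sketch of maxout CNN branches plus learned weighted fusion.
    import torch
    import torch.nn as nn

    class MaxoutConv2d(nn.Module):
        """Convolution followed by a maxout unit: produce out_channels * pieces
        feature maps, then keep the element-wise max over each group of
        `pieces` maps, yielding a compact representation."""
        def __init__(self, in_channels, out_channels, kernel_size, pieces=2):
            super().__init__()
            self.pieces = pieces
            self.conv = nn.Conv2d(in_channels, out_channels * pieces,
                                  kernel_size, padding=kernel_size // 2)

        def forward(self, x):
            y = self.conv(x)
            n, c, h, w = y.shape
            return y.view(n, c // self.pieces, self.pieces, h, w).max(dim=2).values

    def make_branch(embed_dim=256):
        # One compact CNN branch per modality (depth is an assumption).
        return nn.Sequential(
            MaxoutConv2d(1, 48, 3), nn.MaxPool2d(2),
            MaxoutConv2d(48, 96, 3), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(96, embed_dim))

    class WeightedFusion(nn.Module):
        """Weighted concatenation: the fusion weights are nn.Parameters, so
        they are optimized jointly with the convolutional filters."""
        def __init__(self, iris_dim, peri_dim):
            super().__init__()
            self.w_iris = nn.Parameter(torch.ones(iris_dim))
            self.w_peri = nn.Parameter(torch.ones(peri_dim))

        def forward(self, f_iris, f_peri):
            return torch.cat([self.w_iris * f_iris, self.w_peri * f_peri], dim=1)

    iris_net, peri_net = make_branch(), make_branch()
    fuse = WeightedFusion(256, 256)
    iris = torch.randn(4, 1, 64, 256)    # normalized iris strips (assumed size)
    peri = torch.randn(4, 1, 128, 128)   # periocular crops (assumed size)
    joint = fuse(iris_net(iris), peri_net(peri))
    print(joint.shape)  # torch.Size([4, 512])

Training an identification loss on the joint vector would backpropagate through both branches and the fusion weights at once, which matches the joint optimization the summary describes.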
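The reported figures (EER and FNMR at FMR = 10^-5) follow standard verification-metric definitions. The sketch below illustrates how such numbers are computed from genuine (mated) and impostor (non-mated) similarity scores; the scoring setup and score distributions are assumptions, not the paper's evaluation protocol.

    # EER and FNMR-at-fixed-FMR from score sets, using NumPy.
    import numpy as np

    def eer_and_fnmr_at_fmr(genuine, impostor, target_fmr=1e-5):
        gen = np.sort(np.asarray(genuine))
        imp = np.sort(np.asarray(impostor))
        thresholds = np.unique(np.concatenate([gen, imp]))
        # FMR(t): fraction of impostor scores >= t; FNMR(t): fraction of
        # genuine scores < t (searchsorted counts scores below each t).
        fmr = 1.0 - np.searchsorted(imp, thresholds, side="left") / imp.size
        fnmr = np.searchsorted(gen, thresholds, side="left") / gen.size
        i = np.argmin(np.abs(fmr - fnmr))      # operating point where FMR ~ FNMR
        eer = (fmr[i] + fnmr[i]) / 2.0
        ok = fmr <= target_fmr                 # thresholds meeting the target FMR
        fnmr_at_target = fnmr[ok][0] if ok.any() else 1.0
        return eer, fnmr_at_target

    rng = np.random.default_rng(0)
    gen_scores = rng.normal(0.8, 0.1, 10_000)   # mated comparisons (synthetic)
    imp_scores = rng.normal(0.2, 0.1, 200_000)  # non-mated comparisons (synthetic)
    print(eer_and_fnmr_at_fmr(gen_scores, imp_scores))

Note that estimating FMR at the 10^-5 operating point requires on the order of 10^5 or more impostor comparisons, hence the large non-mated set in the demo.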
ISSN: 1556-6013
EISSN: 1556-6021
DOI: 10.1109/TIFS.2018.2833033