Efficient deep CNN-based gender classification using iris wavelet scattering


Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 82, No. 12, pp. 19041–19065
Main Authors: Saeed Aryanmehr, Farsad Zamani Boroujeni
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.05.2023

Summary: Recognition of gender from iris images can be considered a texture classification task in which a classification model discriminates iris textures of male and female subjects. Although many researchers have proposed efficient iris texture classification methods that rely on deep features or employ Fourier and wavelet transforms, several issues have still been reported in the literature. On the one hand, it is difficult to discriminate the details of iris textures using the features extracted by traditional frequency-domain transforms. On the other hand, under different imaging conditions, small changes in pupil diameter or head rotations result in translation and deformation of the iris texture and inaccurate classification results. To overcome these challenges, the current study proposes an approach that employs a feature extraction method based on a wavelet scattering transform, comparable with deep features extracted from raw image data using convolutional neural networks. In the proposed method, the scattering coefficients are extracted from each RGB channel, followed by principal component analysis (PCA) to reduce the dimensionality of the extracted features. These features are then used to train a convolutional neural network. The current paper compares the deep feature vectors extracted from raw RGB images against features obtained from the wavelet scattering transform. This comparison is made according to the performance results of a fine-tuned multi-layer perceptron (MLP) model trained on both feature sets. Experiments conducted on the CVBL and UTIRIS databases indicate that using a wavelet scattering transform and extracting second-order features can significantly enhance the performance of iris-based gender classification in comparison to deep features obtained by applying a deep neural network to raw pixel information.
Moreover, our feature extraction method provides learnable features, thus eliminating the need for an additional training step to obtain deep features, as performed in the most recent state-of-the-art methods.
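The pipeline summarized above (per-channel scattering coefficients → PCA → classifier) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the coefficient count, the target dimensionality, and the random stand-in for the scattering output are all assumptions, and the PCA is done directly with NumPy's SVD rather than a library routine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for wavelet scattering coefficients:
# 100 iris images, 3 RGB channels, 417 coefficients per channel.
# In the paper these would come from a 2-D wavelet scattering transform.
n_images, n_channels, n_coeffs = 100, 3, 417
scattering = rng.normal(size=(n_images, n_channels, n_coeffs))

# Concatenate the per-channel coefficient vectors into one feature vector.
features = scattering.reshape(n_images, n_channels * n_coeffs)

# PCA via SVD: center the data, decompose, project onto the top components.
k = 32  # assumed reduced dimensionality
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:k].T  # (n_images, k) matrix fed to the classifier

print(reduced.shape)  # (100, 32)
```

The reduced feature matrix would then be passed to the MLP (or CNN) classifier described in the abstract; the scattering front end itself is fixed, so only the classifier requires training.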
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-022-14062-w