Class-Variant Margin Normalized Softmax Loss for Deep Face Recognition

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 32, No. 10, pp. 4742-4747
Main Authors: Zhang, Wanping; Chen, Yongru; Yang, Wenming; Wang, Guijin; Xue, Jing-Hao; Liao, Qingmin
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2021
Summary: In deep face recognition, the commonly used softmax loss and its recently proposed variants are not yet sufficiently effective at handling the class-imbalance and softmax-saturation issues that arise during training while extracting discriminative features. In this brief, to address both issues, we propose a class-variant margin (CVM) normalized softmax loss, which introduces a true-class margin and a false-class margin into the cosine space of the angle between the feature vector and the class-weight vector. The true-class margin alleviates the class-imbalance problem, and the false-class margin postpones the early individual saturation of softmax. With a negligible increase in computational complexity during training, the new loss function is easy to implement in common deep learning frameworks. Comprehensive experiments on the LFW, YTF, and MegaFace protocols demonstrate the effectiveness of the proposed CVM loss function.
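The abstract does not give the exact margin formulation, but the described mechanism (a per-class margin subtracted from the true-class cosine and a margin added to false-class cosines, on normalized features and weights) can be sketched as follows. This is a minimal illustrative implementation in plain Python; the scale `s`, the default margin values, and the function name `cvm_softmax_loss` are assumptions for illustration, not the authors' actual code.

```python
import math

def cvm_softmax_loss(feature, weights, label, s=30.0,
                     true_margins=None, false_margin=0.0):
    """Hedged sketch of a class-variant margin (CVM) normalized softmax loss.

    feature      -- embedding vector (list of floats)
    weights      -- one weight vector per class
    label        -- index of the true class
    true_margins -- per-class margins subtracted from the true-class cosine;
                    in the paper these vary with the class (hence "class-variant"),
                    e.g. larger for under-represented classes (assumption here)
    false_margin -- margin added to false-class cosines, delaying the early
                    saturation of softmax for well-separated samples
    """
    def l2norm(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]

    # Cosine of the angle between the normalized feature and each
    # normalized class-weight vector.
    f = l2norm(feature)
    cosines = [sum(a * b for a, b in zip(f, l2norm(w))) for w in weights]

    if true_margins is None:
        true_margins = [0.35] * len(weights)  # illustrative default

    logits = []
    for j, c in enumerate(cosines):
        if j == label:
            logits.append(s * (c - true_margins[j]))  # true-class margin
        else:
            logits.append(s * (c + false_margin))     # false-class margin

    # Numerically stable cross-entropy over the margin-adjusted logits.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[label]
```

As expected for a margin-based loss, enlarging the true-class margin (or the false-class margin) increases the loss for a correctly classified sample, which keeps the gradient signal alive where plain softmax would already have saturated.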
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2020.3017528