Feature-level fusion in personal identification

Bibliographic Details
Published in: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Vol. 1, pp. 468-473
Main Authors: Yongsheng Gao, M. Maggs
Format: Conference Proceeding
Language: English
Published: IEEE, 2005
Summary: Existing studies of multi-modal and multi-view personal identification have focused on combining the outputs of multiple classifiers at the decision level. In this study, we investigated fusion at the feature level to combine multiple views and modalities for personal identification. A new similarity measure is proposed that integrates multiple 2D view features representing the visual identity of a 3D object seen from different viewpoints and captured by different sensors. Robustness to non-rigid distortions is achieved through a proximity-correspondence scheme in the similarity computation. The feasibility and capability of the proposed technique for personal identification were evaluated on multi-view human faces and palmprints. This research demonstrates that feature-level fusion provides a new way to combine multiple modalities and views for personal identification.
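The summary contrasts decision-level fusion (combining classifier outputs) with feature-level fusion (combining the feature vectors themselves before matching). A minimal sketch of that idea is below; the function names, the simple concatenation fusion, and the window-based proximity matching are illustrative assumptions, not the paper's actual similarity measure.

```python
def fuse_features(views):
    """Feature-level fusion: concatenate the per-view (or per-modality)
    feature vectors into one identity descriptor, so that matching is done
    on a single fused representation rather than on per-view decisions.
    Illustrative only -- not the paper's exact fusion rule."""
    fused = []
    for view in views:
        fused.extend(view)
    return fused


def proximity_similarity(a, b, window=1):
    """Similarity between two fused descriptors that tolerates small
    non-rigid distortions: each element of `a` is compared against the
    closest element of `b` within +/- `window` positions, rather than
    strictly position-to-position. Returns a score in (0, 1], with 1.0
    for a perfect proximity match. Hypothetical stand-in for the
    proximity-correspondence scheme the abstract mentions."""
    n = min(len(a), len(b))
    total = 0.0
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        # best match for a[i] within its local neighbourhood of b
        total += min(abs(a[i] - b[j]) for j in range(lo, hi))
    return 1.0 / (1.0 + total / n)


# Usage: fuse two "views" of one person, then compare against a probe
# whose neighbouring features are swapped (a small non-rigid distortion).
gallery = fuse_features([[0.1, 0.2], [0.3]])   # e.g. face view + palmprint
probe = [0.2, 0.1, 0.3]                         # locally perturbed version
score = proximity_similarity(gallery, probe)
```

Because the perturbation only swaps adjacent elements, the window of size 1 absorbs it and the score stays at 1.0, whereas a strict element-wise distance would be penalised; that tolerance is the point of proximity correspondence.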
ISBN: 0769523722, 9780769523729
ISSN: 1063-6919
DOI: 10.1109/CVPR.2005.159