An effective framework based on hybrid learning and kernel principal component analysis for face manipulation detection


Bibliographic Details
Published in: Signal, Image and Video Processing, Vol. 18, no. 5, pp. 4811-4820
Main Authors: Thakur, Rahul; Rohilla, Rajesh
Format: Journal Article
Language: English
Published: London: Springer London, 01.07.2024
Publisher: Springer Nature B.V.
ISSN: 1863-1703, 1863-1711
DOI: 10.1007/s11760-024-03117-0

Summary: Face manipulation is the process of modifying facial features in videos or images to produce a variety of artistic or deceptive effects. Face manipulation detection looks for altered or falsified visual media in order to differentiate between real and fake facial photographs or videos. The intricacy of the techniques used makes face manipulation difficult to detect, particularly in the context of technologies like DeepFake. This paper presents an efficient framework based on Hybrid Learning and Kernel Principal Component Analysis (KPCA) to extract more extensive and refined face-manipulation attributes. The proposed method uses the EfficientNetV2-L model for feature extraction, followed by KPCA for feature dimensionality reduction, to distinguish between real and fake facial images. The proposed method is robust to various facial manipulation techniques such as identity swap, expression swap, attribute-based manipulation, and entirely synthesized faces. In this work, data augmentation is used to address the class imbalance present in the dataset. The proposed method has a shorter execution time while achieving an accuracy of 99.3% and an F1 score of 0.98 on the Diverse Fake Face Dataset (DFFD).
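The pipeline described in the summary (deep feature extraction followed by KPCA and a binary real/fake classifier) can be sketched as below. This is a minimal illustration only: the paper uses EfficientNetV2-L embeddings from the DFFD dataset, whereas here synthetic random features and labels stand in for them, and the classifier choice is an assumption, not the authors' exact setup.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for EfficientNetV2-L embeddings: 200 face images x 1280-dim features.
# (In the paper these would come from the CNN backbone, not random noise.)
X = rng.normal(size=(200, 1280))
y = rng.integers(0, 2, size=200)  # 0 = real, 1 = fake (synthetic labels)

# KPCA with an RBF kernel projects the high-dimensional deep features
# onto a compact nonlinear subspace, as the framework proposes.
kpca = KernelPCA(n_components=64, kernel="rbf")
X_reduced = kpca.fit_transform(X)

# A simple binary classifier then separates real from fake in the reduced space.
clf = LogisticRegression(max_iter=1000).fit(X_reduced, y)
print(X_reduced.shape)
```

Reducing dimensionality before classification both cuts inference cost (consistent with the reported shorter execution time) and discards noisy feature directions.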