Robust Multi-Classifier for Camera Model Identification Based on Convolution Neural Network
Published in: IEEE Access, Vol. 6, pp. 24973–24982
Main Authors:
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2018
Summary: With the adoption of data-driven convolutional neural network (CNN)-based algorithms in digital image forensics, novel supervised classifiers have emerged that achieve nearly perfect detection rates compared with conventional supervised mechanisms. This paper investigates a robust multi-classifier for one of the central image forensic problems, source camera identification. Its main contributions are threefold: 1) by analyzing the image features that characterize different source camera models, an improved CNN architecture is designed that extracts these characteristics adaptively and automatically, instead of relying on hand-crafted features; 2) the proposed efficient CNN-based multi-classifier can simultaneously classify test images acquired by a large set of different camera models, rather than relying on a binary classifier; and 3) numerical experiments show that the proposed multi-classifier effectively distinguishes different camera models, achieving an average accuracy of nearly 100% through majority voting and outperforming several prior arts; its robustness is further verified on images attacked by post-processing such as JPEG compression and noise addition.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2018.2832066
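As a rough illustration of the pipeline described in the summary above (a patch-level CNN multi-classifier whose per-patch predictions are fused by majority voting into a single image-level camera-model label), the following PyTorch sketch shows how such a classifier and voting step could be wired together. The layer layout, patch size, and number of candidate camera models are assumptions made for illustration; this is not the authors' published architecture.

```python
# Hypothetical sketch, not the paper's released code: a small CNN classifies
# fixed-size image patches into one of NUM_MODELS camera models, and a
# majority vote over the patch predictions yields the image-level label.
import torch
import torch.nn as nn

NUM_MODELS = 10   # assumed number of candidate camera models
PATCH_SIZE = 64   # assumed size of the square patches fed to the network


class PatchCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_MODELS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)       # (batch, 128)
        return self.classifier(feats)             # raw logits per patch


def predict_image(model: nn.Module, patches: torch.Tensor) -> int:
    """Majority vote over the per-patch predictions of one image.

    patches: tensor of shape (num_patches, 3, PATCH_SIZE, PATCH_SIZE)
    returns: index of the camera model receiving the most patch votes
    """
    model.eval()
    with torch.no_grad():
        votes = model(patches).argmax(dim=1)      # one vote per patch
    return int(torch.bincount(votes, minlength=NUM_MODELS).argmax())


if __name__ == "__main__":
    cnn = PatchCNN()
    # 25 random patches stand in for the patches cropped from one test image.
    dummy_patches = torch.randn(25, 3, PATCH_SIZE, PATCH_SIZE)
    print("predicted camera model index:", predict_image(cnn, dummy_patches))
```

The voting step is what lets per-patch accuracy well below 100% still yield the near-perfect image-level accuracy reported in the summary: a single image contributes many patches, so occasional patch misclassifications are outvoted.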