Covert photo classification by deep convolutional neural networks
Published in | Machine Vision and Applications, Vol. 28, No. 5-6, pp. 623-634
Main Authors |
Format | Journal Article
Language | English
Published | Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.08.2017
Summary | Image/video capture devices such as camera phones and surveillance cameras have become ubiquitous, providing convenience and improving security in modern life. On the other hand, the pervasiveness of such devices raises growing privacy concerns. In this paper, we concentrate on a new visual privacy protection problem: covert photo classification. Covert photography means that the subject being photographed is purposely kept unaware that he or she is being photographed. A covert photo often contains information that is inherently sensitive and private to a person; if such photos are released to the public without approval, serious negative consequences may follow. We explore deep convolutional neural networks (DCNNs) to discover the intricate structures of covert photos and automatically learn representations for covert photo classification. Experimental results demonstrate that fully end-to-end trained DCNN-based architectures surpass previous experience-dependent, hand-engineered feature methods in covert photo classification. The fusion of three DCNN-based architectures (AlexNet, VGGS, and GoogleNet) shows enhanced performance over the individual networks on the Covert-2500 dataset and achieves an average classification rate (1-EER) of 0.925, significantly outperforming the 0.8940 obtained by hand-engineered feature methods.
ISSN | 0932-8092, 1432-1769
DOI | 10.1007/s00138-017-0859-x
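The summary above reports results in terms of an "average classification rate (1-EER)" for a fusion of three networks. The snippet below is a minimal sketch of how score-level fusion and the 1-EER metric might be computed; it is not the paper's code. The per-network scores are synthetic placeholders (in the paper they would come from AlexNet, VGG-S, and GoogLeNet fine-tuned on Covert-2500), averaging is only one common fusion choice since the abstract does not specify the scheme, and the helper names are hypothetical.

```python
# Sketch only: synthetic scores, assumed averaging fusion, hypothetical helpers.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Synthetic ground-truth labels: 1 = covert photo, 0 = non-covert.
y_true = rng.integers(0, 2, size=500)

def fake_network_scores(labels, noise):
    # Stand-in for a network's "probability of covert" output.
    return np.clip(labels + rng.normal(0.0, noise, size=labels.shape), 0.0, 1.0)

scores_alexnet = fake_network_scores(y_true, 0.40)
scores_vggs = fake_network_scores(y_true, 0.35)
scores_googlenet = fake_network_scores(y_true, 0.30)

# Score-level fusion: average the three networks' scores.
fused = (scores_alexnet + scores_vggs + scores_googlenet) / 3.0

def one_minus_eer(labels, scores):
    """Return 1 - EER, where the EER is the operating point on the ROC curve
    at which the false-positive rate equals the false-negative rate."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fpr - fnr))  # threshold where FPR is closest to FNR
    return 1.0 - (fpr[idx] + fnr[idx]) / 2.0

for name, s in [("AlexNet", scores_alexnet), ("VGG-S", scores_vggs),
                ("GoogLeNet", scores_googlenet), ("fusion", fused)]:
    print(f"{name:10s} 1 - EER = {one_minus_eer(y_true, s):.3f}")
```

With real network outputs in place of the synthetic scores, the same routine would produce the per-network and fused 1-EER figures of the kind quoted in the summary.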