Hiding Data and Detecting Hidden Data in Raw Video Components Using SIFT Points
Published in | Tehnički vjesnik Vol. 27; no. 6; pp. 1741 - 1747 |
---|---|
Main Authors | |
Format | Journal Article |
Language | English |
Published | Slavonski Brod: University of Osijek, 01.12.2020 (Josipa Jurja Strossmayer University of Osijek: Faculty of Mechanical Engineering in Slavonski Brod, Faculty of Electrical Engineering in Osijek, Faculty of Civil Engineering in Osijek) |
Summary: | Steganography is the science of hiding data in a medium, whereas steganalysis comprises attacks that try to find the hidden data in a cover medium. Since hiding data in a text file would disturb the coherence of the text or make it suspicious, systematically changing the pixels of an image is the more common method. This process is performed on pixels that are spatially (and/or temporally, for video) distant from each other so that a viewer's eye is deceived. Online media are subject to modifications such as compression, resolution changes, and visual edits, which makes Scale Invariant Feature Transform (SIFT) points appropriate candidates for steganography. The paper has two aims: the first is to propose a method that uses the SIFT points of a video for steganography; the second is to use Convolutional Neural Networks (CNN) as a steganalysis tool to detect the suspicious pixels of a video. The results indicate that the proposed steganography method is effective because it yields a higher peak signal-to-noise ratio (PSNR = 95.41 dB) than other techniques described in the cybersecurity literature, and the CNN cannot detect the hidden data with much success, achieving only a 52% accuracy rate. |
---|---|
ISSN: | 1330-3651; 1848-6339 |
DOI: | 10.17559/TV-20190404155145 |
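The embedding-and-evaluation idea summarized in the abstract can be sketched in a few lines. This is not the authors' exact method: it embeds message bits in the least-significant bit of pixels at feature-point locations and then measures the distortion with PSNR. In a real implementation the points would come from a SIFT detector (e.g. OpenCV's `cv2.SIFT_create()`); the fixed sample points and synthetic frame below are assumptions made so the sketch stays self-contained.

```python
# Sketch only: LSB embedding at feature-point locations plus PSNR.
# Real SIFT keypoints (e.g. from OpenCV) are replaced here by fixed
# sample coordinates so the example runs without extra dependencies.
import numpy as np

def embed_lsb(frame, points, bits):
    """Write one message bit into the LSB of the pixel at each (row, col) point."""
    stego = frame.copy()
    for (r, c), b in zip(points, bits):
        stego[r, c] = (stego[r, c] & 0xFE) | b  # clear LSB, then set it to b
    return stego

def extract_lsb(frame, points):
    """Read the LSBs back from the same points, in the same order."""
    return [int(frame[r, c] & 1) for (r, c) in points]

def psnr(original, modified):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) - modified.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)

frame = np.full((64, 64), 128, dtype=np.uint8)   # stand-in grayscale video frame
points = [(5, 7), (20, 33), (40, 12), (55, 50)]  # stand-in "SIFT" locations
bits = [1, 0, 1, 1]                              # message to hide

stego = embed_lsb(frame, points, bits)
assert extract_lsb(stego, points) == bits
print(f"PSNR = {psnr(frame, stego):.2f} dB")
```

Because only a handful of spatially distant pixels change by at most one intensity level, the mean squared error is tiny and the PSNR is very high, which is the same quality argument the abstract makes for the proposed method.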