Deep Template-Based Watermarking
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 31, No. 4, pp. 1436-1451
Main Authors:
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2021
Summary: Traditional watermarking algorithms have been extensively studied. As an important class of watermarking schemes, template-based approaches achieve a very high embedding rate. In such schemes, the message is represented by specially designed templates, and the embedding process adds these templates to the host image. To resist potential distortions, the templates must contain special statistical features so that they can be recovered at the extracting side. In existing methods, however, most of these features are handcrafted and too simple, so they are not robust against severe distortions unless very strong, visually obvious templates are used. Inspired by the powerful feature-learning capacity of deep neural networks, we propose the first deep template-based watermarking algorithm in this paper. Specifically, at the embedding side, we design two new templates for message embedding and locating, exploiting special properties of the human visual system: insensitivity to specific chrominance components, the proximity principle, and the oblique effect. At the extracting side, we propose a novel two-stage deep neural network, which consists of an auxiliary enhancing sub-network and a classification sub-network. Thanks to the power of deep neural networks, our method achieves both digital-editing resilience and camera-shooting resilience in typical application scenarios. Through extensive experiments, we demonstrate that the proposed method achieves much better robustness than existing methods while preserving the original visual quality.
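As an illustration of the additive, template-based embedding described in the summary, here is a minimal, non-blind sketch. Everything here is an assumption for illustration only: the function names, the bipolar pseudo-random template, the block layout, and the correlation-based extractor are not the paper's actual templates or its deep-network extractor, which operates on a chrominance channel and survives real-world distortions.

```python
import numpy as np

def embed_additive(host, template, bits, strength=2.0):
    """Embed message bits by adding a signed template to host blocks.

    Each bit selects the sign of the template pattern, which is added
    to one non-overlapping block of the host image channel.
    """
    out = host.astype(np.float64).copy()
    bh, bw = template.shape
    n_cols = host.shape[1] // bw
    for i, bit in enumerate(bits):
        r, c = divmod(i, n_cols)
        sign = 1.0 if bit else -1.0
        out[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw] += sign * strength * template
    return out

def extract_additive(watermarked, host, template, n_bits):
    """Recover bits by correlating each block's residual with the template."""
    diff = watermarked.astype(np.float64) - host.astype(np.float64)
    bh, bw = template.shape
    n_cols = host.shape[1] // bw
    bits = []
    for i in range(n_bits):
        r, c = divmod(i, n_cols)
        block = diff[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
        # Positive correlation with the template means bit 1, negative means 0.
        bits.append(int(np.sum(block * template) > 0))
    return bits

rng = np.random.default_rng(0)
host = rng.uniform(0, 255, size=(16, 16))          # stand-in for one image channel
template = rng.choice([-1.0, 1.0], size=(8, 8))    # hypothetical bipolar template
message = [1, 0, 1, 0]
watermarked = embed_additive(host, template, message)
assert extract_additive(watermarked, host, template, 4) == message
```

The correlation step is the role that handcrafted statistical features play in earlier template-based schemes; the paper replaces this fragile decoding stage with a learned two-stage network so that extraction survives editing and camera-shooting distortions without access to the original host.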
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2020.3009349