Spatiotemporal attention-based real-time video watermarking
Published in: Data mining and knowledge discovery, Vol. 39, no. 5, p. 62
Main Authors: , , , , ,
Format: Journal Article
Language: English
Published: New York: Springer US, 01.09.2025 (Springer Nature B.V.)
ISSN: 1384-5810, 1573-756X
DOI: 10.1007/s10618-025-01129-z
Summary: As streaming media becomes prevalent, the demand for real-time video copyright protection has increased. Digital watermarking, a common copyright protection technique, has been widely used for copyright validation across various media. However, most existing video watermarking schemes follow the paradigm of image watermarking, focusing mainly on the impact of watermark embedding on visual perception and on robustness under channel transmission, while neglecting efficiency. To efficiently protect the digital rights of streaming media, this article proposes an Efficient deep video Watermarking model based on a Spatiotemporal Attention mechanism and patch sampling (EWSA). The spatiotemporal attention mechanism improves watermark imperceptibility by embedding the watermark into textured, perceptually insensitive regions, and embedding efficiency is improved by sampling patches of video frames rather than embedding the watermark in entire frames. Experiments on three datasets, using goal-oriented three-stage training, validate the effectiveness of the proposed EWSA, which achieves an embedding speed approximately … times faster than other deep watermarking methods.
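The core idea in the summary — scoring regions of a frame by texture and embedding watermark bits only into the highest-scoring patches — can be illustrated with a minimal sketch. This is not the paper's EWSA model: the learned spatiotemporal attention is replaced here by a simple per-patch variance score, and the embedding is a plain additive perturbation; the function names and parameters (`texture_map`, `embed`, `patch`, `strength`) are illustrative assumptions.

```python
import numpy as np

def texture_map(frame, patch=8):
    """Per-patch local variance, as a crude stand-in for a learned
    spatiotemporal attention map (higher variance = more textured)."""
    h, w = frame.shape
    ph, pw = h // patch, w // patch
    blocks = frame[:ph * patch, :pw * patch].reshape(ph, patch, pw, patch)
    return blocks.var(axis=(1, 3))  # shape (ph, pw)

def embed(frame, bits, patch=8, strength=2.0):
    """Embed one bit per selected patch, touching only the k most
    textured patches instead of the entire frame."""
    out = frame.astype(np.float32).copy()
    scores = texture_map(out, patch)
    k = len(bits)
    # pick the k most textured (least perceptually sensitive) patches
    idx = np.argsort(scores.ravel())[::-1][:k]
    for bit, flat in zip(bits, idx):
        r, c = divmod(flat, scores.shape[1])
        sl = out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
        sl += strength if bit else -strength  # additive bit embedding
    return out
```

Because only `len(bits)` patches are modified, the per-frame embedding cost scales with the payload size rather than the frame area, which is the efficiency argument the abstract makes for patch sampling.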