Deep D2C-Net: deep learning-based display-to-camera communications
Published in | Optics Express, Vol. 29, No. 8, pp. 11494–11511
Main Authors |
Format | Journal Article
Language | English
Published | United States, 12.04.2021
Summary | In this paper, we propose Deep D2C-Net, a novel display-to-camera (D2C) communications technique that uses deep convolutional neural networks (DCNNs) to embed data in, and extract data from, images. The proposed technique consists of fully end-to-end encoding and decoding networks, which respectively produce high-quality data-embedded images and enable robust data acquisition in the presence of an optical wireless channel. For encoding, Hybrid layers are introduced in which the concurrent feature maps of the intended data and cover images are concatenated in a feed-forward fashion; for decoding, a simple convolutional neural network (CNN) is used. We conducted experiments in a real-world environment using a smartphone camera and a digital display, varying parameters such as transmission distance, capture angle, display brightness, and camera resolution. Experimental results show that Deep D2C-Net outperforms existing state-of-the-art algorithms in terms of peak signal-to-noise ratio (PSNR) and bit error rate (BER), while the data-embedded image displayed on the screen retains high visual quality to the human eye.
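The summary above describes learned DCNN encoding and decoding networks. As a toy illustration of the underlying D2C embedding idea only (not the paper's Deep D2C-Net, and assuming a noiseless channel and illustrative names and parameters), data bits can be hidden as small pixel perturbations in a cover image and recovered by comparison with the original:

```python
# Toy display-to-camera (D2C) embedding sketch. Unlike Deep D2C-Net,
# there is no learned encoder/decoder here: each data bit is embedded
# as a small +/- delta on one cover-image pixel and recovered by
# comparing the received image against the known cover. The `delta`
# value and function names are illustrative assumptions.

def embed(cover, bits, delta=4):
    """Return a data-embedded copy of `cover` (flat list of 0-255 pixels)."""
    stego = list(cover)
    for i, b in enumerate(bits):
        # Push the pixel up for a 1-bit, down for a 0-bit, clamped to 0..255.
        stego[i] = max(0, min(255, cover[i] + (delta if b else -delta)))
    return stego

def extract(cover, stego, n_bits):
    """Recover bits from the sign of each perturbation (noiseless channel)."""
    return [1 if stego[i] >= cover[i] else 0 for i in range(n_bits)]

cover = [128, 200, 3, 252, 64, 90, 17, 130]
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, bits)
assert extract(cover, stego, len(bits)) == bits  # BER = 0 on a clean channel
```

The paper's contribution is precisely to replace this fragile hand-crafted scheme with end-to-end trained networks that survive a real optical wireless channel (distance, angle, brightness, resolution), where a fixed-delta decoder would incur a high BER.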
ISSN | 1094-4087
DOI | 10.1364/OE.422591