Generative Model Adversarial Training for Deep Compressed Sensing
| Published in | arXiv.org |
|---|---|
| Format | Paper |
| Language | English |
| Published | Ithaca: Cornell University Library, arXiv.org, 20.06.2021 |
Summary: Deep compressed sensing assumes the data has a sparse representation in a latent space, i.e., it is intrinsically low-dimensional. The original data is assumed to be mapped from a low-dimensional space through a low-to-high-dimensional generator. In this work, we propose how to design such a low-to-high-dimensional deep-learning-based generator suited to compressed sensing while remaining robust to universal adversarial perturbations in the latent domain. We also justify why the noise is modeled in the latent space. The work is further supported by a theoretical analysis of the robustness of the trained generator to adversarial perturbations. Experiments on real-world datasets are provided to substantiate the efficacy of the proposed *generative model adversarial training for deep compressed sensing*.
ISSN: 2331-8422
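The summary describes recovering high-dimensional data through a low-to-high-dimensional generator and reasoning about adversarial perturbations in the latent domain. Below is a minimal, hypothetical sketch of that general setup, not code from the paper: a fixed toy MLP stands in for the learned generator `G`, a Gaussian matrix `A` plays the role of the sensing operator, the latent code is recovered by minimizing ||A G(z) − y||², and a small latent perturbation probes the generator's sensitivity. All names, dimensions, and the untrained generator are illustrative assumptions.

```python
# Sketch of compressed sensing with a generative prior and a latent-space
# perturbation check. The generator is an untrained toy MLP, used only to
# illustrate the optimization; it is not the trained generator from the paper.
import torch

torch.manual_seed(0)

latent_dim, data_dim, num_meas = 8, 64, 20

# Toy "generator" G: low-dimensional latent code -> high-dimensional signal.
G = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, data_dim),
)
for p in G.parameters():
    p.requires_grad_(False)  # generator stays fixed during recovery

A = torch.randn(num_meas, data_dim) / num_meas ** 0.5  # Gaussian sensing matrix
z_true = torch.randn(latent_dim)
x_true = G(z_true)
y = A @ x_true  # noiseless measurements, for simplicity

# Recover the latent code by minimizing ||A G(z) - y||^2 over z.
z = torch.zeros(latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.sum((A @ G(z) - y) ** 2)
    loss.backward()
    opt.step()

x_hat = G(z.detach())
print("reconstruction error:", torch.norm(x_hat - x_true).item())

# Probe sensitivity to a perturbation in the *latent* space, the domain in
# which the summary above considers adversarial noise.
delta = 0.1 * torch.randn(latent_dim)
print("output change under latent perturbation:",
      torch.norm(G(z.detach() + delta) - x_hat).item())
```

The second print gives a rough, empirical view of the quantity the paper analyzes theoretically: how much the generator's output can move when its latent input is perturbed.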