On Generalization Bounds for Deep Compound Gaussian Neural Networks

Bibliographic Details
Published in: arXiv.org
Main Authors: Lyons, Carter; Raj, Raghu G; Cheney, Margaret
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 20.02.2024
Summary: Algorithm unfolding, or unrolling, is the technique of constructing a deep neural network (DNN) from an iterative algorithm. Unrolled DNNs often provide better interpretability than standard DNNs, along with superior empirical performance in signal estimation tasks. An important theoretical question, which has only recently received attention, is the development of generalization error bounds for unrolled DNNs. These bounds deliver theoretical and practical insight into the performance of a DNN on empirical datasets that are distinct from, but sampled from, the probability density generating the DNN training data. In this paper, we develop novel generalization error bounds for a class of unrolled DNNs informed by a compound Gaussian prior. These compound Gaussian networks have been shown to outperform comparable standard and unfolded deep neural networks in compressive sensing and tomographic imaging problems. The generalization error bound is formulated by bounding the Rademacher complexity of the class of compound Gaussian network estimates with Dudley's integral. Under realistic conditions, we show that, at worst, the generalization error scales as \(\mathcal{O}(n\sqrt{\ln(n)})\) in the signal dimension \(n\) and as \(\mathcal{O}(\text{Network Size}^{3/2})\) in the network size.
ISSN: 2331-8422
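
For readers unfamiliar with the unfolding technique described in the summary, the sketch below illustrates the generic idea on a toy sparse-recovery problem: each iteration of ISTA becomes one layer of a fixed-depth network. This is a minimal, hypothetical illustration of unrolling in general, not the paper's compound Gaussian network; the function names, the threshold tau, the depth num_layers, and the toy problem dimensions are all assumptions made for this example.

import numpy as np

def soft_threshold(x, tau):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(y, A, num_layers=200, tau=0.05):
    # Run ISTA for a fixed number of iterations ("layers"). Each update
    #   x <- soft_threshold(x - eta * A^T (A x - y), eta * tau)
    # corresponds to one layer; in a trained unrolled network, eta and tau
    # (and possibly the linear operators) would be learnable per layer.
    eta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):  # fixed depth = number of unrolled iterations
        x = soft_threshold(x - eta * A.T @ (A @ x - y), eta * tau)
    return x

# Toy compressive-sensing instance: recover a 5-sparse x from y = A x.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
x_hat = unrolled_ista(y, A)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

Once the per-layer parameters of such a network are trained on sampled signal pairs, generalization bounds of the kind developed in the paper quantify how the network's error on new data drawn from the same distribution tracks its empirical training error.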