Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case
We analyze the learning problem of one-hidden-layer nonoverlapping convolutional neural networks with the rectified linear unit (ReLU) activation function from the perspective of model estimation. The training outputs are assumed to be generated by the neural network with the unknown ground-truth pa...
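The model class named in the abstract, a one-hidden-layer nonoverlapping CNN with ReLU activation, can be sketched in a few lines: the input is split into nonoverlapping patches that all share one filter, and a linear second layer combines the ReLU responses. This is a minimal illustrative sketch, not the paper's implementation; the function and variable names (`one_hidden_layer_cnn`, `w`, `v`) are our own.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def one_hidden_layer_cnn(x, w, v):
    """Forward pass of a one-hidden-layer nonoverlapping CNN.

    The input x (length d = k * p) is split into k nonoverlapping
    patches of size p; every patch is filtered by the shared
    first-layer weight vector w, passed through ReLU, and the hidden
    outputs are combined linearly by the second-layer weights v.
    """
    p = w.shape[0]
    patches = x.reshape(-1, p)   # k nonoverlapping patches of size p
    hidden = relu(patches @ w)   # shared filter + ReLU activation
    return hidden @ v            # linear output layer

# Toy example: input dimension d = 6, patch size p = 2, so k = 3 patches.
x = np.array([1.0, -2.0, 3.0, 0.5, -1.0, 2.0])
w = np.array([1.0, 1.0])        # shared first-layer filter
v = np.array([1.0, 1.0, 1.0])   # second-layer weights
y = one_hidden_layer_cnn(x, w, v)
```

In the model-estimation setting the abstract describes, training labels would be generated by such a network with unknown ground-truth `w` and `v`, and learning amounts to recovering those parameters from input/output pairs.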
| Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 32, No. 6, pp. 2622-2635 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.06.2021 |