Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case

We analyze the learning problem of one-hidden-layer nonoverlapping convolutional neural networks with the rectified linear unit (ReLU) activation function from the perspective of model estimation. The training outputs are assumed to be generated by the neural network with the unknown ground-truth pa...
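The model class described above can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the authors' code) of a one-hidden-layer nonoverlapping CNN with ReLU in the teacher-student setting the abstract describes: a shared filter is applied to nonoverlapping input patches, and training labels are generated by an unknown ground-truth filter `w_star`. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

def cnn_forward(X, w):
    """One-hidden-layer nonoverlapping CNN with ReLU activation.

    X: (n, d) inputs; each row is split into d // k nonoverlapping
    patches of length k, where k = len(w). The shared filter w is
    applied to every patch, passed through ReLU, and the hidden
    units are averaged to give one scalar output per input.
    """
    n, d = X.shape
    k = len(w)
    patches = X.reshape(n, d // k, k)       # nonoverlapping patches
    hidden = np.maximum(patches @ w, 0.0)   # ReLU(w . patch)
    return hidden.mean(axis=1)              # average pooling

# Teacher-student data generation: outputs come from an unknown
# ground-truth filter, as assumed in the abstract.
rng = np.random.default_rng(0)
d, k, n = 12, 3, 200
w_star = rng.standard_normal(k)             # ground-truth parameters
X = rng.standard_normal((n, d))
y = cnn_forward(X, w_star)                  # noiseless training outputs
```

Model estimation then means recovering `w_star` from the pairs `(X, y)`, e.g. by gradient descent on the squared loss.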

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 32, No. 6, pp. 2622-2635
Main Authors: Zhang, Shuai; Wang, Meng; Xiong, Jinjun; Liu, Sijia; Chen, Pin-Yu
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.06.2021