Reinforcing learning in Deep Belief Networks through nature-inspired optimization

Bibliographic Details
Published in: Applied Soft Computing, Vol. 108, p. 107466
Main Authors: Roder, Mateus; Passos, Leandro Aparecido; de Rosa, Gustavo H.; de Albuquerque, Victor Hugo C.; Papa, João Paulo
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.09.2021
ISSN: 1568-4946, 1872-9681
DOI: 10.1016/j.asoc.2021.107466

Summary: Deep learning techniques usually face drawbacks related to the vanishing gradient problem, i.e., the gradient becomes gradually weaker when propagating from one layer to another until it finally vanishes and no longer helps in the learning process. Prior works have addressed this problem by introducing residual connections, thus assisting gradient propagation. However, this line of study has received little attention for Deep Belief Networks. In this paper, we propose a weighted layer-wise information reinforcement approach for Deep Belief Networks. Moreover, we introduce metaheuristic optimization to select proper weight connections that improve the network's learning capabilities. Experiments conducted over public datasets corroborate the effectiveness of the proposed approach in image classification tasks.

Highlights:
• Novel DBN with weight-based residual connections between layers.
• Reinforcement and regularization of the information flow.
• Application of metaheuristic optimization to fine-tune Res-DBN hyperparameters.
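The abstract only sketches the method, so the following is an illustrative reconstruction rather than the authors' implementation. It assumes a toy DBN of RBM-style layers whose hidden activations receive a weighted residual (skip) copy of the previous layer's output, with one hypothetical alpha coefficient per skip connection, and it stands a plain random search in for the paper's nature-inspired optimizer; the ResDBN class, fitness function, and synthetic data are all assumptions made for this sketch.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ResDBN:
    """Toy DBN forward pass with weighted residual connections.

    All hidden layers share one width so the residual addition
    h_l = sigmoid(h_{l-1} @ W_l + b_l) + alpha_l * h_{l-1}
    is well defined; `alphas` are the per-connection residual weights
    the metaheuristic tunes. Layer-wise pre-training (contrastive
    divergence) is omitted: weights stay random for illustration.
    """

    def __init__(self, n_in, n_hidden, n_layers):
        sizes = [n_in] + [n_hidden] * n_layers
        self.Ws = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
        self.bs = [np.zeros(b) for b in sizes[1:]]

    def forward(self, x, alphas):
        h = sigmoid(x @ self.Ws[0] + self.bs[0])      # first layer: no skip
        for W, b, a in zip(self.Ws[1:], self.bs[1:], alphas):
            h = sigmoid(h @ W + b) + a * h            # weighted residual
        return h

def fitness(net, alphas, X, y):
    """Nearest-centroid accuracy on the top-layer features; stands in
    for the classification accuracy the optimizer would maximize."""
    H = net.forward(X, alphas)
    centroids = np.stack([H[y == c].mean(axis=0) for c in np.unique(y)])
    pred = np.argmin(((H[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

# Synthetic two-class data (placeholder for the paper's public datasets).
X = rng.normal(size=(200, 16))
X[100:] += 1.0
y = np.repeat([0, 1], 100)

net = ResDBN(n_in=16, n_hidden=32, n_layers=3)

# Plain random search over alphas in [0, 1]; a nature-inspired
# optimizer (PSO, firefly, etc.) would explore the same search space.
best_a, best_f = None, -np.inf
for _ in range(50):
    a = rng.uniform(0, 1, size=2)   # one alpha per residual connection
    f = fitness(net, a, X, y)
    if f > best_f:
        best_a, best_f = a, f
print("best residual weights:", best_a, "accuracy:", round(best_f, 3))

Using one shared hidden width keeps the residual addition dimension-compatible without projection matrices; a faithful reproduction would follow the Res-DBN paper's own connection scheme and swap the random search for the metaheuristic it actually evaluates, while keeping the same search space over the residual weights.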