Compressed auto-encoder building block for deep learning network

Bibliographic Details
Published in: 2016 3rd International Conference on Informative and Cybernetics for Computational Social Systems (ICCSS), pp. 131 - 136
Main Authors: Qiying Feng, C. L. Philip Chen, Long Chen
Format: Conference Proceeding
Language: English
Published: IEEE, 01.08.2016

Summary: Deep learning is one of the most important representation learning approaches in machine learning and has been widely applied in many areas. A deep learning network is built by stacking building blocks such as the restricted Boltzmann machine (RBM), the auto-encoder, and the convolutional building block; after stacking these blocks layer upon layer, the improvement of the deep network becomes notable. In this paper, we propose a new deep learning building block inspired by the auto-encoder: the compressed auto-encoder, which has fewer layers and parameters than the auto-encoder, and we put forward a bidirectional gradient descent method to update its parameters. The experimental results show that it improves the reconstruction accuracy of the auto-encoder; its error keeps declining while the results of the RBM or the auto-encoder saturate, and some analysis is given in this paper.
DOI: 10.1109/ICCSS.2016.7586437
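
For reference, below is a minimal sketch of the kind of building block the abstract starts from: a standard one-hidden-layer auto-encoder with tied weights, trained by plain gradient descent on the reconstruction error. It does not implement the paper's compressed auto-encoder or its bidirectional gradient descent method, whose details are not given in the abstract; the layer sizes, learning rate, and toy data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 64, 16            # assumed layer sizes, not values from the paper
X = rng.random((500, n_visible))        # toy data in [0, 1]

# Tied weights: the encoder uses W, the decoder uses W.T (a common auto-encoder choice)
W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
b_h = np.zeros(n_hidden)
b_v = np.zeros(n_visible)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, n = 0.5, X.shape[0]
for epoch in range(200):
    # Encode, then decode (reconstruct)
    H = sigmoid(X @ W + b_h)
    X_hat = sigmoid(H @ W.T + b_v)

    err = X_hat - X                      # reconstruction error
    loss = 0.5 * np.sum(err ** 2) / n    # average squared reconstruction error per sample

    # Backpropagation through the tied-weight auto-encoder
    d_out = err * X_hat * (1.0 - X_hat)          # per-sample gradient at the decoder pre-activation
    d_hid = (d_out @ W) * H * (1.0 - H)          # per-sample gradient at the encoder pre-activation
    grad_W = (X.T @ d_hid + d_out.T @ H) / n     # both uses of the tied W contribute
    grad_bh = d_hid.sum(axis=0) / n
    grad_bv = d_out.sum(axis=0) / n

    W -= lr * grad_W
    b_h -= lr * grad_bh
    b_v -= lr * grad_bv

print(f"final average reconstruction loss: {loss:.4f}")

The sketch shows the baseline against which a compressed variant would be compared: fewer layers or parameters would shrink W and the bias vectors, while the reconstruction-error objective stays the same.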