Stochastic Gradient Descent Tricks

Chapter 1 strongly advocates the stochastic back-propagation method to train neural networks. This is in fact an instance of a more general technique called stochastic gradient descent (SGD). This chapter provides background material, explains why SGD is a good learning algorithm when the training s...
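The stochastic gradient descent the abstract refers to can be illustrated with a minimal sketch: update the parameters using the gradient of the loss on one randomly chosen example at a time. This is a toy example on a least-squares problem, not code from the chapter itself; the names `sgd` and `grad` and the learning-rate and epoch values are illustrative assumptions.

```python
import random

def sgd(grad, w, data, lr=0.1, epochs=50, seed=0):
    """Plain SGD: step against the gradient of the loss on one
    example at a time, in a freshly shuffled order each epoch."""
    rng = random.Random(seed)
    for _ in range(epochs):
        rng.shuffle(data)
        for example in data:
            g = grad(w, example)
            w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

# Toy problem (hypothetical): fit y = 2x with squared loss 0.5*(w*x - y)^2,
# whose per-example gradient in w is (w*x - y)*x.
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
grad = lambda w, ex: [(w[0] * ex[0] - ex[1]) * ex[0]]
w = sgd(grad, [0.0], data)
print(w[0])  # converges to approximately 2.0
```

Compared with batch gradient descent, each update here costs one example's gradient rather than a full pass over the data, which is the efficiency argument the chapter develops.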

Bibliographic Details
Published in: Neural Networks: Tricks of the Trade, pp. 421-436
Main Author: Bottou, Léon
Format: Book Chapter
Language: English
Published: Berlin, Heidelberg: Springer Berlin Heidelberg, 2012
Series: Lecture Notes in Computer Science
