Reinforcement learning algorithms: A brief survey

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 231, p. 120495
Main Authors: Shakya, Ashish Kumar; Pillai, Gopinatha; Chakrabarty, Sohom
Format: Journal Article
Language: English
Published: Elsevier Ltd, 30.11.2023

Summary:
• RL can be used to solve problems involving sequential decision-making.
• RL is based on trial-and-error learning through rewards and punishments.
• The ultimate goal of an RL agent is to maximize cumulative reward.
• The RL agent tries to learn the optimal value and policy functions.
• DNN-based function approximation is used to approximate the value and policy.

Reinforcement Learning (RL) is a machine learning (ML) technique for learning sequential decision-making in complex problems. RL is inspired by trial-and-error-based human and animal learning. An RL agent can learn an optimal policy autonomously from knowledge obtained through continuous interaction with a stochastic dynamical environment. Problems once considered virtually impossible to solve, such as learning to play video games from pixel information alone, are now successfully solved using deep reinforcement learning. Without human intervention, RL agents can surpass human performance in challenging tasks. This review gives a broad overview of RL, covering its fundamental principles, essential methods, and illustrative applications. The authors aim to provide an initial reference point for researchers commencing their work in RL. The review covers fundamental model-free RL algorithms and pathbreaking function-approximation-based deep RL (DRL) algorithms for complex, uncertain tasks with continuous action and state spaces, which make RL useful in various interdisciplinary fields. The article also provides a brief review of model-based and multi-agent RL approaches. Finally, some promising research directions for RL are briefly presented.
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2023.120495
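
For readers new to the topic, the following is a minimal sketch of the trial-and-error, reward-maximizing loop the abstract describes, using tabular Q-learning on a toy chain environment. The 5-state chain, reward values, and hyperparameters are illustrative assumptions made here for demonstration; they are not taken from the surveyed article, which covers the algorithms themselves in detail.

```python
# Minimal tabular Q-learning sketch: an agent learns, by interaction alone,
# a policy that maximizes cumulative reward on a small chain MDP.
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode with reward +1
ACTIONS = [0, 1]      # 0 = move left, 1 = move right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # action-value table Q(s, a)

def step(s, a):
    """Environment transition: moving right approaches the goal, left moves away."""
    s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == N_STATES - 1 else 0.0
    done = s_next == N_STATES - 1
    return s_next, reward, done

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection: balance exploration and exploitation
        a = random.choice(ACTIONS) if random.random() < EPS else Q[s].index(max(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        target = r + (0.0 if done else GAMMA * max(Q[s_next]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s_next

print("Learned greedy policy:", [Q[s].index(max(Q[s])) for s in range(N_STATES)])
```

In deep RL, the table Q above is replaced by a DNN-based function approximator, which is what allows the methods surveyed in the article to scale to continuous and high-dimensional state and action spaces.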