Neural Network Compression for Reinforcement Learning Tasks


Bibliographic Details
Main Authors: Dmitry A. Ivanov, Denis A. Larionov, Oleg V. Maslennikov, Vladimir V. Voevodin
Format: Journal Article
Language: English
Published: 13.05.2024
Summary: In real-world applications of Reinforcement Learning (RL), such as robotics, low-latency and energy-efficient inference is highly desirable. Using sparsity and pruning to optimize Neural Network inference, particularly to improve energy and latency efficiency, is a standard technique. In this work, we systematically investigate applying these optimization techniques to different RL algorithms in different RL environments, yielding up to a 400-fold reduction in the size of neural networks.
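The summary refers to pruning as the compression technique. As a rough illustration of the idea, and not the paper's specific procedure, the sketch below applies unstructured magnitude pruning to a weight matrix: the smallest-magnitude weights are zeroed out until a target sparsity is reached. The function name, layer shape, and sparsity level are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    A generic sketch of unstructured magnitude pruning (not the paper's
    exact method); `sparsity` is the fraction of weights to remove.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Illustrative only: pruning ~99.75% of a 256x256 layer keeps roughly
# 1/400 of its weights, the scale of reduction the abstract reports.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
w_pruned = magnitude_prune(w, 0.9975)
kept = np.count_nonzero(w_pruned)
```

In practice the zeroed weights are stored in a sparse format (or skipped by sparsity-aware kernels), which is where the latency and energy savings come from.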
DOI: 10.48550/arxiv.2405.07748