Hybrid Quantum-Classical Reinforcement Learning in Latent Observation Spaces

Bibliographic Details
Published in: arXiv.org
Main Authors: Nagy, Dániel T. R.; Czabán, Csaba; Bakó, Bence; Hága, Péter; Kallus, Zsófia; Zimborás, Zoltán
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 28.10.2024

Summary: Recent progress in quantum machine learning has sparked interest in using quantum methods to tackle classical control problems via quantum reinforcement learning. However, classical reinforcement learning environments often involve high-dimensional observation spaces, which poses a challenge for the limited and costly resources available to quantum agent implementations. We propose to address this dimensionality challenge with a classical autoencoder and a quantum agent trained together, so that a compressed representation of the observations is learned jointly in a hybrid training loop. The latent representation of such an autoencoder serves as a tailored observation space, suited both to the control problem and to the QPU architecture and aligned with the agent's requirements. A series of numerical experiments is designed to analyze the performance of this latent-space learning method. Results are presented for different control problems and for both photonic (continuous-variable) and qubit-based agents, showing how joint training improves the QNN learning process.
ISSN: 2331-8422
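
The summary above describes the core architecture: a classical autoencoder and a quantum policy updated in one hybrid loop, with the autoencoder's latent vector serving as the agent's observation space. The following minimal sketch illustrates that idea for a qubit-based agent using PennyLane and PyTorch. It is not the authors' implementation: the layer sizes, the AngleEmbedding/StronglyEntanglingLayers ansatz, the REINFORCE-style policy loss, the equal weighting of the two loss terms, and the function names (hybrid_update, act) are all illustrative assumptions.

import torch
import torch.nn as nn
import pennylane as qml

# Illustrative dimensions (assumptions, not values from the paper).
OBS_DIM, LATENT_DIM, N_ACTIONS, N_LAYERS = 8, 4, 2, 2

dev = qml.device("default.qubit", wires=LATENT_DIM)

@qml.qnode(dev, interface="torch")
def policy_circuit(inputs, weights):
    # Encode the compressed observation as rotation angles, then apply a trainable entangling ansatz.
    qml.AngleEmbedding(inputs, wires=range(LATENT_DIM))
    qml.StronglyEntanglingLayers(weights, wires=range(LATENT_DIM))
    return [qml.expval(qml.PauliZ(w)) for w in range(N_ACTIONS)]

# TorchLayer wraps the QNode as a differentiable torch module with trainable circuit weights.
quantum_policy = qml.qnn.TorchLayer(policy_circuit, {"weights": (N_LAYERS, LATENT_DIM, 3)})

# Classical autoencoder: the encoder maps raw observations into the latent observation space.
encoder = nn.Sequential(nn.Linear(OBS_DIM, 16), nn.Tanh(), nn.Linear(16, LATENT_DIM), nn.Tanh())
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 16), nn.Tanh(), nn.Linear(16, OBS_DIM))

# A single optimizer over encoder, decoder, and circuit parameters gives the joint hybrid training.
params = list(encoder.parameters()) + list(decoder.parameters()) + list(quantum_policy.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

def act(obs_single):
    """Sample an action from the quantum policy acting on the latent observation."""
    with torch.no_grad():
        logits = quantum_policy(encoder(obs_single))
        return int(torch.distributions.Categorical(logits=logits).sample())

def hybrid_update(obs, actions, returns):
    """One joint update: autoencoder reconstruction loss plus a REINFORCE-style policy loss."""
    latent = encoder(obs)                                       # compressed observations
    recon_loss = nn.functional.mse_loss(decoder(latent), obs)   # keeps the latent space informative

    logits = torch.stack([quantum_policy(z) for z in latent])   # circuit expectation values per sample
    log_probs = torch.distributions.Categorical(logits=logits).log_prob(actions)
    policy_loss = -(log_probs * returns).mean()                 # score-function gradient estimate

    loss = recon_loss + policy_loss                             # joint objective shapes the latent space
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: random observations, actions, and returns stand in for real environment rollouts.
obs = torch.randn(16, OBS_DIM)
actions = torch.randint(0, N_ACTIONS, (16,))
returns = torch.randn(16)
print("joint loss:", hybrid_update(obs, actions, returns))

Because one optimizer updates the encoder, decoder, and circuit weights together, the reconstruction and policy terms jointly shape the latent representation, which is the mechanism the summary refers to as joint training in a hybrid loop.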