Hybrid Quantum-Classical Reinforcement Learning in Latent Observation Spaces
| Published in | arXiv.org |
| --- | --- |
| Main Authors | |
| Format | Paper |
| Language | English |
| Published | Ithaca: Cornell University Library, arXiv.org, 28.10.2024 |
| Subjects | |
Summary: Recent progress in quantum machine learning has sparked interest in using quantum methods to tackle classical control problems via quantum reinforcement learning. However, classical reinforcement learning environments often scale to high-dimensional problem spaces, which is a challenge for the limited and costly resources available to quantum agent implementations. We propose to address this dimensionality challenge by training a classical autoencoder and a quantum agent together, so that a compressed representation of observations is learned jointly in a hybrid training loop. The latent representation of such an autoencoder serves as a tailored observation space suited to both the control problem and the QPU architecture, aligning with the agent's requirements. A series of numerical experiments is designed for a performance analysis of the latent-space learning method. Results are presented for different control problems and for both photonic (continuous-variable) and qubit-based agents, showing how the joint training improves the QNN learning process.
ISSN: 2331-8422
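The summary describes a hybrid loop in which a classical autoencoder and the agent are optimised together, with the agent acting on the autoencoder's latent code instead of the raw observation. Below is a minimal illustrative sketch of that idea in PyTorch with a Gymnasium CartPole environment. It is an assumption-laden stand-in, not the paper's implementation: the quantum agent (QNN / parametrised circuit) is replaced by a small classical policy network, and the class names, latent dimension, and plain REINFORCE objective are chosen only for illustration.

```python
# Sketch of joint ("hybrid") training of a classical autoencoder and an agent
# that acts on the latent observation space. The LatentPolicy below is a
# classical placeholder for the quantum agent described in the summary.
import torch
import torch.nn as nn
import gymnasium as gym

LATENT_DIM = 2  # assumed latent size, e.g. matched to a small quantum register


class AutoEncoder(nn.Module):
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 16), nn.Tanh(),
                                     nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.Tanh(),
                                     nn.Linear(16, obs_dim))

    def forward(self, obs):
        z = self.encoder(obs)
        return z, self.decoder(z)


class LatentPolicy(nn.Module):
    """Classical stand-in for the quantum agent acting on the latent space."""
    def __init__(self, latent_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 8), nn.Tanh(),
                                 nn.Linear(8, n_actions))

    def forward(self, z):
        return torch.distributions.Categorical(logits=self.net(z))


env = gym.make("CartPole-v1")
ae = AutoEncoder(env.observation_space.shape[0], LATENT_DIM)
policy = LatentPolicy(LATENT_DIM, env.action_space.n)
# One optimiser over both modules: encoder, decoder and policy share updates.
opt = torch.optim.Adam(list(ae.parameters()) + list(policy.parameters()), lr=1e-3)

for episode in range(200):
    obs, _ = env.reset()
    log_probs, rewards, recon_losses = [], [], []
    done = False
    while not done:
        obs_t = torch.as_tensor(obs, dtype=torch.float32)
        z, recon = ae(obs_t)                 # compress observation to latent code
        dist = policy(z)                     # agent sees only the latent vector
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        recon_losses.append(nn.functional.mse_loss(recon, obs_t))
    # Undiscounted returns-to-go for a plain REINFORCE term, combined with the
    # reconstruction loss so the latent space is shaped by both objectives.
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    pg_loss = -(torch.stack(log_probs) * returns).sum()
    loss = pg_loss + torch.stack(recon_losses).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the policy-gradient signal also flows back through the encoder, so the latent representation is shaped jointly by the control objective and the reconstruction objective, which is the general idea the summary attributes to the hybrid training loop.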