Benchmarking Classical and Quantum Optimizers for Quantum Simulator

Bibliographic Details
Published in: International Symposium on Computing and Networking (Online), pp. 238-244
Main Authors: Hai, Vu Tuan; Duong, Le Vu Trung; Luan, Pham Hoai; Nakashima, Yasuhiko
Format: Conference Proceeding
Language: English
Published: IEEE, 26.11.2024

Summary: Quantum machine learning is emerging as a new approach in the machine learning field that uses quantum properties, namely superposition and entanglement. While superposition offers low complexity when dealing with high-dimensional data, entanglement has the potential to help extract features more effectively. Many classical and quantum optimizers have been proposed to train quantum machine learning models in simulation environments. However, to the best of our knowledge, no research has yet investigated which optimizer requires the fewest resources and the minimal number of optimization steps to achieve ideal performance. In this paper, we survey the most popular optimizers, such as Gradient Descent (GD), Adaptive Moment Estimation (Adam), and Quantum Natural Gradient Descent (QNG), on quantum compilation problems. We measure metrics including the lowest cost value bound and the wall time. We conclude that Adam is the most effective optimizer, achieving a 1.94 times better cost value and a 1.10 times better wall time at 9 qubits.
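
The record contains no code, but the kind of comparison the abstract describes can be sketched with PennyLane, which ships implementations of all three surveyed optimizers (GradientDescentOptimizer, AdamOptimizer, QNGOptimizer). The toy two-qubit circuit, step size, and iteration count below are illustrative assumptions, not the paper's benchmark setup, which evaluates quantum compilation problems at up to 9 qubits.

# Minimal sketch (not the authors' code): train one parameterized circuit
# with each of the three optimizers surveyed in the paper and compare the
# final cost value. Assumes PennyLane is installed (pip install pennylane).
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def cost(params):
    # Illustrative hardware-efficient ansatz: one RY rotation per qubit
    # followed by an entangling CNOT.
    for w in range(n_qubits):
        qml.RY(params[w], wires=w)
    qml.CNOT(wires=[0, 1])
    # Toy cost: expectation of Z on qubit 0, minimized at -1.
    return qml.expval(qml.PauliZ(0))

optimizers = {
    "GD":   qml.GradientDescentOptimizer(stepsize=0.1),
    "Adam": qml.AdamOptimizer(stepsize=0.1),
    "QNG":  qml.QNGOptimizer(stepsize=0.1),  # needs a QNode cost (metric tensor)
}

for name, opt in optimizers.items():
    params = np.array([0.5, 0.5], requires_grad=True)
    for _ in range(50):
        params = opt.step(cost, params)
    print(f"{name}: final cost = {float(cost(params)):.4f}")

In a benchmark like the paper's, this loop would additionally record wall time per optimizer and sweep the qubit count; the step size and 50-iteration budget here are arbitrary placeholders.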
ISSN: 2379-1896
DOI: 10.1109/CANDAR64496.2024.00038