On the Quantum versus Classical Learnability of Discrete Distributions

Bibliographic Details
Published in: Quantum (Vienna, Austria), Vol. 5, p. 417
Main Authors: Sweke, Ryan; Seifert, Jean-Pierre; Hangleiter, Dominik; Eisert, Jens
Format: Journal Article
Language: English
Published: Verein zur Förderung des Open Access Publizierens in den Quantenwissenschaften, 23.03.2021

Summary: Here we study the comparative power of classical and quantum learners for generative modelling within the Probably Approximately Correct (PAC) framework. More specifically, we consider the following task: given samples from some unknown discrete probability distribution, output with high probability an efficient algorithm for generating new samples from a good approximation of the original distribution. Our primary result is the explicit construction of a class of discrete probability distributions which, under the decisional Diffie-Hellman assumption, is provably not efficiently PAC learnable by a classical generative modelling algorithm, but for which we construct an efficient quantum learner. This class of distributions therefore provides a concrete example of a generative modelling problem for which quantum learners exhibit a provable advantage over classical learning algorithms. In addition, we discuss techniques for proving classical generative modelling hardness results, as well as the relationship between the PAC learnability of Boolean functions and the PAC learnability of discrete probability distributions.
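To make the task statement concrete, the following is a minimal sketch of the input/output shape of generative PAC learning: a learner receives samples from an unknown discrete distribution and must output an efficient sampler whose distribution is close to the target, here measured in total variation distance. The trivial empirical-frequency "learner" below is purely illustrative; the paper's DDH-hard distribution class and its quantum learner are far more involved, and all function names here are hypothetical.

```python
import random
from collections import Counter

def learn_generator(samples):
    """Toy learner: fit the empirical distribution and return a sampler.

    Illustrates only the interface of the task (samples in, generator
    out), not any learner from the paper.
    """
    counts = Counter(samples)
    outcomes = list(counts)
    weights = [counts[x] for x in outcomes]
    return lambda: random.choices(outcomes, weights=weights)[0]

def total_variation(p, q, support):
    """Total variation distance between distributions given as dicts."""
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Unknown target distribution over the support {0, 1, 2}.
target = {0: 0.5, 1: 0.3, 2: 0.2}
random.seed(0)
samples = random.choices(list(target), weights=list(target.values()), k=10_000)

gen = learn_generator(samples)
draws = Counter(gen() for _ in range(10_000))
empirical = {x: c / 10_000 for x, c in draws.items()}
# Shrinks as the sample size grows.
print(total_variation(target, empirical, target))
```

In the PAC formulation, "good approximation" means the output distribution is within some distance parameter of the target with high probability over the samples; the interesting question the paper addresses is which distribution classes admit such a learner efficiently, classically versus quantumly.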
ISSN: 2521-327X
DOI: 10.22331/q-2021-03-23-417