Defining Standard Strategies for Quantum Benchmarks
Format | Journal Article |
---|---|
Language | English |
Published | 03.03.2023 |
Summary: As quantum computers grow in size and scope, a question of great importance
is how best to benchmark performance. Here we define a set of characteristics that
any benchmark should satisfy -- randomized, well-defined, holistic, device
independent -- and make a distinction between benchmarks and diagnostics. We use
Quantum Volume (QV) [1] as an example case for clear rules in benchmarking,
illustrating the implications of using different success statistics, as in Ref. [2].
We discuss the issue of benchmark optimizations, detail when those optimizations are
appropriate, and describe how they should be reported. Reporting the use of quantum
error mitigation techniques is especially critical for interpreting benchmarking
results, as their ability to yield highly accurate observables comes with an
exponential overhead that is often omitted from performance evaluations. Finally, we
use application-oriented and mirror benchmarking techniques to demonstrate some of
the highlighted optimization principles, and introduce a scalable mirror quantum
volume benchmark. We elucidate the importance of simple optimizations for improving
benchmarking results, and note that omitting them can make a critical difference in
comparisons. For example, when running mirror randomized benchmarking on a 26-qubit
circuit, we observe a reduction in error per qubit from 2% to 1% with the inclusion
of dynamical decoupling.
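
The abstract's point about success statistics can be made concrete with the
heavy-output probability used by Quantum Volume [1]. The sketch below is a minimal
illustration under stated assumptions, not the full QV acceptance procedure: the
function names, toy numbers, and the simplified z-sigma acceptance rule are ours,
and Ref. [1] specifies additional details (circuit ensemble, transpilation rules,
and a particular confidence-interval construction).

```python
import numpy as np

def heavy_output_probability(ideal_probs, counts):
    """Fraction of measured shots falling in the heavy-output set: the
    bitstrings whose ideal output probability exceeds the median ideal
    probability for the circuit."""
    median = np.median(list(ideal_probs.values()))
    heavy_set = {b for b, p in ideal_probs.items() if p > median}
    shots = sum(counts.values())
    heavy_shots = sum(n for b, n in counts.items() if b in heavy_set)
    return heavy_shots / shots

def qv_success(hops, threshold=2 / 3, z=2.0):
    """Simplified acceptance rule: the mean heavy-output probability over the
    ensemble of random circuits must exceed 2/3 by z standard errors."""
    hops = np.asarray(hops, dtype=float)
    stderr = hops.std(ddof=1) / np.sqrt(len(hops))
    return hops.mean() - z * stderr > threshold

# Toy usage with made-up numbers for a single 2-qubit circuit.
ideal = {"00": 0.40, "01": 0.30, "10": 0.20, "11": 0.10}  # ideal output distribution
measured = {"00": 380, "01": 290, "10": 220, "11": 110}   # hardware counts (1000 shots)
print(heavy_output_probability(ideal, measured))          # 0.67
```

Which statistic is reported, and how the pass/fail threshold is applied across the
circuit ensemble, is exactly the kind of rule the paper argues must be stated
explicitly when quoting a benchmark result.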
DOI: 10.48550/arxiv.2303.02108