Formal Specification and Testing for Reinforcement Learning

Bibliographic Details
Published in: Proceedings of the ACM on Programming Languages, Vol. 7, No. ICFP, pp. 125–158
Main Authors: Varshosaz, Mahsa; Ghaffari, Mohsen; Johnsen, Einar Broch; Wąsowski, Andrzej
Format: Journal Article
Language: English
Published: New York, NY, USA: ACM, 30.08.2023
ISSN: 2475-1421
DOI: 10.1145/3607835

Summary: The development process for reinforcement learning applications is still exploratory rather than systematic. This exploratory nature reduces reuse of specifications between applications and increases the chances of introducing programming errors. This paper takes a step towards systematizing the development of reinforcement learning applications. We introduce a formal specification of reinforcement learning problems and algorithms, with a particular focus on temporal difference methods and their definitions in backup diagrams. We further develop a test harness for a large class of reinforcement learning applications based on temporal difference learning, including SARSA and Q-learning. The entire development is rooted in functional programming methods: it starts with pure specifications and denotational semantics, and ends with property-based testing that uses compositional interpreters for a domain-specific term language as a test oracle for concrete implementations. We demonstrate the usefulness of this testing method on a number of examples and evaluate it with mutation testing. We show that our test suite is effective in killing mutants (90% of mutants killed for 75% of subject agents). More importantly, almost half of all mutants are killed by generic write-once-use-everywhere tests that apply to any reinforcement learning problem modeled using our library, without any additional effort from the programmer.
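The abstract names SARSA and Q-learning as representative temporal difference methods. As a minimal illustrative sketch (not the paper's formal specification or library; all names here are hypothetical), the two tabular update rules differ only in the bootstrap target: SARSA uses the action actually taken next (on-policy), while Q-learning uses the greedy next action (off-policy):

```python
# Tabular TD updates, sketched for illustration only.
# q is a dict mapping (state, action) pairs to value estimates.

def sarsa_update(q, s, a, r, s_next, a_next, alpha=0.5, gamma=0.9):
    """SARSA: bootstrap on the next action actually taken (on-policy)."""
    target = r + gamma * q.get((s_next, a_next), 0.0)
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
    return q

def q_learning_update(q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """Q-learning: bootstrap on the greedy next action (off-policy)."""
    target = r + gamma * max(q.get((s_next, b), 0.0) for b in actions)
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
    return q

# On an empty table the two rules coincide:
q1 = sarsa_update({}, "s0", "a", 1.0, "s1", "a")
q2 = q_learning_update({}, "s0", "a", 1.0, "s1", ["a", "b"])
print(q1[("s0", "a")], q2[("s0", "a")])  # 0.5 0.5
```

A property-based test in the spirit of the paper would compare such concrete updates against an interpreter for a term-level specification; the snippet above only shows the update rules themselves.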