Comparing Reinforcement Learning and Human Learning using the Game of Hidden Rules

Bibliographic Details
Published in: arXiv.org
Main Authors: Pulick, Eric; Menkov, Vladimir; Mintz, Yonatan; Kantor, Paul; Bier, Vicki
Format: Paper, Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 30.06.2023

More Information
Summary: Reliable real-world deployment of reinforcement learning (RL) methods requires a nuanced understanding of their strengths and weaknesses and how they compare to those of humans. Human-machine systems are becoming more prevalent and the design of these systems relies on a task-oriented understanding of both human learning (HL) and RL. Thus, an important line of research is characterizing how the structure of a learning task affects learning performance. While increasingly complex benchmark environments have led to improved RL capabilities, such environments are difficult to use for the dedicated study of task structure. To address this challenge we present a learning environment built to support rigorous study of the impact of task structure on HL and RL. We demonstrate the environment's utility for such study through example experiments in task structure that show performance differences between humans and RL algorithms.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2306.17766