Learning Complementary Policies for Human-AI Teams
Main Authors |  |
---|---|
Format | Journal Article |
Language | English |
Published | 06.02.2023 |
Subjects |  |
Summary: | Human-AI complementarity is important when neither the algorithm nor the human yields dominant performance across all instances in a given context. Recent work on human-AI collaboration has considered decisions that correspond to classification tasks. However, in many important contexts where humans can benefit from AI complementarity, humans undertake a course of action. In this paper, we propose a framework for a novel form of human-AI collaboration for selecting advantageous courses of action, which we refer to as Learning Complementary Policies for Human-AI teams (LCP-HAI). Our solution aims to exploit human-AI complementarity to maximize decision rewards by learning both an algorithmic policy that aims to complement humans and a routing model that defers each decision to either a human or the AI, so as to leverage the resulting complementarity. We then extend our approach to leverage opportunities and mitigate risks that arise in important practical settings: 1) when a team is composed of multiple humans with differential and potentially complementary abilities, 2) when the observational data includes consistent deterministic actions, and 3) when the covariate distribution of future decisions differs from that in the historical data. We demonstrate the effectiveness of our proposed methods using real human responses and semi-synthetic data, and find that they offer reliable and advantageous performance across settings, superior to what either the human or the algorithm achieves when making decisions alone. We also find that the proposed extensions effectively improve the robustness of human-AI collaboration performance across these challenging settings. |
---|---|
DOI: | 10.48550/arxiv.2302.02944 |
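
The record above contains only the abstract, so the paper's exact method is not reproduced here. As a rough, hypothetical illustration of the "algorithmic policy plus routing model" idea described in the summary, a minimal sketch might fit a reward model to choose actions and a second model that estimates, per instance, whether the AI or the human is expected to earn the higher reward; all names, models, and the synthetic data below are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch of the policy + router idea: an algorithmic policy picks
# actions greedily from an estimated reward model, and a routing model defers
# each new instance to whichever decision-maker (human or AI) has the higher
# estimated reward. Not the paper's method; modelling choices are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, d, n_actions = 2000, 5, 3

# Synthetic logged data: covariates, the action a human took, and its reward.
X = rng.normal(size=(n, d))
human_action = rng.integers(0, n_actions, size=n)
true_reward = lambda X, a: X[:, 0] * (a == 0) + X[:, 1] * (a == 1) - 0.2 * (a == 2)
reward = true_reward(X, human_action) + rng.normal(scale=0.1, size=n)

# 1) Algorithmic policy: regress reward on (covariates, action), act greedily.
reward_model = RandomForestRegressor(n_estimators=100, random_state=0)
reward_model.fit(np.column_stack([X, human_action]), reward)

def ai_policy(X):
    preds = np.stack(
        [reward_model.predict(np.column_stack([X, np.full(len(X), a)]))
         for a in range(n_actions)],
        axis=1,
    )
    return preds.argmax(axis=1), preds.max(axis=1)

# 2) Routing model: estimate the human's reward on each instance and defer to
#    the AI only where its estimated reward is higher (a crude stand-in for the
#    learned router described in the abstract).
human_reward_model = RandomForestRegressor(n_estimators=100, random_state=1)
human_reward_model.fit(X, reward)

def route_and_act(X_new):
    ai_act, ai_val = ai_policy(X_new)
    human_val = human_reward_model.predict(X_new)
    defer_to_ai = ai_val > human_val
    return ai_act, defer_to_ai  # where defer_to_ai is False, the human decides

X_new = rng.normal(size=(10, d))
actions, defer_to_ai = route_and_act(X_new)
print("AI actions:", actions)
print("Routed to AI:", defer_to_ai)
```

This sketch ignores the issues the abstract highlights (multiple humans with differing abilities, consistently deterministic logged actions, and covariate shift between historical and future decisions), which is where the paper's extensions would come in.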