When to (or not to) trust intelligent machines: Insights from an evolutionary game theory analysis of trust in repeated games
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 22.07.2020 |
Summary: | The actions of intelligent agents, such as chatbots, recommender systems, and virtual assistants, are typically not fully transparent to the user. Consequently, using such an agent exposes the user to the risk that the agent may act in a way opposed to the user's goals. It is often argued that people use trust as a cognitive shortcut to reduce the complexity of such interactions. Here we formalise this by using the methods of evolutionary game theory to study the viability of trust-based strategies in repeated games. These are reciprocal strategies that cooperate as long as the other player is observed to be cooperating. Unlike classic reciprocal strategies, once mutual cooperation has been observed for a threshold number of rounds they stop checking their co-player's behaviour every round, and instead only check with some probability. By doing so, they reduce the opportunity cost of verifying whether the action of their co-player was actually cooperative. We demonstrate that these trust-based strategies can outcompete strategies that are always conditional, such as Tit-for-Tat, when the opportunity cost is non-negligible. We argue that this cost is likely to be greater when the interaction is between people and intelligent agents, because of the reduced transparency of the agent. Consequently, we expect people to use trust-based strategies more frequently in interactions with intelligent agents. Our results provide new, important insights into the design of mechanisms for facilitating interactions between humans and intelligent agents, where trust is an essential factor. |
DOI: | 10.48550/arxiv.2007.11338 |
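
To make the mechanism described in the summary concrete, below is a minimal simulation sketch in Python of a trust-based strategy playing a repeated Prisoner's Dilemma. The payoff values, the observation cost `OBS_COST`, the trust `threshold`, and the checking probability `p_check` are illustrative assumptions rather than parameters taken from the paper, and the names `TitForTat`, `TrustBased`, and `play` are hypothetical.

```python
import random

# Assumed Prisoner's Dilemma payoffs (reward, sucker, temptation, punishment)
R, S, T, P = 3.0, 0.0, 5.0, 1.0
OBS_COST = 0.5  # assumed opportunity cost of verifying the co-player's move
PAYOFF = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}


class TitForTat:
    """Classic reciprocal strategy: verifies the co-player's move every round."""
    def __init__(self):
        self.next_move = "C"

    def act(self, rng):
        return self.next_move, OBS_COST        # always pays the checking cost

    def observe(self, partner_move):
        self.next_move = partner_move          # reciprocate the observed move


class TrustBased:
    """Reciprocal strategy that stops checking every round once trust is built.

    After `threshold` consecutive rounds of observed cooperation it only checks
    with probability `p_check`; unchecked rounds avoid the opportunity cost.
    The threshold and probability defaults are illustrative assumptions.
    """
    def __init__(self, threshold=5, p_check=0.1):
        self.threshold, self.p_check = threshold, p_check
        self.coop_streak = 0
        self.next_move = "C"
        self.checked = True

    def act(self, rng):
        trusting = self.coop_streak >= self.threshold
        self.checked = (not trusting) or (rng.random() < self.p_check)
        return self.next_move, (OBS_COST if self.checked else 0.0)

    def observe(self, partner_move):
        if not self.checked:
            return                             # move not verified: keep current intention
        self.next_move = partner_move          # reciprocate the observed move
        self.coop_streak = self.coop_streak + 1 if partner_move == "C" else 0


def play(strategy_a, strategy_b, rounds=200, seed=0):
    """Play a repeated game; return average per-round payoffs net of checking costs."""
    rng = random.Random(seed)
    a, b = strategy_a(), strategy_b()
    score_a = score_b = 0.0
    for _ in range(rounds):
        move_a, cost_a = a.act(rng)
        move_b, cost_b = b.act(rng)
        score_a += PAYOFF[(move_a, move_b)] - cost_a
        score_b += PAYOFF[(move_b, move_a)] - cost_b
        a.observe(move_b)
        b.observe(move_a)
    return score_a / rounds, score_b / rounds


if __name__ == "__main__":
    print("TFT   vs TFT:  ", play(TitForTat, TitForTat))
    print("Trust vs TFT:  ", play(TrustBased, TitForTat))
    print("Trust vs Trust:", play(TrustBased, TrustBased))
```

Running the sketch prints average per-round payoffs net of checking costs; because trusting players skip most checks once cooperation is established, the trust-based pairing comes out ahead of the always-checking pairing whenever the assumed observation cost is non-negligible, which is the effect the summary describes.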