Improved Reinforcement Learning in Asymmetric Real-time Strategy Games via Strategy Diversity

Bibliographic Details
Published in: International Journal of Serious Games, Vol. 10, No. 1
Main Authors: Raj Dasgupta, John Kliem
Format: Journal Article
Language: English
Published: Serious Games Society, 01.03.2023
Summary: We investigate the use of artificial intelligence (AI)-based techniques for learning to play a 2-player, real-time strategy (RTS) game called Hunting-of-the-Plark. The game is challenging for both humans and AI-based techniques because players cannot observe each other's moves during play, and one player is at a disadvantage due to the asymmetric nature of the game rules. We analyze the performance of different deep reinforcement learning algorithms for training software agents to play the game. Existing reinforcement learning techniques for RTS games enable players to converge towards an equilibrium outcome of the game, but they usually do not facilitate further exploration of strategies that exploit and defeat the opponent. To address this shortcoming, we investigate techniques, including self-play and strategy diversity, that players can use to improve their performance beyond the equilibrium outcome. We observe that when players use self-play, their win counts begin to cycle around an equilibrium value as each player in turn quickly learns to outwit and defeat the other. Finally, we show that strategy diversity can be an effective means of mitigating the disadvantage imposed on one player by the asymmetric game rules. (A toy illustration of self-play with an opponent pool appears after this record.)
ISSN: 2384-8766
DOI: 10.17083/ijsg.v10i1.548
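
The abstract describes combining self-play with strategy diversity, but the record itself contains no code. The stdlib-only Python sketch below is a toy illustration of one common way to realize that idea: self-play against a pool of frozen opponent snapshots, so the learner keeps facing varied strategies instead of over-fitting to a single equilibrium opponent. The matrix game, action set, payoff values, and bandit-style update are assumptions made purely for illustration; they are not the Hunting-of-the-Plark environment or the authors' deep reinforcement learning algorithms.

```python
# Toy sketch: self-play with an opponent pool for strategy diversity.
# All game details below (3 abstract actions, the payoff matrix, the
# bandit-style update) are hypothetical stand-ins, not the paper's setup.
import math
import random

ACTIONS = 3  # hypothetical abstract action set for both players

# Asymmetric zero-sum payoff for the row player (the "hunter"): it can win
# in fewer action combinations than it can lose, mimicking the disadvantage
# mentioned in the abstract.
PAYOFF = [
    [ 1, -1, -1],
    [-1,  1, -1],
    [-1, -1, -1],
]

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def sample(prefs):
    return random.choices(range(ACTIONS), weights=softmax(prefs))[0]

def reinforce(prefs, action, reward, lr=0.1):
    # Crude bandit-style update: nudge the chosen action's preference.
    prefs[action] += lr * reward

def train(episodes=20_000, snapshot_every=2_000):
    hunter = [0.0] * ACTIONS
    # The last entry of the pool is the "live" opponent policy that keeps
    # learning; earlier entries are frozen snapshots that preserve diversity.
    plark_pool = [[0.0] * ACTIONS]
    wins = 0
    for t in range(1, episodes + 1):
        idx = random.randrange(len(plark_pool))  # diverse opponent sampling
        opponent = plark_pool[idx]
        a, b = sample(hunter), sample(opponent)
        r = PAYOFF[a][b]
        reinforce(hunter, a, r)
        if idx == len(plark_pool) - 1:
            reinforce(opponent, b, -r)  # zero-sum: opponent receives -r
        wins += r > 0
        if t % snapshot_every == 0:
            # Freeze the current opponent policy; keep learning on a copy.
            plark_pool.append(list(plark_pool[-1]))
    print(f"hunter win rate against the diverse pool: {wins / episodes:.3f}")

if __name__ == "__main__":
    train()
```

Against a single, fully adaptive opponent the win rate tends to oscillate (the cycling behaviour the abstract mentions); sampling opponents from a pool of past snapshots is one commonly used way to dampen that cycling, and it is only a rough stand-in for the strategy-diversity techniques studied in the paper.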