We'll never have a model of an AI major-general: Artificial Intelligence, command decisions, and kitsch visions of war

Bibliographic Details
Published in: Journal of Strategic Studies, Vol. 47, No. 1, pp. 116-146
Main Authors: Hunter, Cameron; Bowen, Bleddyn E.
Format: Journal Article
Language: English
Published: England: Routledge (Taylor & Francis Ltd), 02.01.2024

Summary: Military AI optimists predict that future AI will assist in or make command decisions. We argue instead that, at a fundamental level, these predictions are dangerously wrong. The nature of war demands decisions based on abductive logic, whilst machine learning (or 'narrow AI') relies on inductive logic. The two forms of logic are not interchangeable, and therefore AI's limited utility in command, both tactical and strategic, is not something that can be solved by more data or more computing power. Many defence and government leaders are therefore proceeding with a false view of the nature of AI and of war itself.
ISSN: 0140-2390, 1743-937X
DOI: 10.1080/01402390.2023.2241648