Language and Intelligence

Bibliographic Details
Published in: Minds and Machines (Dordrecht), Vol. 31, No. 4, pp. 471–486
Main Author: Montemayor, Carlos
Format: Journal Article
Language: English
Published: Dordrecht: Springer Netherlands (Springer Nature B.V.), 01.12.2021
Summary: This paper explores aspects of GPT-3 that have been discussed as harbingers of artificial general intelligence and, in particular, of linguistic intelligence. After introducing key features of GPT-3 and assessing its performance in light of the conversational standards set by Alan Turing in his seminal 1950 paper, the paper elucidates the difference between clever automation and genuine linguistic intelligence. A central theme of this discussion of genuine conversational intelligence is that members of a linguistic community never merely respond “algorithmically” to queries through a selective kind of pattern recognition; they must also jointly attend and act with other speakers in order to count as genuinely intelligent and trustworthy. This presents a challenge for systems like GPT-3, because representing the world in a way that makes conversational common ground salient is an essentially collective task, one that can only be achieved jointly with other speakers. Thus, the main difficulty for any artificially intelligent model of conversation is to account for the communicational intentions and motivations of a speaker through joint attention. These joint motivations and intentions seem to be completely absent from the standard way in which systems like GPT-3 and other artificially intelligent systems work. This is not merely a theoretical issue. Since GPT-3 and future iterations of similar systems will likely be available for commercial use through application programming interfaces, caution is needed regarding the risks created by these systems, which pass for “intelligent” but have no genuine communicational intentions and can thereby produce fake and unreliable linguistic exchanges.
ISSN: 0924-6495; 1572-8641
DOI: 10.1007/s11023-021-09568-5