Large Language Models and Logical Reasoning

Bibliographic Details
Published in: Encyclopedia (Basel, Switzerland), Vol. 3, No. 2, pp. 687-697
Main Author: Friedman, Robert
Format: Journal Article
Language: English
Published: Naples, MDPI AG, 30.05.2023
Summary: In deep learning, large language models are typically trained on a corpus of data taken as representative of current knowledge. However, natural language is not an ideal form for the reliable communication of concepts. Formal logical statements are preferable, since they are subject to verifiability, reliability, and applicability. A further reason for this preference is that natural language was not designed for an efficient and reliable flow of information and knowledge; it is instead an evolutionary adaptation shaped by a prior set of natural constraints. As a formally structured language, logical statements are also more interpretable. A statement may be constructed informally in natural language, but a formalized logical statement is expected to follow a stricter set of rules, such as the use of symbols to represent the logic-based operators that connect multiple simple statements into verifiable propositions.
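The abstract's point about verifiable propositions can be illustrated with a minimal sketch (not taken from the article itself): simple statements combined by logical operators yield a compound proposition whose truth can be checked mechanically over all assignments. Here, modus ponens is verified as a tautology by exhaustive truth-table evaluation; the function names are illustrative.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

def is_tautology(formula) -> bool:
    # Check a two-variable propositional formula over every truth assignment.
    return all(formula(p, q) for p, q in product([True, False], repeat=2))

# Modus ponens as a compound proposition: (p AND (p -> q)) -> q
modus_ponens = lambda p, q: implies(p and implies(p, q), q)

print(is_tautology(modus_ponens))  # True: verifiable under all assignments
```

By contrast, a formula such as `q -> p` fails for the assignment p = False, q = True, so the same check reports it as not a tautology.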
ISSN: 2673-8392
DOI: 10.3390/encyclopedia3020049