Safe Pretraining of Deep Language Models in a Synthetic Pseudo-Language

Abstract: This paper compares the pretraining of a transformer on natural language texts with pretraining on sentences of a synthetic pseudo-language. The artificial texts are generated automatically according to rules written in a context-free grammar. The results of fine-tuning on tasks of the RussianSuperGLUE project show, with statistical significance, that the two models achieve the same scores. The use of artificial texts therefore facilitates AI safety, because the composition of the dataset can be fully controlled. In addition, at the pretraining stage a RoBERTa-like model only needs to learn to recognize the syntactic and morphological patterns of the language, which can be created successfully in a fairly simple way, such as with a context-free grammar.
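The abstract describes generating an artificial pretraining corpus from rules written in a context-free grammar. The following Python sketch is only an illustration of that idea, not the grammar or generator used by the authors: a toy grammar over an invented vocabulary is expanded by random sampling into pseudo-language sentences (all nonterminals and words here are hypothetical).

    import random

    # Toy context-free grammar: nonterminals map to lists of productions,
    # each production is a sequence of symbols; anything without a rule
    # is treated as a terminal token of the pseudo-language.
    GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
        "VP":  [["V", "NP"], ["V"]],
        "Det": [["ta"], ["ko"]],
        "Adj": [["miru"], ["zela"]],
        "N":   [["bola"], ["kuna"], ["sito"]],
        "V":   [["rame"], ["dovi"]],
    }

    def expand(symbol, rng):
        """Recursively expand a symbol into a flat list of terminal tokens."""
        if symbol not in GRAMMAR:
            return [symbol]
        production = rng.choice(GRAMMAR[symbol])
        return [token for part in production for token in expand(part, rng)]

    def generate_corpus(n_sentences, seed=0):
        """Sample n_sentences pseudo-language sentences for pretraining."""
        rng = random.Random(seed)
        return [" ".join(expand("S", rng)) for _ in range(n_sentences)]

    if __name__ == "__main__":
        for sentence in generate_corpus(5):
            print(sentence)

Sentences produced this way exhibit fixed syntactic patterns while carrying no natural-language content, which is the property the paper relies on for controlling the pretraining dataset.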

Bibliographic Details
Published in: Doklady Mathematics, Vol. 108, No. Suppl 2, pp. S494–S502
Main Authors: Gorbacheva, T. E.; Bondarenko, I. Y.
Format: Journal Article
Language: English
Published: Moscow, Pleiades Publishing / Springer Nature B.V., 01.12.2023

ISSN: 1064-5624, 1531-8362
DOI: 10.1134/S1064562423701636