Nationality Bias in Text Generation



Bibliographic Details
Main Authors: Venkit, Pranav Narayanan; Gautam, Sanjana; Panchanadikar, Ruchi; Huang, Ting-Hao 'Kenneth'; Wilson, Shomir
Format: Journal Article
Language: English
Published: 05.02.2023

Summary: Little attention has been paid to analyzing nationality bias in language models, even though nationality is often used as a feature to improve the performance of social NLP models. This paper examines how a text generation model, GPT-2, accentuates pre-existing societal biases about country-based demonyms. We generate stories using GPT-2 for various nationalities and use sensitivity analysis to explore how the number of internet users and the country's economic status impact the sentiment of the stories. To reduce the propagation of biases through large language models (LLMs), we explore the debiasing method of adversarial triggering. Our results show that GPT-2 demonstrates significant bias against countries with fewer internet users, and that adversarial triggering effectively reduces this bias.
DOI: 10.48550/arxiv.2302.02463