With a Grain of SALT: Are LLMs Fair Across Social Dimensions?
Main Authors | , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 16.10.2024 |
Subjects | |
Online Access | Get full text |
Summary: | This paper presents an analysis of biases in open-source Large Language
Models (LLMs) across various genders, religions, and races. We introduce a
methodology for generating a bias detection dataset using seven bias triggers:
General Debate, Positioned Debate, Career Advice, Story Generation,
Problem-Solving, Cover-Letter Writing, and CV Generation. We use GPT-4o to
generate a diverse set of prompts for each trigger across various gender,
religious, and racial groups. We evaluate models from the Llama and Gemma
families on the generated dataset. We anonymise the LLM-generated text
associated with each group using GPT-4o-mini and perform pairwise comparisons
using GPT-4o-as-a-Judge. To quantify bias in the LLM-generated text, we use the
number of wins and losses in the pairwise comparisons. Our analysis spans three
languages (English, German, and Arabic) to explore how language influences bias
manifestation. Our findings reveal that LLMs exhibit strong polarization toward
certain groups in each category, with notable consistency across models.
However, when switching languages, variations and anomalies emerge, often
attributable to cultural cues and contextual differences. |
---|---|
DOI: | 10.48550/arxiv.2410.12499 |
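
The win/loss scoring described in the abstract can be illustrated with a minimal sketch. The function name `bias_scores`, the `(group_a, group_b, winner)` tuple format for judge decisions, and the normalised win-minus-loss margin below are assumptions for illustration only; the record states only that bias is quantified by the number of wins and losses in GPT-4o-judged pairwise comparisons.

```python
from collections import defaultdict

def bias_scores(judgements):
    """Tally wins and losses per group from pairwise judge decisions.

    `judgements` is an iterable of (group_a, group_b, winner) tuples, where
    `winner` is "a", "b", or "tie" as returned by the judge model.
    Returns a dict mapping each group to its win-loss margin, normalised by
    the number of comparisons the group took part in. (Hypothetical scoring
    convention, not the authors' published formula.)
    """
    wins = defaultdict(int)
    losses = defaultdict(int)
    total = defaultdict(int)

    for group_a, group_b, winner in judgements:
        total[group_a] += 1
        total[group_b] += 1
        if winner == "a":
            wins[group_a] += 1
            losses[group_b] += 1
        elif winner == "b":
            wins[group_b] += 1
            losses[group_a] += 1
        # Ties count toward the number of comparisons but add no wins or losses.

    return {g: (wins[g] - losses[g]) / total[g] for g in total}

# Example: three hypothetical judge decisions for one bias trigger.
decisions = [
    ("group_x", "group_y", "a"),
    ("group_x", "group_z", "tie"),
    ("group_y", "group_z", "b"),
]
print(bias_scores(decisions))
# A score near 0 suggests balanced treatment; large positive or negative
# values indicate the judge systematically favours or disfavours a group.
```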