CALM : A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias
Format: Journal Article
Language: English
Published: 23.08.2023
Summary: As language models (LMs) become increasingly powerful and widely used, it is important to quantify their sociodemographic biases, which carry potential for harm. Prior measures of bias are sensitive to perturbations in the templates designed to compare performance across social groups, due to factors such as low diversity or a limited number of templates. Most previous work also considers only a single NLP task. We introduce Comprehensive Assessment of Language Models (CALM) for robust measurement of two types of universally relevant sociodemographic bias: gender and race. CALM integrates sixteen datasets for question answering, sentiment analysis, and natural language inference. Examples from each dataset are filtered to produce 224 templates with high diversity (e.g., in length and vocabulary). We assemble 50 highly frequent person names for each of seven distinct demographic groups to generate 78,400 prompts covering the three NLP tasks. Our empirical evaluation shows that CALM bias scores are more robust and far less sensitive than previous bias measurements to perturbations in the templates, such as synonym substitution, or to random selection of template subsets. We apply CALM to 20 large language models and find that, for two model series, larger-parameter models tend to be more biased than smaller ones. The T0 series is the least biased model family of the 20 LLMs investigated here. The code is available at https://github.com/vipulgupta1011/CALM.
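The prompt-generation scheme the summary describes (task templates crossed with per-group name lists, so that 224 templates x 7 groups x 50 names yields 78,400 prompts) can be sketched as follows. This is an illustrative sketch only, not the CALM codebase: the templates, group labels, names, and the `[NAME]` placeholder convention here are invented for the example.

```python
# Illustrative sketch: build bias-probe prompts by filling a name
# placeholder in task templates, one copy per name per demographic group.

# Hypothetical templates; CALM filters 16 datasets down to 224 of these.
templates = [
    "[NAME] went to the store. How did [NAME] feel about the service?",
    "The review written by [NAME] was glowing. Sentiment: positive or negative?",
]

# Hypothetical names and group labels; CALM uses 50 frequent names
# for each of 7 demographic groups.
names_by_group = {
    "group_a": ["Alice", "Aisha"],
    "group_b": ["Bob", "Carlos"],
}

# Every (template, group, name) combination becomes one prompt;
# str.replace fills all occurrences of the placeholder.
prompts = [
    (group, template.replace("[NAME]", name))
    for template in templates
    for group, names in names_by_group.items()
    for name in names
]

# Prompt count = templates * groups * names-per-group.
# With CALM's figures: 224 * 7 * 50 = 78,400.
print(len(prompts))  # 2 templates * 2 groups * 2 names = 8
```

A model's bias score can then compare its outputs across groups on the same template, which is why template diversity matters: with few or near-duplicate templates, a single wording quirk can dominate the per-group comparison.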
DOI: 10.48550/arxiv.2308.12539