More Distinctively Black and Feminine Faces Lead to Increased Stereotyping in Vision-Language Models
Format: Journal Article
Language: English
Published: 21.05.2024
Summary: Vision Language Models (VLMs), exemplified by GPT-4V, adeptly integrate text and vision modalities. This integration enhances Large Language Models' ability to mimic human perception, allowing them to process image inputs. Despite VLMs' advanced capabilities, however, there is a concern that VLMs inherit biases of both modalities in ways that make biases more pervasive and difficult to mitigate. Our study explores how VLMs perpetuate homogeneity bias and trait associations with regard to race and gender. When prompted to write stories based on images of human faces, GPT-4V describes subordinate racial and gender groups with greater homogeneity than dominant groups and relies on distinct, yet generally positive, stereotypes. Importantly, VLM stereotyping is driven by visual cues rather than group membership alone, such that faces rated as more prototypically Black and feminine are subject to greater stereotyping. These findings suggest that VLMs may associate subtle visual cues related to racial and gender groups with stereotypes in ways that could be challenging to mitigate. We explore the underlying reasons behind this behavior, discuss its implications, and emphasize the importance of addressing these biases as VLMs come to mirror human perception.
DOI: 10.48550/arxiv.2407.06194