SocialCounterfactuals: Probing and Mitigating Intersectional Social Biases in Vision-Language Models with Counterfactual Examples
Published in | arXiv.org |
---|---|
Main Authors | Howard, Phillip; Madasu, Avinash; Le, Tiep; Gustavo Lujan Moreno; Bhiwandiwalla, Anahita; Lal, Vasudev |
Format | Paper |
Genre | Working Paper/Pre-Print |
Language | English |
Published | Ithaca: Cornell University Library, arXiv.org, 09.04.2024 |
EISSN | 2331-8422 |
Subjects | Datasets; Gender; Human bias; Image quality; Physical properties; Race |
Online Access | https://www.proquest.com/docview/2898149294 |
Abstract | While vision-language models (VLMs) have achieved remarkable performance improvements recently, there is growing evidence that these models also possess harmful biases with respect to social attributes such as gender and race. Prior studies have primarily focused on probing such bias attributes individually while ignoring biases associated with intersections between social attributes. This could be due to the difficulty of collecting an exhaustive set of image-text pairs for various combinations of social attributes. To address this challenge, we employ text-to-image diffusion models to produce counterfactual examples for probing intersectional social biases at scale. Our approach utilizes Stable Diffusion with cross attention control to produce sets of counterfactual image-text pairs that are highly similar in their depiction of a subject (e.g., a given occupation) while differing only in their depiction of intersectional social attributes (e.g., race & gender). Through our over-generate-then-filter methodology, we produce SocialCounterfactuals, a high-quality dataset containing 171k image-text pairs for probing intersectional biases related to gender, race, and physical characteristics. We conduct extensive experiments to demonstrate the usefulness of our generated dataset for probing and mitigating intersectional social biases in state-of-the-art VLMs. |
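The abstract above describes a generate-then-filter pipeline: prompt variants that differ only in an intersectional attribute phrase are rendered with Stable Diffusion, and low-quality pairs are filtered out. The paper's released code is not reproduced here; the sketch below is a minimal, hypothetical illustration of that general idea using the Hugging Face diffusers and transformers libraries, with a shared random seed standing in for the paper's cross-attention control and a CLIP image-text similarity cutoff standing in for its filtering criteria. The model names, prompts, and threshold are assumptions, not values from the paper.

```python
# Hypothetical sketch of an over-generate-then-filter loop (not the authors' code).
# A fixed seed approximates the paper's cross-attention control; CLIP similarity
# approximates its filtering step.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image generator (model choice is an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# CLIP scorer used for the filtering step (also an assumption).
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

subject = "a photo of a doctor"  # e.g., a given occupation
attributes = ["Asian woman", "Black man", "white woman", "Latino man"]
prompts = [f"{subject}, who is a {attr}" for attr in attributes]

# Generate one image per counterfactual prompt, reusing the same seed so the
# scenes stay as comparable as possible across attribute variants.
generator = torch.Generator(device)
images = []
for prompt in prompts:
    generator.manual_seed(0)
    images.append(pipe(prompt, generator=generator, num_inference_steps=30).images[0])

# Filter: keep only the pairs whose image matches its own caption well under CLIP.
inputs = clip_proc(text=prompts, images=images, return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    sims = clip(**inputs).logits_per_image.diagonal()  # per-pair image-text similarity

threshold = 20.0  # illustrative cutoff, not a value from the paper
kept = [(p, img) for p, img, s in zip(prompts, images, sims) if s.item() > threshold]
print(f"kept {len(kept)} of {len(prompts)} counterfactual image-text pairs")
```

In the dataset itself, counterfactual sets cover subjects such as occupations crossed with race, gender, and physical-characteristic attributes; the filtering step described in the abstract retains only high-quality sets, yielding 171k image-text pairs.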
Copyright | 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |