Simulating Policy Impacts: Developing a Generative Scenario Writing Method to Evaluate the Perceived Effects of Regulation
Main Authors | Barnett, Julia; Kieslich, Kimon; Diakopoulos, Nicholas |
---|---|
Format | Journal Article |
Language | English |
Published | 15.05.2024 |
Subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language |
Online Access | https://arxiv.org/abs/2405.09679 |
Abstract | The rapid advancement of AI technologies yields numerous future impacts on
individuals and society. Policymakers are tasked with reacting quickly and establishing
policies that mitigate those impacts. However, anticipating the effectiveness
of policies is a difficult task, as some impacts might only be observable in
the future and respective policies might not be applicable to the future
development of AI. In this work we develop a method for using large language
models (LLMs) to evaluate the efficacy of a given piece of policy at mitigating
specified negative impacts. We do so by using GPT-4 to generate scenarios both
pre- and post-introduction of policy and translating these vivid stories into
metrics based on human perceptions of impacts. We leverage an already
established taxonomy of impacts of generative AI in the media environment to
generate a set of scenario pairs both mitigated and non-mitigated by the
transparency policy in Article 50 of the EU AI Act. We then run a user study
(n=234) to evaluate these scenarios across four risk-assessment dimensions:
severity, plausibility, magnitude, and specificity to vulnerable populations.
We find that this transparency legislation is perceived to be effective at
mitigating harms in areas such as labor and well-being, but largely ineffective
in areas such as social cohesion and security. Through this case study we
demonstrate the efficacy of our method as a tool to iterate on the
effectiveness of policy for mitigating various negative impacts. We expect this
method to be useful to researchers or other stakeholders who want to brainstorm
the potential utility of different pieces of policy or other mitigation
strategies. |
Author | Barnett, Julia; Kieslich, Kimon; Diakopoulos, Nicholas |
Copyright | http://creativecommons.org/licenses/by/4.0 |
DOI | 10.48550/arxiv.2405.09679 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
OpenAccessLink | https://arxiv.org/abs/2405.09679 |
PublicationDate | 2024-05-15 |
SecondaryResourceType | preprint |
SourceType | Open Access Repository |
SubjectTerms | Computer Science - Artificial Intelligence; Computer Science - Computation and Language |
URI | https://arxiv.org/abs/2405.09679 |