Implications for Governance in Public Perceptions of Societal-scale AI Risks
Format: Journal Article
Language: English
Published: 10.06.2024
Summary: Amid growing concerns over AI's societal risks--ranging from civilizational collapse to misinformation and systemic bias--this study explores the perceptions of AI experts and US registered voters regarding the likelihood and impact of 18 specific AI risks, alongside their policy preferences for managing these risks. While both groups favor international oversight over national or corporate governance, our survey reveals a discrepancy: voters perceive AI risks as both more likely and more impactful than experts do, and also advocate for slower AI development. Specifically, our findings indicate that policy interventions may best assuage collective concerns if they more carefully balance mitigation efforts across all classes of societal-scale risks, effectively nullifying the near-term-versus-long-term debate over AI risks. More broadly, our results serve not only to enable more substantive policy discussions for preventing and mitigating AI risks, but also to underscore the challenge of consensus building for effective policy implementation.
DOI: 10.48550/arxiv.2406.06199