On Avoiding Power-Seeking by Artificial Intelligence
Main Author | Turner, Alexander Matt |
---|---|
Format | Journal Article |
Language | English |
Published | 23.06.2022 |
Subjects | Computer Science - Artificial Intelligence |
Online Access | [Get full text](https://arxiv.org/abs/2206.11831) |
Abstract | We do not know how to align a very intelligent AI agent's behavior with human interests. I investigate whether -- absent a full solution to this AI alignment problem -- we can build smart AI agents which have limited impact on the world, and which do not autonomously seek power. In this thesis, I introduce the attainable utility preservation (AUP) method. I demonstrate that AUP produces conservative, option-preserving behavior within toy gridworlds and within complex environments based off of Conway's Game of Life. I formalize the problem of side effect avoidance, which provides a way to quantify the side effects an agent had on the world. I also give a formal definition of power-seeking in the context of AI agents and show that optimal policies tend to seek power. In particular, most reward functions have optimal policies which avoid deactivation. This is a problem if we want to deactivate or correct an intelligent agent after we have deployed it. My theorems suggest that since most agent goals conflict with ours, the agent would very probably resist correction. I extend these theorems to show that power-seeking incentives occur not just for optimal decision-makers, but under a wide range of decision-making procedures. |
---|---|
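For readers skimming the record, the attainable utility preservation (AUP) method named in the abstract penalizes actions that change the agent's ability to optimize a set of auxiliary reward functions, relative to doing nothing. The sketch below only illustrates that idea and is not the thesis's exact formulation: the names (`aup_reward`, `aux_q_tables`, `noop`, `lam`) are invented for this example, the auxiliary Q-values are assumed to be precomputed, and the published variants add scaling terms that this simplified version omits.

```python
import numpy as np

def aup_reward(task_reward, aux_q_tables, state, action, noop, lam=0.1):
    """Simplified AUP-style penalized reward (illustrative sketch only).

    task_reward  : R[state, action] array for the primary task.
    aux_q_tables : list of Q[state, action] arrays, one per auxiliary reward
                   function, assumed precomputed for the environment.
    noop         : index of the "do nothing" action used as the baseline.
    lam          : penalty weight; larger values yield more conservative agents.
    """
    # Average absolute change in attainable auxiliary value versus inaction.
    penalty = np.mean([abs(Q[state, action] - Q[state, noop]) for Q in aux_q_tables])
    return task_reward[state, action] - lam * penalty
```

Penalizing the difference against a no-op baseline is what produces the conservative, option-preserving behavior the abstract describes: large swings in what the agent could achieve for other goals are treated as side effects and discouraged.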
Author | Turner, Alexander Matt |
ContentType | Journal Article |
Copyright | http://creativecommons.org/licenses/by/4.0 |
DOI | 10.48550/arxiv.2206.11831 |
Language | English |
OpenAccessLink | https://arxiv.org/abs/2206.11831 |
PublicationDate | 2022-06-23 |
SecondaryResourceType | preprint |
SourceID | arxiv |
SourceType | Open Access Repository |
SubjectTerms | Computer Science - Artificial Intelligence |
Title | On Avoiding Power-Seeking by Artificial Intelligence |
URI | https://arxiv.org/abs/2206.11831 |