Beyond the Policy Gradient Theorem for Efficient Policy Updates in Actor-Critic Algorithms
Published in | arXiv.org
Main Authors | Laroche, Romain; Tachet, Remi
Format | Paper (preprint)
Language | English
Published | Ithaca: Cornell University Library, arXiv.org, 15.02.2022
EISSN | 2331-8422
Subjects | Algorithms; Entropy (Information theory); Machine learning; Optimization; Theorems
Copyright | 2022. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Online Access | Get full text: https://www.proquest.com/docview/2629163248
Abstract | In Reinforcement Learning, the optimal action at a given state is dependent on policy decisions at subsequent states. As a consequence, the learning targets evolve with time and the policy optimization process must be efficient at unlearning what it previously learnt. In this paper, we discover that the policy gradient theorem prescribes policy updates that are slow to unlearn because of their structural symmetry with respect to the value target. To increase the unlearning speed, we study a novel policy update: the gradient of the cross-entropy loss with respect to the action maximizing \(q\), but find that such updates may lead to a decrease in value. Consequently, we introduce a modified policy update devoid of that flaw, and prove its guarantees of convergence to global optimality in \(\mathcal{O}(t^{-1})\) under classic assumptions. Further, we assess standard policy updates and our cross-entropy policy updates along six analytical dimensions. Finally, we empirically validate our theoretical findings.
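The abstract contrasts two update directions for a softmax policy. The sketch below illustrates them on a single-state bandit with a tabular softmax policy. It is a minimal illustration, not the paper's implementation: the function names, learning rate, step count, and toy `q` vector (action values for the one state) are all assumptions, and the paper's modified cross-entropy update, the one carrying the convergence guarantee, is not reproduced here; only the plain policy-gradient and cross-entropy directions the abstract names.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def pg_update(logits, q, lr=0.1):
    # Policy-gradient direction for softmax logits on one state:
    # the gradient of sum_a pi(a) q(a) w.r.t. the logits is
    # pi * (q - pi.q). The step on the greedy action is scaled by
    # pi(a*), which is why a policy that has committed elsewhere
    # is slow to unlearn.
    pi = softmax(logits)
    return logits + lr * pi * (q - pi @ q)

def ce_update(logits, q, lr=0.1):
    # Plain cross-entropy direction toward a* = argmax_a q(a):
    # descending -log pi(a*) gives the step onehot(a*) - pi, whose
    # component on a* is 1 - pi(a*), large precisely when the policy
    # must unlearn. (Per the abstract, this raw update can decrease
    # the value; the paper's fixed variant is not shown here.)
    pi = softmax(logits)
    target = np.zeros_like(pi)
    target[np.argmax(q)] = 1.0
    return logits + lr * (target - pi)

# Toy check: the policy starts committed to a suboptimal action.
q = np.array([1.0, 0.0, 0.0])       # action 0 maximizes q
init = np.array([-4.0, 4.0, 0.0])   # but the policy prefers action 1
for update in (pg_update, ce_update):
    theta = init.copy()
    for _ in range(200):
        theta = update(theta, q)
    print(update.__name__, softmax(theta).round(3))
```

Run as-is, the cross-entropy update drives essentially all probability onto the q-maximizing action within the 200 steps, while the policy-gradient update barely moves (its step on action 0 is throttled by the near-zero initial probability of that action), matching the unlearning-speed asymmetry the abstract describes.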