Developing Purely Data-Driven Multi-Mode Process Controllers Using Inverse Reinforcement Learning
Published in | Computer Aided Chemical Engineering Vol. 53; pp. 2731 - 2736 |
Main Authors | Lin, Runze; Chen, Junghui; Huang, Biao; Xie, Lei; Su, Hongye |
Format | Book Chapter |
Language | English |
Published | 2024 |
Subjects | data-driven controller design; inverse reinforcement learning; multi-mode process control; multi-task reinforcement learning |
Abstract | In recent years, process control researchers have been paying close attention to Deep Reinforcement Learning (DRL). DRL offers the potential for model-free controller design, but it is challenging to achieve satisfactory outcomes without accurate simulation models and well-designed reward functions, particularly in multi-mode processes. To address this issue, this paper presents a novel approach that combines inverse RL (IRL) and multi-task learning to provide a purely data-driven solution for multi-mode control design, allowing for transfer learning and adaptation in different operating modes. The effectiveness of this novel approach is demonstrated through a CSTR continuous control case using multi-mode historical closed-loop data. The proposed method offers a promising solution to the challenges of designing controllers for multi-mode processes. |
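The record contains no code, so the following is a minimal, purely illustrative sketch of the general idea the abstract describes: recovering a reward from multi-mode historical closed-loop data with a discriminator-style IRL objective, where a shared feature map plus a one-hot operating-mode code stands in for the multi-task conditioning. The toy process, feature choices, logistic reward model, and every name in the snippet are assumptions for illustration, not the authors' algorithm or their CSTR case study.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical multi-mode closed-loop data ------------------------------
# Three operating modes, each with its own setpoint. "Expert" samples mimic
# well-tuned historical closed-loop operation; "novice" samples mimic poor
# control. All numbers are illustrative only.
N_MODES, N_SAMPLES = 3, 500
SETPOINTS = np.array([0.2, 0.5, 0.8])

def make_transitions(mode, spread):
    """(state, action, mode) samples for one operating mode."""
    x = SETPOINTS[mode] + spread * rng.standard_normal(N_SAMPLES)      # state
    u = (SETPOINTS[mode] - x) + 0.02 * rng.standard_normal(N_SAMPLES)  # action
    return np.column_stack([x, u, np.full(N_SAMPLES, mode)])

expert = np.vstack([make_transitions(m, spread=0.02) for m in range(N_MODES)])
novice = np.vstack([make_transitions(m, spread=0.30) for m in range(N_MODES)])

# --- Shared features with a one-hot mode code (the "multi-task" part) -------
def features(batch):
    x, u, mode = batch[:, 0], batch[:, 1], batch[:, 2].astype(int)
    err = SETPOINTS[mode] - x                 # per-mode tracking error
    onehot = np.eye(N_MODES)[mode]            # mode / task indicator
    return np.column_stack([err**2, u**2, err * u, onehot, np.ones(len(x))])

# --- Discriminator-style IRL: a logistic reward that scores expert data high
def train_reward(expert, novice, lr=0.5, iters=2000):
    phi_e, phi_n = features(expert), features(novice)
    w = np.zeros(phi_e.shape[1])
    for _ in range(iters):
        p_e = 1.0 / (1.0 + np.exp(-phi_e @ w))   # P(expert | sample)
        p_n = 1.0 / (1.0 + np.exp(-phi_n @ w))
        # gradient ascent on the label log-likelihood (expert=1, novice=0)
        grad = phi_e.T @ (1.0 - p_e) / len(phi_e) - phi_n.T @ p_n / len(phi_n)
        w += lr * grad
    return w

w = train_reward(expert, novice)

# A single shared reward transfers across modes via the one-hot task code.
for m in range(N_MODES):
    r_good = features(make_transitions(m, 0.02)) @ w
    r_bad = features(make_transitions(m, 0.30)) @ w
    print(f"mode {m}: mean reward well-controlled={r_good.mean():+.3f}, "
          f"poorly controlled={r_bad.mean():+.3f}")
```

In this toy the recovered weights penalize tracking error and aggressive control moves in every mode, which is the kind of transferable, mode-aware reward the abstract alludes to; the chapter's actual IRL and multi-task machinery, and its CSTR study, are not reproduced here.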
Author | Lin, Runze; Xie, Lei; Chen, Junghui; Su, Hongye; Huang, Biao |
Author_xml | – sequence 1: Lin, Runze (State Key Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China) – sequence 2: Chen, Junghui (Department of Chemical Engineering, Chung-Yuan Christian University, Taoyuan 32023, Taiwan, R.O.C.; jason@wavenet.cycu.edu.tw) – sequence 3: Huang, Biao (Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB T6G 2G6, Canada; biao.huang@ualberta.ca) – sequence 4: Xie, Lei (State Key Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China) – sequence 5: Su, Hongye (State Key Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China) |
ContentType | Book Chapter |
Copyright | 2024 Elsevier B.V. |
DOI | 10.1016/B978-0-443-28824-1.50456-7 |
EndPage | 2736 |
ISBN | 9780443288241 0443288240 |
ISSN | 1570-7946 |
Keywords | data-driven controller design; inverse reinforcement learning; multi-mode process control; multi-task reinforcement learning |
PageCount | 6 |
PublicationDate | 2024 |
PublicationTitle | Computer Aided Chemical Engineering |
PublicationYear | 2024 |
StartPage | 2731 |
SubjectTerms | data-driven controller design; inverse reinforcement learning; multi-mode process control; multi-task reinforcement learning |
Title | Developing Purely Data-Driven Multi-Mode Process Controllers Using Inverse Reinforcement Learning |
URI | https://dx.doi.org/10.1016/B978-0-443-28824-1.50456-7 |
Volume | 53 |