A Reinforcement Learning Approach to Jointly Adapt Vehicular Communications and Planning for Optimized Driving
Published in | arXiv.org |
---|---|
Main Authors | Pal, Mayank K; Bhati, Rupali; Sharma, Anil; Kaul, Sanjit K; Saket Anand; Sujit, P B |
Format | Paper |
Language | English |
Published | Ithaca: Cornell University Library, arXiv.org, 10.07.2018 |
Subjects | Algorithms; Autonomous vehicles; Computer simulation; Machine learning; Motion planning; Optimization; Planning; Utilities |
Online Access | Get full text |
Abstract | Our premise is that autonomous vehicles must optimize communications and motion planning jointly. Specifically, a vehicle must adapt its motion plan while staying cognizant of communications-rate-related constraints, and adapt its use of communications while being cognizant of motion-planning-related restrictions that may be imposed by the on-road environment. To this end, we formulate a reinforcement learning problem wherein an autonomous vehicle jointly chooses (a) a motion planning action that is executed on-road and (b) a communications action of querying sensed information from the infrastructure. The goal is to optimize the driving utility of the autonomous vehicle. We apply the Q-learning algorithm so that the vehicle learns the optimal policy, which makes the optimal choice of planning and communications actions at any given time. Using simulations, we demonstrate that the optimal policy smartly adapts communications and planning actions while achieving large driving utilities. |
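The abstract describes tabular Q-learning over a joint action space that pairs a motion-planning action with a communications (infrastructure-query) action. The sketch below is a minimal, hypothetical illustration of that idea, assuming a generic environment interface (`env.reset()`, `env.step()`), placeholder action names, and a reward that stands in for the driving utility; it is not the paper's actual state space, reward, or simulator.

```python
import random
from collections import defaultdict

# Hypothetical joint action space: each joint action pairs a motion-planning
# choice with a communications choice (whether to query the infrastructure).
PLANNING_ACTIONS = ["keep_lane", "change_lane", "brake"]   # illustrative names
COMM_ACTIONS = ["no_query", "query_infrastructure"]        # illustrative names
JOINT_ACTIONS = [(p, c) for p in PLANNING_ACTIONS for c in COMM_ACTIONS]

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration


def q_learning(env, episodes=5000):
    """Tabular Q-learning over the joint (planning, communications) actions.

    `env` is an assumed interface: reset() -> state and
    step(action) -> (next_state, reward, done), where reward stands in for
    the driving utility net of any communications cost.
    """
    Q = defaultdict(lambda: {a: 0.0 for a in JOINT_ACTIONS})
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy selection of a joint action.
            if random.random() < EPSILON:
                action = random.choice(JOINT_ACTIONS)
            else:
                action = max(Q[state], key=Q[state].get)
            next_state, reward, done = env.step(action)
            # Standard Q-learning update toward the one-step bootstrap target.
            target = reward + (0.0 if done else GAMMA * max(Q[next_state].values()))
            Q[state][action] += ALPHA * (target - Q[state][action])
            state = next_state
    # Greedy policy: the learned mapping from state to joint action.
    return {s: max(acts, key=acts.get) for s, acts in Q.items()}
```

Under these assumptions, a learned policy would, for example, query the infrastructure only in states where the extra sensed information is expected to raise driving utility by more than the cost of communicating.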
Author | Kaul, Sanjit K; Bhati, Rupali; Sujit, P B; Pal, Mayank K; Saket Anand; Sharma, Anil |
Author_xml | – sequence: 1 givenname: Mayank surname: Pal middlename: K fullname: Pal, Mayank K – sequence: 2 givenname: Rupali surname: Bhati fullname: Bhati, Rupali – sequence: 3 givenname: Anil surname: Sharma fullname: Sharma, Anil – sequence: 4 givenname: Sanjit surname: Kaul middlename: K fullname: Kaul, Sanjit K – sequence: 5 fullname: Saket Anand – sequence: 6 givenname: P surname: Sujit middlename: B fullname: Sujit, P B |
ContentType | Paper |
Copyright | 2018. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
Copyright_xml | – notice: 2018. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
DBID | 8FE 8FG ABJCF ABUWG AFKRA AZQEC BENPR BGLVJ CCPQU DWQXO HCIFZ L6V M7S PIMPY PQEST PQQKQ PQUKI PRINS PTHSS |
DatabaseName | ProQuest SciTech Collection ProQuest Technology Collection Materials Science & Engineering Collection ProQuest Central (Alumni) ProQuest Central UK/Ireland ProQuest Central Essentials ProQuest Central Technology Collection ProQuest One Community College ProQuest Central SciTech Premium Collection (Proquest) (PQ_SDU_P3) ProQuest Engineering Collection ProQuest Engineering Database ProQuest - Publicly Available Content Database ProQuest One Academic Eastern Edition (DO NOT USE) ProQuest One Academic ProQuest One Academic UKI Edition ProQuest Central China Engineering Collection |
DatabaseTitle | Publicly Available Content Database Engineering Database Technology Collection ProQuest Central Essentials ProQuest One Academic Eastern Edition ProQuest Central (Alumni Edition) SciTech Premium Collection ProQuest One Community College ProQuest Technology Collection ProQuest SciTech Collection ProQuest Central China ProQuest Central ProQuest Engineering Collection ProQuest One Academic UKI Edition ProQuest Central Korea Materials Science & Engineering Collection ProQuest One Academic Engineering Collection |
DatabaseTitleList | Publicly Available Content Database |
Database_xml | – sequence: 1 dbid: 8FG name: ProQuest Technology Collection url: https://search.proquest.com/technologycollection1 sourceTypes: Aggregation Database |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Physics |
EISSN | 2331-8422 |
Genre | Working Paper/Pre-Print |
GroupedDBID | 8FE 8FG ABJCF ABUWG AFKRA ALMA_UNASSIGNED_HOLDINGS AZQEC BENPR BGLVJ CCPQU DWQXO FRJ HCIFZ L6V M7S M~E PIMPY PQEST PQQKQ PQUKI PRINS PTHSS |
ID | FETCH-proquest_journals_20733577283 |
IEDL.DBID | 8FG |
IngestDate | Thu Oct 10 17:27:35 EDT 2024 |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
Language | English |
LinkModel | DirectLink |
MergedId | FETCHMERGED-proquest_journals_20733577283 |
OpenAccessLink | https://www.proquest.com/docview/2073357728?pq-origsite=%requestingapplication% |
PQID | 2073357728 |
PQPubID | 2050157 |
ParticipantIDs | proquest_journals_2073357728 |
PublicationCentury | 2000 |
PublicationDate | 20180710 |
PublicationDateYYYYMMDD | 2018-07-10 |
PublicationDate_xml | – month: 07 year: 2018 text: 20180710 day: 10 |
PublicationDecade | 2010 |
PublicationPlace | Ithaca |
PublicationPlace_xml | – name: Ithaca |
PublicationTitle | arXiv.org |
PublicationYear | 2018 |
Publisher | Cornell University Library, arXiv.org |
Publisher_xml | – name: Cornell University Library, arXiv.org |
SSID | ssj0002672553 |
Score | 3.1499305 |
SecondaryResourceType | preprint |
Snippet | Our premise is that autonomous vehicles must optimize communications and motion planning jointly. Specifically, a vehicle must adapt its motion plan staying... |
SourceID | proquest |
SourceType | Aggregation Database |
SubjectTerms | Algorithms Autonomous vehicles Computer simulation Machine learning Motion planning Optimization Planning Utilities |
Title | A Reinforcement Learning Approach to Jointly Adapt Vehicular Communications and Planning for Optimized Driving |
URI | https://www.proquest.com/docview/2073357728 |
hasFullText | 1 |
inHoldings | 1 |
isFullTextHit | |
isPrint | |
linkProvider | ProQuest |
openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=A+Reinforcement+Learning+Approach+to+Jointly+Adapt+Vehicular+Communications+and+Planning+for+Optimized+Driving&rft.jtitle=arXiv.org&rft.au=Pal%2C+Mayank+K&rft.au=Bhati%2C+Rupali&rft.au=Sharma%2C+Anil&rft.au=Kaul%2C+Sanjit+K&rft.date=2018-07-10&rft.pub=Cornell+University+Library%2C+arXiv.org&rft.eissn=2331-8422 |