Trajectory Design for UAV-Based Internet of Things Data Collection: A Deep Reinforcement Learning Approach
Published in | IEEE Internet of Things Journal, vol. 9, no. 5, pp. 3899–3912
Main Authors | Yang Wang, Zhen Gao, Jun Zhang, Xianbin Cao, Dezhi Zheng, Yue Gao, Derrick Wing Kwan Ng, Marco Di Renzo
Format | Journal Article |
Language | English |
Published | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.03.2022
Abstract | In this article, we investigate an unmanned aerial vehicle (UAV)-assisted Internet of Things (IoT) system in a sophisticated 3-D environment, where the UAV's trajectory is optimized to efficiently collect data from multiple IoT ground nodes. Unlike existing approaches, which focus only on a simplified 2-D scenario and assume the availability of perfect channel state information (CSI), this article considers a practical 3-D urban environment with imperfect CSI, where the UAV's trajectory is designed to minimize the data collection completion time subject to practical throughput and flight movement constraints. Specifically, inspired by state-of-the-art deep reinforcement learning approaches, we leverage the twin-delayed deep deterministic policy gradient (TD3) to design the UAV's trajectory, and we present a TD3-based trajectory design for completion time minimization (TD3-TDCTM) algorithm. In particular, we introduce an additional piece of information, i.e., the merged pheromone, to represent the state of the UAV and the environment and to serve as a reference for the reward, which facilitates the algorithm design. By taking the service statuses of the IoT nodes, the UAV's position, and the merged pheromone as input, the proposed algorithm can continuously and adaptively learn how to adjust the UAV's movement strategy. By interacting with the external environment in the corresponding Markov decision process, the proposed algorithm can achieve a near-optimal navigation strategy. Our simulation results show the superiority of the proposed TD3-TDCTM algorithm over three conventional nonlearning-based baseline methods.
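The abstract's key algorithmic ingredient is TD3, whose defining features are twin critics, target-policy smoothing, and delayed policy updates. The snippet below is an illustrative sketch only, not the paper's implementation: toy linear actor and critics with made-up weights stand in for the neural networks, and the state vector is a placeholder for the paper's (node service statuses, UAV position, merged pheromone) input. It shows how a TD3 target value combines clipped action noise with the minimum of two target critics.

```python
import numpy as np

rng = np.random.default_rng(0)

def td3_target(reward, next_state, gamma=0.99,
               noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    """Compute a TD3 target value y = r + gamma * min(Q1', Q2').

    Toy linear target actor/critics stand in for neural networks;
    all weights here are hypothetical, not from the paper.
    """
    # Toy target actor: a fixed linear policy (hypothetical weights).
    w_actor = np.array([0.5, -0.3, 0.1])
    a = np.clip(w_actor @ next_state, -act_limit, act_limit)

    # Target-policy smoothing: clipped Gaussian noise on the target action.
    eps = np.clip(rng.normal(0.0, noise_std), -noise_clip, noise_clip)
    a_noisy = np.clip(a + eps, -act_limit, act_limit)

    # Twin target critics (toy linear Q-functions); taking the minimum
    # curbs the value-overestimation bias that plain DDPG suffers from.
    q1 = 0.8 * next_state.sum() + 0.2 * a_noisy
    q2 = 0.7 * next_state.sum() + 0.3 * a_noisy
    return reward + gamma * min(q1, q2)

# Placeholder state: e.g., [node service status, UAV position, pheromone].
s_next = np.array([0.2, 0.1, 0.4])
y = td3_target(reward=1.0, next_state=s_next)
```

The third TD3 trick, delayed updates, would simply update the actor and the target networks once every few critic updates; it is omitted here since it concerns the training loop rather than the target computation.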
Author | Yang Wang (ORCID 0000-0001-6713-030X; 3120200843@bit.edu.cn), School of Information and Electronics, Beijing Institute of Technology, Beijing, China – Zhen Gao (ORCID 0000-0002-2709-0216; gaozhen16@bit.edu.cn), School of Information and Electronics, Beijing Institute of Technology, Beijing, China – Jun Zhang (ORCID 0000-0003-1017-7179; buaazhangjun@vip.sina.com), School of Information and Electronics, Beijing Institute of Technology, Beijing, China – Xianbin Cao (ORCID 0000-0002-5042-7884; xbcao@buaa.edu.cn), School of Electronic and Information Engineering, Beihang University, Beijing, China – Dezhi Zheng (ORCID 0000-0003-3998-5989; zhengdezhi@buaa.edu.cn), Innovation Institute of Frontier Science and Technology, School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, China – Yue Gao (ORCID 0000-0001-6502-9910; yue.gao@ieee.org), Department of Electrical and Electronic Engineering, University of Surrey, Surrey, U.K. – Derrick Wing Kwan Ng (ORCID 0000-0001-6400-712X; w.k.ng@unsw.edu.au), School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW, Australia – Marco Di Renzo (ORCID 0000-0003-0772-8793; marco.di.renzo@gmail.com), Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des Signaux et Systèmes, Gif-sur-Yvette, France
BackLink | https://hal.science/hal-03837851 (View record in HAL)
CODEN | IITJAU |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 Distributed under a Creative Commons Attribution 4.0 International License |
DOI | 10.1109/JIOT.2021.3102185 |
Discipline | Computer Science |
EISSN | 2327-4662 |
EndPage | 3912 |
Genre | orig-research |
GrantInformation | Basic Science Center for Autonomous Intelligent Unmanned Systems (62088101); National Natural Science Foundation of China (62071044); UNSW Digital Grid Futures Institute, UNSW, Sydney, under a Crossdisciplinary Fund Scheme; Beijing Natural Science Foundation (L182024); Science and Technology Innovation Plan from Beijing Institute of Technology; Australian Research Council's Discovery Project (DP210102169)
ISSN | 2327-4662 2372-2541 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | true |
Issue | 5 |
Keywords | Internet-of-Things (IoT); trajectory design; deep reinforcement learning; UAV communications; data collection
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 Distributed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0 |
OpenAccessLink | https://hal.science/hal-03837851 |
PQID | 2631959066 |
PQPubID | 2040421 |
PageCount | 14 |
PublicationCentury | 2000 |
PublicationDate | 2022-03-01 |
PublicationDecade | 2020 |
PublicationPlace | Piscataway |
PublicationTitle | IEEE internet of things journal |
PublicationTitleAbbrev | JIoT |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 3899 |
SubjectTerms | Algorithms; Completion time; Data collection; Deep learning; deep reinforcement learning (DRL); Engineering Sciences; Internet of Things; Internet of Things (IoT); Machine learning; Markov processes; Minimization; Nodes; Optimization; Resource management; Sensors; Signal and Image processing; Three-dimensional displays; Trajectory; trajectory design; Trajectory optimization; unmanned aerial vehicle (UAV) communications; Unmanned aerial vehicles; Urban environments
Title | Trajectory Design for UAV-Based Internet of Things Data Collection: A Deep Reinforcement Learning Approach |
URI | https://ieeexplore.ieee.org/document/9504602 https://www.proquest.com/docview/2631959066 https://hal.science/hal-03837851 |
Volume | 9 |