A smart agriculture IoT system based on deep reinforcement learning

Bibliographic Details
Published in Future Generation Computer Systems Vol. 99; pp. 500–507
Main Authors Bu, Fanyu, Wang, Xin
Format Journal Article
Language English
Published Elsevier B.V 01.10.2019
Abstract Smart agriculture systems based on the Internet of Things are among the most promising approaches to increasing food production while reducing the consumption of resources such as fresh water. In this study, we present a smart agriculture IoT system based on deep reinforcement learning that includes four layers, namely an agricultural data collection layer, an edge computing layer, an agricultural data transmission layer, and a cloud computing layer. The presented system integrates advanced information technologies, especially artificial intelligence and cloud computing, with agricultural production to increase food production. Specifically, deep reinforcement learning, one of the most advanced artificial intelligence models, is applied in the cloud layer to make immediate smart decisions, such as determining the amount of irrigation water needed to improve the crop growth environment. We present several representative deep reinforcement learning models together with their broad applications. Finally, we discuss the open challenges and the potential applications of deep reinforcement learning in smart agriculture IoT systems. •We design a smart agriculture IoT system based on edge–cloud computing.•We present several representative deep reinforcement learning models.•We discuss the possible challenges and applications of deep reinforcement learning in smart agriculture.
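The irrigation decision the abstract describes can be illustrated with a minimal, entirely hypothetical reinforcement-learning sketch. This is tabular Q-learning over a toy soil-moisture model, not the paper's actual deep model; the states, actions, evaporation rule, and reward used here are all invented for illustration:

```python
import random

random.seed(0)

# Hypothetical toy setup: discretized soil-moisture states and irrigation actions.
STATES = range(11)       # soil moisture level 0..10
ACTIONS = [0, 1, 2, 3]   # litres of water to apply (toy units)
TARGET = 7               # moisture level the crop prefers (assumed)

def step(state, action):
    """Toy environment: irrigation raises moisture, evaporation lowers it by 1."""
    nxt = max(0, min(10, state + action - 1))
    reward = -abs(nxt - TARGET)   # the closer to the target moisture, the better
    return nxt, reward

# Tabular Q-learning: learn how much to irrigate in each moisture state.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(500):
    s = random.choice(list(STATES))
    for _ in range(20):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# Greedy irrigation policy: dry states get more water, wet states get less.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in STATES}
```

In the converged toy policy, a dry state (e.g. moisture 5) irrigates heavily while a state at or above the target applies little or no water; the cloud layer in the paper plays the role of the learner here, with edge devices supplying the state observations.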
Author Wang, Xin
Bu, Fanyu
Author_xml – sequence: 1
  givenname: Fanyu
  surname: Bu
  fullname: Bu, Fanyu
  email: bufanyu@imufe.edu.cn
  organization: College of Computer and Information Management, Inner Mongolia University of Finance and Economics, Hohhot, China
– sequence: 2
  givenname: Xin
  surname: Wang
  fullname: Wang, Xin
  organization: Center of Information and Network Technology, Inner Mongolia Agricultural University, Hohhot, China
BookMark eNqFkM1KAzEUhYNUsK2-gYu8wIz5mWk6LoRS_CkU3FRwFzI3NyVlmilJKvTtnVJXLhQO3M39DpxvQkahD0jIPWclZ3z2sCvdMR8jloLxpmTVEH5FxnyuRKE4r0dkPLypQsnm84ZMUtoxxriSfEyWC5r2JmZqttHDsTvX0FW_oemUMu5paxJa2gdqEQ80og-uj4B7DJl2aGLwYXtLrp3pEt793Cn5eHneLN-K9fvrarlYFyCVyEUtXGtta5pWWGg4CDuruLPNrFHWWWbmsgWcS9FYK6WRjreC1TVKBSBBGZBT8njphdinFNFp8Nlk34ccje80Z_qsQ-_0RYc-69CsGsIHuPoFH6Iflp_-w54uGA7DvjxGncBjALQ-ImRte_93wTen34Be
CitedBy_id crossref_primary_10_1002_ett_4463
crossref_primary_10_1016_j_future_2022_07_026
crossref_primary_10_1007_s11277_021_08970_7
crossref_primary_10_1016_j_scs_2021_102830
crossref_primary_10_3390_bioengineering10020125
crossref_primary_10_1108_BFJ_08_2021_0934
crossref_primary_10_1109_JIOT_2020_3045479
crossref_primary_10_1007_s11277_022_10016_5
crossref_primary_10_1016_j_arcontrol_2021_01_001
crossref_primary_10_3390_s22134874
crossref_primary_10_32604_cmc_2020_012517
crossref_primary_10_1002_ese3_1134
crossref_primary_10_1016_j_micpro_2023_104905
crossref_primary_10_1109_ACCESS_2024_3428401
crossref_primary_10_1007_s12652_021_03605_y
crossref_primary_10_3390_horticulturae10010049
crossref_primary_10_1109_ACCESS_2022_3232485
crossref_primary_10_1016_j_procs_2021_07_060
crossref_primary_10_1109_TITS_2023_3256563
crossref_primary_10_1002_gamm_202100007
crossref_primary_10_1109_JSEN_2021_3054561
crossref_primary_10_1111_exsy_13090
crossref_primary_10_1016_j_compag_2022_107119
crossref_primary_10_3390_computers11090135
crossref_primary_10_1155_2020_8090521
crossref_primary_10_1016_j_atech_2025_100848
crossref_primary_10_1109_COMST_2022_3151028
crossref_primary_10_1007_s42979_024_03319_w
crossref_primary_10_1016_j_techsoc_2020_101415
crossref_primary_10_1007_s12652_021_03685_w
crossref_primary_10_1016_j_compag_2022_107182
crossref_primary_10_1016_j_matpr_2021_03_480
crossref_primary_10_1109_ACCESS_2022_3199353
crossref_primary_10_1016_j_adapen_2022_100119
crossref_primary_10_1080_03772063_2023_2192000
crossref_primary_10_1016_j_seta_2022_102307
crossref_primary_10_1631_FITEE_2300668
crossref_primary_10_3390_agronomy12071643
crossref_primary_10_1016_j_iot_2020_100262
crossref_primary_10_1108_LHT_10_2022_0473
crossref_primary_10_1016_j_comnet_2019_107039
crossref_primary_10_1108_RIA_07_2024_0146
crossref_primary_10_2174_2666255815666220225102615
crossref_primary_10_1109_JAS_2021_1003925
crossref_primary_10_1109_TII_2022_3216295
crossref_primary_10_1016_j_phycom_2024_102460
crossref_primary_10_1142_S0219649224500989
crossref_primary_10_3390_computers11070104
crossref_primary_10_1002_agj2_21061
crossref_primary_10_1007_s11042_023_15442_6
crossref_primary_10_1142_S0219649221400062
crossref_primary_10_32604_cmc_2021_015568
crossref_primary_10_1016_j_neunet_2021_11_021
crossref_primary_10_1016_j_compag_2025_110028
crossref_primary_10_1038_s41598_022_18635_5
crossref_primary_10_3390_stats3030018
crossref_primary_10_1016_j_compag_2024_109032
crossref_primary_10_1007_s12652_020_02752_y
crossref_primary_10_2478_amns_2023_2_00175
crossref_primary_10_1016_j_micpro_2023_104894
crossref_primary_10_1088_1742_6596_2466_1_012028
crossref_primary_10_1515_jisys_2022_0012
crossref_primary_10_1016_j_aiia_2023_04_002
crossref_primary_10_33411_IJIST_2022040403
crossref_primary_10_1016_j_suscom_2023_100890
crossref_primary_10_1109_ACCESS_2022_3187528
crossref_primary_10_1109_TII_2023_3257299
crossref_primary_10_1016_j_jksuci_2022_06_017
crossref_primary_10_3390_s20051334
crossref_primary_10_1111_jfpe_14429
crossref_primary_10_3390_s24020495
crossref_primary_10_3390_su151411337
crossref_primary_10_1007_s10586_022_03599_y
crossref_primary_10_1016_j_compag_2023_108154
crossref_primary_10_3390_su141811487
crossref_primary_10_1007_s42979_021_00815_1
crossref_primary_10_1016_j_compag_2022_107608
crossref_primary_10_47115_bsagriculture_1536744
crossref_primary_10_1016_j_iswa_2023_200218
crossref_primary_10_1109_ACCESS_2020_2970143
crossref_primary_10_1016_j_eswa_2024_124740
crossref_primary_10_1016_j_engappai_2022_105116
crossref_primary_10_1109_OJVT_2024_3502803
crossref_primary_10_3390_make3040043
crossref_primary_10_1016_j_marpol_2022_105158
crossref_primary_10_61927_igmin210
crossref_primary_10_3390_electronics12051248
crossref_primary_10_1002_spe_3193
crossref_primary_10_3390_agriculture13101900
crossref_primary_10_46300_9106_2020_14_134
crossref_primary_10_1016_j_heliyon_2024_e29564
crossref_primary_10_1080_27685241_2021_2008777
crossref_primary_10_1016_j_future_2021_04_018
crossref_primary_10_1080_23270012_2023_2207184
crossref_primary_10_1007_s11831_022_09761_4
crossref_primary_10_31590_ejosat_1252946
crossref_primary_10_1016_j_procs_2024_06_103
crossref_primary_10_1186_s43067_024_00184_8
crossref_primary_10_3389_fsufs_2025_1551460
crossref_primary_10_1016_j_tpb_2021_06_002
crossref_primary_10_1016_j_agwat_2021_106838
crossref_primary_10_3390_electronics13101894
crossref_primary_10_1038_s41477_021_00946_6
crossref_primary_10_3390_rs14030638
crossref_primary_10_1109_JIOT_2021_3088875
crossref_primary_10_3390_su14031667
crossref_primary_10_1109_ACCESS_2024_3495032
crossref_primary_10_18178_joaat_6_4_241_245
crossref_primary_10_1007_s41870_022_01021_9
crossref_primary_10_1109_ACCESS_2024_3426279
crossref_primary_10_1109_TDSC_2021_3131991
crossref_primary_10_1111_exsy_12892
crossref_primary_10_1371_journal_pone_0246092
crossref_primary_10_3390_agronomy11081568
crossref_primary_10_1016_j_iot_2022_100580
crossref_primary_10_1108_JSTPM_09_2021_0130
crossref_primary_10_1109_JSEN_2021_3049471
crossref_primary_10_1016_j_envsci_2023_103600
crossref_primary_10_1109_ACCESS_2020_3033557
crossref_primary_10_1002_int_22756
crossref_primary_10_1016_j_inpa_2023_08_006
crossref_primary_10_1016_j_egyr_2020_09_022
crossref_primary_10_1109_COMST_2023_3338153
crossref_primary_10_1016_j_plaphy_2023_108051
crossref_primary_10_2174_18743315_v17_e230404_2022_53
crossref_primary_10_1088_1742_6596_1767_1_012026
crossref_primary_10_1142_S0217984920504187
crossref_primary_10_1007_s42853_023_00192_y
crossref_primary_10_1002_prs_12150
crossref_primary_10_1016_j_compeleceng_2022_108089
crossref_primary_10_1016_j_cosrev_2020_100303
crossref_primary_10_1109_ACCESS_2020_2992480
crossref_primary_10_1016_j_plana_2024_100079
crossref_primary_10_1109_JIOT_2021_3131524
crossref_primary_10_1016_j_inpa_2020_10_004
crossref_primary_10_1007_s42979_023_02085_5
crossref_primary_10_1590_1678_4162_12659
crossref_primary_10_1016_j_sna_2023_114605
crossref_primary_10_1080_10429247_2024_2407254
crossref_primary_10_1016_j_bcra_2025_100276
crossref_primary_10_3390_s19173667
crossref_primary_10_4108_eetiot_5363
crossref_primary_10_1016_j_compag_2021_106495
crossref_primary_10_1109_MM_2021_3137401
crossref_primary_10_32604_jiot_2023_039391
crossref_primary_10_1007_s42853_020_00078_3
crossref_primary_10_1016_j_scitotenv_2021_148539
crossref_primary_10_1155_2022_9042382
crossref_primary_10_3390_electronics12102336
crossref_primary_10_1007_s10489_021_02884_4
crossref_primary_10_1109_ACCESS_2023_3292302
crossref_primary_10_3390_s21227502
crossref_primary_10_1109_ACCESS_2024_3392338
crossref_primary_10_3389_fpls_2024_1284861
crossref_primary_10_1016_j_aac_2022_10_001
crossref_primary_10_1155_2021_7179374
crossref_primary_10_1155_2020_8818616
crossref_primary_10_1093_jcde_qwac087
crossref_primary_10_3390_s23073752
Cites_doi 10.1109/TII.2018.2808910
10.1016/j.future.2013.01.010
10.1613/jair.806
10.1109/TBDATA.2019.2903092
10.1016/j.jnca.2018.12.012
10.1016/j.future.2019.02.068
10.1145/3110218
10.1038/nature14236
10.1109/MCC.2017.5
10.1038/nature14540
10.1145/2886779
10.1007/BF00992696
10.1109/TC.2015.2470255
ContentType Journal Article
Copyright 2019 Elsevier B.V.
Copyright_xml – notice: 2019 Elsevier B.V.
DBID AAYXX
CITATION
DOI 10.1016/j.future.2019.04.041
DatabaseName CrossRef
DatabaseTitle CrossRef
DatabaseTitleList
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 1872-7115
EndPage 507
ExternalDocumentID 10_1016_j_future_2019_04_041
S0167739X19307277
GroupedDBID --K
--M
-~X
.DC
.~1
0R~
1B1
1~.
1~5
29H
4.4
457
4G.
5GY
5VS
7-5
71M
8P~
9JN
AACTN
AAEDT
AAEDW
AAIAV
AAIKJ
AAKOC
AALRI
AAOAW
AAQFI
AAQXK
AAXUO
AAYFN
ABBOA
ABFNM
ABJNI
ABMAC
ABXDB
ABYKQ
ACDAQ
ACGFS
ACNNM
ACRLP
ACZNC
ADBBV
ADEZE
ADJOM
ADMUD
AEBSH
AEKER
AFKWA
AFTJW
AGHFR
AGUBO
AGYEJ
AHHHB
AHZHX
AIALX
AIEXJ
AIKHN
AITUG
AJBFU
AJOXV
ALMA_UNASSIGNED_HOLDINGS
AMFUW
AMRAJ
AOUOD
ASPBG
AVWKF
AXJTR
AZFZN
BKOJK
BLXMC
CS3
EBS
EFJIC
EFLBG
EJD
EO8
EO9
EP2
EP3
F5P
FDB
FEDTE
FGOYB
FIRID
FNPLU
FYGXN
G-Q
G8K
GBLVA
GBOLZ
HLZ
HVGLF
HZ~
IHE
J1W
KOM
LG9
M41
MO0
MS~
N9A
O-L
O9-
OAUVE
OZT
P-8
P-9
PC.
Q38
R2-
RIG
ROL
RPZ
SBC
SDF
SDG
SES
SEW
SPC
SPCBC
SSV
SSZ
T5K
UHS
WUQ
XPP
ZMT
~G-
AATTM
AAXKI
AAYWO
AAYXX
ABDPE
ABWVN
ACRPL
ADNMO
AEIPS
AFJKZ
AFXIZ
AGCQF
AGQPQ
AGRNS
AIIUN
ANKPU
APXCP
BNPGV
CITATION
SSH
ID FETCH-LOGICAL-c372t-52fbddba9b2dc91c2d641fd9697dfd0a83bce8329dd33a3f1b2055e37cc3c7ac3
IEDL.DBID .~1
ISSN 0167-739X
IngestDate Tue Jul 01 01:42:40 EDT 2025
Thu Apr 24 22:56:44 EDT 2025
Fri Feb 23 02:30:14 EST 2024
IsPeerReviewed true
IsScholarly true
Keywords Smart agriculture IoT
Cloud computing
Deep reinforcement learning
Edge computing
Language English
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c372t-52fbddba9b2dc91c2d641fd9697dfd0a83bce8329dd33a3f1b2055e37cc3c7ac3
PageCount 8
ParticipantIDs crossref_citationtrail_10_1016_j_future_2019_04_041
crossref_primary_10_1016_j_future_2019_04_041
elsevier_sciencedirect_doi_10_1016_j_future_2019_04_041
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate October 2019
2019-10-00
PublicationDateYYYYMMDD 2019-10-01
PublicationDate_xml – month: 10
  year: 2019
  text: October 2019
PublicationDecade 2010
PublicationTitle Future Generation Computer Systems
PublicationYear 2019
Publisher Elsevier B.V
Publisher_xml – name: Elsevier B.V
References Oh, Chockalingam, Singh, Lee (b33) 2016
Heess, Wayne, Silver, Lillicrap, Tassa, Erez (b21) 2015
A.A. Rusu, N.C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, R. Hadsell, Progressive Neural Networks, (2016)
Baxter, Bartlett (b16) 2001; 15
Zhang, Zhong, Yang, Chen, Bu (b9) 2016; 12
A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, R. Vicente, Multiagent Cooperation and Competition with Deep Reinforcement Learning, (2015)
Liu, Yao, Yu, Wu (b6) 2019; 97
Mnih, Kavukcuoglu, Silver, Rusu, Veness, Bellemare, Graves, Riedmiller, Fidjeland, Ostrovski, Petersen, Beattie, Sadik, Antonoglou, King, Kumaran, Wierstra, Legg, Hassabis (b10) 2015; 518
Zhang, Bai, Chen, Li, Wang, Gao (b13) 2019; 129
Zhang, Yang, Yan, Chen, Li (b42) 2018; 14
A.A. Rusu, S.G. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, R. Hadsell, Policy Distillation, (2015)
J.N. Foerster, Y.M. Assael, N. de Freitas, S. Whiteson, Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks, (2016)
Sukhbaatar, Szlam, Weston, Fergus (b32) 2015
Roopaei, Rad, Choo (b2) 2017; 4
Littman (b12) 2015; 521
Levine, Finn, Darrell, Abbeel (b18) 2016; 17
Gubbi, Buyya, Marusic, Palaniswami (b4) 2013; 29
Silver, Lever, Heess, Degris, Wierstra, Riedmiller (b17) 2014
Zhang, Yang, Chen, Li (b11) 2019
Bhatnagar, Sutton, Ghavamzadeh, Lee (b19) 2008
A. Graves, G. Wayne, I. Danihelka, Neural Turing Machines, (2014)
Satija, Mcgill, Pineau (b40) 2016
Zhang, Bai, Chen, Li, Yu, Wang, Gao (b7) 2019
D. Balduzzi, M. Ghifary, Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies, (2015)
Huang, Lin, Zhang (b5) 2017
Zhang, Yang, Chen, Li, Bu (b3) 2018
Calandriello, lazaric, Restelli (b24) 2014
Parisotto, Ba, Salakhutdinov (b25) 2016
Zhang, McCarthy, Finn, Levine, Abbeel (b34) 2016
Rao, Lu, Zhou (b36) 2017
Das, Kottur, Moura, Lee, Batra (b38) 2017
Zhang, Yang, Chen (b8) 2016; 65
T.P. Lillicrap, J.J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, D. Wierstra, Continuous Control with Deep Reinforcement Learning, (2015)
Li, Liao, Carin (b27) 2009; 10
Zhang, Yang, Chen, Li (b41) 2018; 3
Williams (b15) 1992; 8
Zhang, Lin, Yang, Chen, Khan, Li (b37) 2018
Finn, Levine, Abbeel (b35) 2016
Mnih, Badia, Mirza, Graves, Harley, Lillicrap, Silver, Kavukcuoglu (b23) 2016
Gondchawar, Kawitkar (b1) 2016; 5
Sutton, Mcallester, Singh, Mansour (b14) 2000
Li, Monroe, Ritter, Galley, Gao, Jurafsky (b39) 2016
Huang (10.1016/j.future.2019.04.041_b5) 2017
Littman (10.1016/j.future.2019.04.041_b12) 2015; 521
Li (10.1016/j.future.2019.04.041_b39) 2016
Gubbi (10.1016/j.future.2019.04.041_b4) 2013; 29
Zhang (10.1016/j.future.2019.04.041_b37) 2018
Zhang (10.1016/j.future.2019.04.041_b7) 2019
Baxter (10.1016/j.future.2019.04.041_b16) 2001; 15
Sutton (10.1016/j.future.2019.04.041_b14) 2000
10.1016/j.future.2019.04.041_b31
10.1016/j.future.2019.04.041_b30
Li (10.1016/j.future.2019.04.041_b27) 2009; 10
Liu (10.1016/j.future.2019.04.041_b6) 2019; 97
Mnih (10.1016/j.future.2019.04.041_b10) 2015; 518
Zhang (10.1016/j.future.2019.04.041_b8) 2016; 65
Silver (10.1016/j.future.2019.04.041_b17) 2014
Zhang (10.1016/j.future.2019.04.041_b42) 2018; 14
Satija (10.1016/j.future.2019.04.041_b40) 2016
Zhang (10.1016/j.future.2019.04.041_b3) 2018
Zhang (10.1016/j.future.2019.04.041_b11) 2019
Sukhbaatar (10.1016/j.future.2019.04.041_b32) 2015
Levine (10.1016/j.future.2019.04.041_b18) 2016; 17
Zhang (10.1016/j.future.2019.04.041_b13) 2019; 129
Zhang (10.1016/j.future.2019.04.041_b9) 2016; 12
Das (10.1016/j.future.2019.04.041_b38) 2017
Heess (10.1016/j.future.2019.04.041_b21) 2015
Mnih (10.1016/j.future.2019.04.041_b23) 2016
Williams (10.1016/j.future.2019.04.041_b15) 1992; 8
10.1016/j.future.2019.04.041_b26
Gondchawar (10.1016/j.future.2019.04.041_b1) 2016; 5
10.1016/j.future.2019.04.041_b20
Oh (10.1016/j.future.2019.04.041_b33) 2016
10.1016/j.future.2019.04.041_b22
Calandriello (10.1016/j.future.2019.04.041_b24) 2014
Rao (10.1016/j.future.2019.04.041_b36) 2017
Roopaei (10.1016/j.future.2019.04.041_b2) 2017; 4
Zhang (10.1016/j.future.2019.04.041_b34) 2016
Bhatnagar (10.1016/j.future.2019.04.041_b19) 2008
10.1016/j.future.2019.04.041_b28
Finn (10.1016/j.future.2019.04.041_b35) 2016
Zhang (10.1016/j.future.2019.04.041_b41) 2018; 3
Parisotto (10.1016/j.future.2019.04.041_b25) 2016
10.1016/j.future.2019.04.041_b29
References_xml – volume: 129
  start-page: 1
  year: 2019
  end-page: 8
  ident: b13
  article-title: Smart Chinese medicine for hypertension treatment with a deep learning model
  publication-title: J. Netw. Comput. Appl.
– start-page: 2790
  year: 2016
  end-page: 2799
  ident: b33
  article-title: Control of memory, active perception, and action in minecraft
  publication-title: Proceedings of International Conference on Machine Learning
– year: 2018
  ident: b37
  article-title: A double deep Q-learning model for energy-efficient edge scheduling
  publication-title: IEEE Trans. Serv. Comput.
– start-page: 110
  year: 2016
  end-page: 119
  ident: b40
  article-title: Simultaneous machine translation using deep reinforcement learning
  publication-title: Proceedings of the Workshops of International Conference on Machine Learning
– start-page: 819
  year: 2014
  end-page: 827
  ident: b24
  article-title: Sparse multi-task reinforcement learning
  publication-title: Proceedings of Advances in Neural Information Processing Systems
– volume: 14
  start-page: 3170
  year: 2018
  end-page: 3178
  ident: b42
  article-title: An efficient deep learning model to predict cloud workload for industry informatics
  publication-title: IEEE Trans. Ind. Inf.
– start-page: 2970
  year: 2017
  end-page: 2979
  ident: b38
  article-title: Learning cooperative visual dialog agents with deep reinforcement learning
  publication-title: Proceedings of IEEE International Conference on Computer Vision
– volume: 518
  start-page: 529
  year: 2015
  end-page: 533
  ident: b10
  article-title: Human-level control through deep reinforcement learning
  publication-title: Nature
– start-page: 1928
  year: 2016
  end-page: 1937
  ident: b23
  article-title: Asynchronous methods for deep reinforcement learning
  publication-title: Proceedings of International Conference on Machine Learning
– year: 2019
  ident: b11
  article-title: Incremental deep computation model for wireless big data feature learning
  publication-title: IEEE Trans. Big Data
– year: 2017
  ident: b5
  article-title: Double-Q learning-based DVFS for multi-core real-time systems
  publication-title: Proceedings of IEEE International Conference on Green Computing and Communications
– reference: A. Graves, G. Wayne, I. Danihelka, Neural Turing Machines, (2014),
– reference: A.A. Rusu, N.C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, R. Hadsell, Progressive Neural Networks, (2016),
– volume: 3
  start-page: 11
  year: 2018
  ident: b41
  article-title: Dependable deep computation model for feature learning on big data in cyber-physical systems
  publication-title: ACM Trans. Cyber-Phys. Syst.
– volume: 4
  start-page: 10
  year: 2017
  end-page: 15
  ident: b2
  article-title: Cloud of things in smart agriculture: Intelligent irrigation monitoring by thermal imaging
  publication-title: IEEE Cloud Comput.
– start-page: 1192
  year: 2016
  end-page: 1202
  ident: b39
  article-title: Deep reinforcement learning for dialogue generation
  publication-title: Proceedings of the Conference on Empirical Methods in Natural Language Processing
– volume: 5
  start-page: 838
  year: 2016
  end-page: 842
  ident: b1
  article-title: IoT based smart agriculture
  publication-title: Int. J. Adv. Res. Comput. Commun. Eng.
– volume: 65
  start-page: 1351
  year: 2016
  end-page: 1362
  ident: b8
  article-title: Privacy preserving deep computation model on cloud for big data feature learning
  publication-title: IEEE Trans. Comput.
– volume: 8
  start-page: 229
  year: 1992
  end-page: 256
  ident: b15
  article-title: Simple statistical gradient-following algorithms for connectionist reinforcement learning
  publication-title: Mach. Learn.
– start-page: 1057
  year: 2000
  end-page: 1063
  ident: b14
  article-title: Policy gradient methods for reinforcement learning with function approximation
  publication-title: Proceedings of Advances on Neural Information Processing Systems
– start-page: 105
  year: 2008
  end-page: 112
  ident: b19
  article-title: Incremental natural actor-critic algorithms
  publication-title: Proceedings of Advances in Neural Information Processing Systems
– start-page: 2944
  year: 2015
  end-page: 2952
  ident: b21
  article-title: Learning continuous control policies by stochastic value gradients
  publication-title: Proceedings of Advances in Neural Information Processing Systems
– volume: 17
  start-page: 1
  year: 2016
  end-page: 40
  ident: b18
  article-title: End-to-end training of deep visuomotor policies
  publication-title: J. Mach. Learn. Res.
– reference: A.A. Rusu, S.G. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, R. Hadsell, Policy Distillation, (2015),
– start-page: 49
  year: 2016
  end-page: 58
  ident: b35
  article-title: Guided cost learning: Deep inverse optimal control via policy optimization
  publication-title: Proceedings of International Conference on Machine Learning
– reference: T.P. Lillicrap, J.J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, D. Wierstra, Continuous Control with Deep Reinforcement Learning, (2015),
– start-page: 3951
  year: 2017
  end-page: 3960
  ident: b36
  article-title: Attention-aware deep reinforcement learning for video face recognition
  publication-title: Proceedings of IEEE International Conference on Computer Vision
– volume: 10
  start-page: 1131
  year: 2009
  end-page: 1186
  ident: b27
  article-title: Multi-task reinforcement learning in partially observable stochastic environment
  publication-title: J. Mach. Learn. Res.
– reference: A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, R. Vicente, Multiagent Cooperation and Competition with Deep Reinforcement Learning, (2015),
– year: 2019
  ident: b7
  article-title: Deep learning models for diagnosing spleen and stomach diseases in smart Chinese medicine with cloud computing
  publication-title: Concurr. Comput.: Pract. Exper.
– volume: 15
  start-page: 319
  year: 2001
  end-page: 350
  ident: b16
  article-title: Infinite-horizon policy-gradient estimation
  publication-title: J. Artificial Intelligence Res.
– year: 2018
  ident: b3
  article-title: An adaptive dropout deep computation model for industrial IoT big data learning with crowdsourcing to cloud computing
  publication-title: IEEE Trans. Ind. Inf.
– reference: D. Balduzzi, M. Ghifary, Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies, (2015),
– reference: J.N. Foerster, Y.M. Assael, N. de Freitas, S. Whiteson, Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks, (2016),
– start-page: 2440
  year: 2015
  end-page: 2448
  ident: b32
  article-title: End-to-end memory networks
  publication-title: Proceedings of Advances in Neural Information Processing Systems
– volume: 29
  start-page: 1645
  year: 2013
  end-page: 1660
  ident: b4
  article-title: Internet of things (IoT): A vision, architectural elements, and future directions
  publication-title: Future Gener. Comput. Syst.
– volume: 521
  start-page: 445
  year: 2015
  end-page: 451
  ident: b12
  article-title: Reinforcement learning improves behaviour from evaluative feedback
  publication-title: Nature
– start-page: 156
  year: 2016
  end-page: 171
  ident: b25
  article-title: Actor-mimic deep multitask and transfer reinforcement learning
  publication-title: Proceedings of International Conference on Learning Representations
– volume: 97
  start-page: 1
  year: 2019
  end-page: 9
  ident: b6
  article-title: Deep reinforcement learning with its application for lung cancer detection in medical internet of things
  publication-title: Future Gener. Comput. Syst.
– volume: 12
  start-page: 66
  year: 2016
  ident: b9
  article-title: PPHOCFS: Privacy preserving high-order CFS algorithm on the cloud for clustering multimedia data
  publication-title: ACM Trans. Multimedia Comput. Commun. Appl.
– start-page: 520
  year: 2016
  end-page: 527
  ident: b34
  article-title: Learning deep neural network policies with continuous memory states
  publication-title: Proceedings of IEEE International Conference on Robotics and Automation
– start-page: 387
  year: 2014
  end-page: 395
  ident: b17
  article-title: Deterministic policy gradient algorithms
  publication-title: Proceedings of International Conference on Machine Learning
– ident: 10.1016/j.future.2019.04.041_b28
– ident: 10.1016/j.future.2019.04.041_b30
– volume: 14
  start-page: 3170
  issue: 7
  year: 2018
  ident: 10.1016/j.future.2019.04.041_b42
  article-title: An efficient deep learning model to predict cloud workload for industry informatics
  publication-title: IEEE Trans. Ind. Inf.
  doi: 10.1109/TII.2018.2808910
– start-page: 819
  year: 2014
  ident: 10.1016/j.future.2019.04.041_b24
  article-title: Sparse multi-task reinforcement learning
– ident: 10.1016/j.future.2019.04.041_b26
– volume: 17
  start-page: 1
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b18
  article-title: End-to-end training of deep visuomotor policies
  publication-title: J. Mach. Learn. Res.
– volume: 29
  start-page: 1645
  issue: 7
  year: 2013
  ident: 10.1016/j.future.2019.04.041_b4
  article-title: Internet of things (IoT): A vision, architectural elements, and future directions
  publication-title: Future Gener. Comput. Syst.
  doi: 10.1016/j.future.2013.01.010
– volume: 15
  start-page: 319
  year: 2001
  ident: 10.1016/j.future.2019.04.041_b16
  article-title: Infinite-horizon policy-gradient estimation
  publication-title: J. Artificial Intelligence Res.
  doi: 10.1613/jair.806
– ident: 10.1016/j.future.2019.04.041_b22
– start-page: 3951
  year: 2017
  ident: 10.1016/j.future.2019.04.041_b36
  article-title: Attention-aware deep reinforcement learning for video face recognition
– year: 2019
  ident: 10.1016/j.future.2019.04.041_b11
  article-title: Incremental deep computation model for wireless big data feature learning
  publication-title: IEEE Trans. Big Data
  doi: 10.1109/TBDATA.2019.2903092
– volume: 129
  start-page: 1
  year: 2019
  ident: 10.1016/j.future.2019.04.041_b13
  article-title: Smart Chinese medicine for hypertension treatment with a deep learning model
  publication-title: J. Netw. Comput. Appl.
  doi: 10.1016/j.jnca.2018.12.012
– volume: 97
  start-page: 1
  year: 2019
  ident: 10.1016/j.future.2019.04.041_b6
  article-title: Deep reinforcement learning with its application for lung cancer detection in medical internet of things
  publication-title: Future Gener. Comput. Syst.
  doi: 10.1016/j.future.2019.02.068
– ident: 10.1016/j.future.2019.04.041_b20
– start-page: 2790
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b33
  article-title: Control of memory, active perception, and action in minecraft
– year: 2018
  ident: 10.1016/j.future.2019.04.041_b37
  article-title: A double deep Q-learning model for energy-efficient edge scheduling
  publication-title: IEEE Trans. Serv. Comput.
– volume: 3
  start-page: 11
  issue: 1
  year: 2018
  ident: 10.1016/j.future.2019.04.041_b41
  article-title: Dependable deep computation model for feature learning on big data in cyber-physical systems
  publication-title: ACM Trans. Cyber-Phys. Syst.
  doi: 10.1145/3110218
– volume: 518
  start-page: 529
  issue: 7540
  year: 2015
  ident: 10.1016/j.future.2019.04.041_b10
  article-title: Human-level control through deep reinforcement learning
  publication-title: Nature
  doi: 10.1038/nature14236
– start-page: 1057
  year: 2000
  ident: 10.1016/j.future.2019.04.041_b14
  article-title: Policy gradient methods for reinforcement learning with function approximation
– volume: 4
  start-page: 10
  issue: 1
  year: 2017
  ident: 10.1016/j.future.2019.04.041_b2
  article-title: Cloud of things in smart agriculture: Intelligent irrigation monitoring by thermal imaging
  publication-title: IEEE Cloud Comput.
  doi: 10.1109/MCC.2017.5
– year: 2018
  ident: 10.1016/j.future.2019.04.041_b3
  article-title: An adaptive dropout deep computation model for industrial IoT big data learning with crowdsourcing to cloud computing
  publication-title: IEEE Trans. Ind. Inf.
– volume: 10
  start-page: 1131
  year: 2009
  ident: 10.1016/j.future.2019.04.041_b27
  article-title: Multi-task reinforcement learning in partially observable stochastic environment
  publication-title: J. Mach. Learn. Res.
– start-page: 387
  year: 2014
  ident: 10.1016/j.future.2019.04.041_b17
  article-title: Deterministic policy gradient algorithms
– volume: 521
  start-page: 445
  issue: 7553
  year: 2015
  ident: 10.1016/j.future.2019.04.041_b12
  article-title: Reinforcement learning improves behaviour from evaluative feedback
  publication-title: Nature
  doi: 10.1038/nature14540
– volume: 12
  start-page: 66
  issue: 4s
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b9
  article-title: PPHOCFS: Privacy preserving high-order CFS algorithm on the cloud for clustering multimedia data
  publication-title: ACM Trans. Multimedia Comput. Commun. Appl.
  doi: 10.1145/2886779
– start-page: 2440
  year: 2015
  ident: 10.1016/j.future.2019.04.041_b32
  article-title: End-to-end memory networks
– start-page: 110
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b40
  article-title: Simultaneous machine translation using deep reinforcement learning
– volume: 5
  start-page: 838
  issue: 6
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b1
  article-title: IoT based smart agriculture
  publication-title: Int. J. Adv. Res. Comput. Commun. Eng.
– ident: 10.1016/j.future.2019.04.041_b31
– ident: 10.1016/j.future.2019.04.041_b29
– start-page: 1192
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b39
  article-title: Deep reinforcement learning for dialogue generation
– start-page: 49
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b35
  article-title: Guided cost learning: Deep inverse optimal control via policy optimization
– year: 2017
  ident: 10.1016/j.future.2019.04.041_b5
  article-title: Double-Q learning-based DVFS for multi-core real-time systems
– year: 2019
  ident: 10.1016/j.future.2019.04.041_b7
  article-title: Deep learning models for diagnosing spleen and stomach diseases in smart Chinese medicine with cloud computing
  publication-title: Concurr. Comput.: Pract. Exper.
– volume: 8
  start-page: 229
  year: 1992
  ident: 10.1016/j.future.2019.04.041_b15
  article-title: Simple statistical gradient-following algorithms for connectionist reinforcement learning
  publication-title: Mach. Learn.
  doi: 10.1007/BF00992696
– start-page: 105
  year: 2008
  ident: 10.1016/j.future.2019.04.041_b19
  article-title: Incremental natural actor-critic algorithms
– start-page: 1928
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b23
  article-title: Asynchronous methods for deep reinforcement learning
– start-page: 2970
  year: 2017
  ident: 10.1016/j.future.2019.04.041_b38
  article-title: Learning cooperative visual dialog agents with deep reinforcement learning
– start-page: 2944
  year: 2015
  ident: 10.1016/j.future.2019.04.041_b21
  article-title: Learning continuous control policies by stochastic value gradients
– start-page: 156
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b25
  article-title: Actor-mimic deep multitask and transfer reinforcement learning
– start-page: 520
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b34
  article-title: Learning deep neural network policies with continuous memory states
– volume: 65
  start-page: 1351
  issue: 5
  year: 2016
  ident: 10.1016/j.future.2019.04.041_b8
  article-title: Privacy preserving deep computation model on cloud for big data feature learning
  publication-title: IEEE Trans. Comput.
  doi: 10.1109/TC.2015.2470255
SubjectTerms Cloud computing
Deep reinforcement learning
Edge computing
Smart agriculture IoT
Title A smart agriculture IoT system based on deep reinforcement learning
URI https://dx.doi.org/10.1016/j.future.2019.04.041