A Deep Reinforcement Learning Based Offloading Game in Edge Computing

Bibliographic Details
Published in IEEE Transactions on Computers, Vol. 69, no. 6, pp. 883-893
Main Authors Zhan, Yufeng, Guo, Song, Li, Peng, Zhang, Jiang
Format Journal Article
Language English
Published New York: IEEE, 01.06.2020
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
ISSN 0018-9340
EISSN 1557-9956
DOI 10.1109/TC.2020.2969148

Abstract Edge computing is a new paradigm that provides strong computing capability at the edge of pervasive radio access networks, close to users. A critical research challenge in edge computing is to design an efficient offloading strategy that decides which tasks should be offloaded to edge servers with limited resources. Although many research efforts attempt to address this challenge, they require centralized control, which is impractical because users are rational individuals seeking to maximize their own benefits. In this article, we design a decentralized algorithm for computation offloading, so that users can independently make their offloading decisions. Game theory is applied in the algorithm design. Unlike existing work, we address the challenge that users may refuse to expose information about their network bandwidth and preferences; our solution must therefore make offloading decisions without such knowledge. We formulate the problem as a partially observable Markov decision process (POMDP) and solve it with a policy gradient deep reinforcement learning (DRL) based approach. Extensive simulation results show that our proposal significantly outperforms existing solutions.
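To make the solution approach concrete, below is a minimal sketch (Python with NumPy) of a REINFORCE-style policy-gradient agent that chooses between local execution and offloading from a partial observation. The cost model, observation features, and parameter values are illustrative assumptions for this sketch only, not the paper's formulation or simulation setup.

# Minimal sketch (not the authors' implementation): a REINFORCE-style policy
# gradient agent choosing between local execution (0) and offloading (1) from a
# partial observation of its own state only. The latency model, feature names,
# and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class PolicyGradientAgent:
    """Linear softmax policy over {local, offload}, trained with REINFORCE."""
    def __init__(self, obs_dim, n_actions=2, lr=0.05):
        self.W = np.zeros((n_actions, obs_dim))
        self.lr = lr

    def act(self, obs):
        probs = softmax(self.W @ obs)
        action = rng.choice(len(probs), p=probs)
        return action, probs

    def update(self, trajectory, gamma=0.99):
        # trajectory: list of (obs, action, probs, reward) tuples
        G = 0.0
        for obs, a, probs, r in reversed(trajectory):
            G = r + gamma * G
            grad_logp = -np.outer(probs, obs)   # d log pi(a|obs) / dW, softmax part
            grad_logp[a] += obs                 # indicator term for the taken action
            self.W += self.lr * G * grad_logp   # gradient ascent on the return

def step(action, task_size, bandwidth, cpu_local, cpu_edge, congestion):
    """Illustrative cost model: latency of local vs. offloaded execution."""
    if action == 0:                             # execute locally
        latency = task_size / cpu_local
    else:                                       # offload: transmit, then share edge CPU
        latency = task_size / bandwidth + task_size / (cpu_edge / (1 + congestion))
    return -latency                             # reward = negative latency

agent = PolicyGradientAgent(obs_dim=3)
for episode in range(500):
    trajectory = []
    for _ in range(10):                         # 10 tasks per episode
        task_size = rng.uniform(1.0, 5.0)
        bandwidth = rng.uniform(0.5, 2.0)       # private to this user
        congestion = rng.integers(0, 4)         # hidden load from other users (POMDP)
        obs = np.array([task_size, bandwidth, 1.0])   # congestion is NOT observed
        action, probs = agent.act(obs)
        reward = step(action, task_size, bandwidth,
                      cpu_local=1.0, cpu_edge=4.0, congestion=congestion)
        trajectory.append((obs, action, probs, reward))
    agent.update(trajectory)

In the paper's multi-user game setting, each user would run such a policy independently; here a single agent facing a hidden congestion variable stands in for the unobserved decisions of other users.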
Author Zhan, Yufeng
Zhang, Jiang
Li, Peng
Guo, Song
Author_xml – sequence: 1
  givenname: Yufeng
  surname: Zhan
  fullname: Zhan, Yufeng
  email: zhanyf1989@gmail.com
  organization: Department of Computing, The Hong Kong Polytechnic University, Hong Kong
– sequence: 2
  givenname: Song
  orcidid: 0000-0001-9831-2202
  surname: Guo
  fullname: Guo, Song
  email: song.guo@polyu.edu.cn
  organization: Department of Computing, The Hong Kong Polytechnic University, Hong Kong
– sequence: 3
  givenname: Peng
  orcidid: 0000-0003-4981-0496
  surname: Li
  fullname: Li, Peng
  email: pengli@u-aizu.ac.jp
  organization: School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, Japan
– sequence: 4
  givenname: Jiang
  surname: Zhang
  fullname: Zhang, Jiang
  email: bitzj2015@outlook.com
  organization: School of Automation, Beijing Institute of Technology, Beijing, China
CODEN ITCOB4
CitedBy_id crossref_primary_10_3390_electronics13122387
crossref_primary_10_1109_JIOT_2023_3247013
crossref_primary_10_1016_j_suscom_2025_101099
crossref_primary_10_1109_TKDE_2021_3130265
crossref_primary_10_1155_2022_7844719
crossref_primary_10_1109_TNSM_2023_3267809
crossref_primary_10_2139_ssrn_4156476
crossref_primary_10_1109_JIOT_2022_3157677
crossref_primary_10_1016_j_asoc_2021_108361
crossref_primary_10_1109_JIOT_2021_3078620
crossref_primary_10_1109_TMC_2021_3053136
crossref_primary_10_1145_3639824
crossref_primary_10_1002_spe_3301
crossref_primary_10_1109_TWC_2023_3306880
crossref_primary_10_1007_s10586_022_03768_z
crossref_primary_10_1109_TSC_2024_3478826
crossref_primary_10_1109_TNSE_2021_3115054
crossref_primary_10_1109_JSYST_2022_3190926
crossref_primary_10_1109_COMST_2024_3353265
crossref_primary_10_1109_TMC_2024_3506221
crossref_primary_10_1049_tje2_12207
crossref_primary_10_1109_MNET_2024_3352031
crossref_primary_10_1145_3715695
crossref_primary_10_1109_TC_2021_3074806
crossref_primary_10_1109_JIOT_2024_3391296
crossref_primary_10_3390_fi15120391
crossref_primary_10_1016_j_phycom_2024_102460
crossref_primary_10_1109_TMC_2021_3096846
crossref_primary_10_1109_ACCESS_2023_3322650
crossref_primary_10_1109_JIOT_2022_3146239
crossref_primary_10_1109_LCOMM_2021_3075690
crossref_primary_10_1109_TC_2020_2993561
crossref_primary_10_1016_j_future_2022_11_025
crossref_primary_10_1109_TC_2023_3238138
crossref_primary_10_1109_JIOT_2022_3176289
crossref_primary_10_3390_electronics11091357
crossref_primary_10_1007_s40747_023_01322_x
crossref_primary_10_1109_JIOT_2023_3265344
crossref_primary_10_1109_ACCESS_2021_3109132
crossref_primary_10_1109_JIOT_2022_3189445
crossref_primary_10_1016_j_comnet_2021_108523
crossref_primary_10_1109_TWC_2022_3230407
crossref_primary_10_1109_JIOT_2020_3042433
crossref_primary_10_1109_TCCN_2021_3103511
crossref_primary_10_1016_j_future_2022_11_017
crossref_primary_10_1109_JIOT_2023_3315770
crossref_primary_10_1109_TNSE_2022_3188921
crossref_primary_10_3390_electronics13142747
crossref_primary_10_1016_j_compeleceng_2022_108552
crossref_primary_10_1016_j_compeleceng_2022_108278
crossref_primary_10_1109_JIOT_2022_3209987
crossref_primary_10_1109_COMST_2022_3199544
crossref_primary_10_1109_JIOT_2023_3332401
crossref_primary_10_1109_COMST_2021_3106401
crossref_primary_10_1109_TC_2022_3176803
crossref_primary_10_1109_TNSM_2024_3447753
crossref_primary_10_1016_j_adhoc_2023_103178
crossref_primary_10_1007_s11390_023_2839_0
crossref_primary_10_1109_JIOT_2024_3360183
crossref_primary_10_1109_TCC_2024_3381646
crossref_primary_10_1016_j_phycom_2022_101867
crossref_primary_10_1109_TC_2021_3131040
crossref_primary_10_1109_TWC_2023_3325654
crossref_primary_10_1186_s13677_022_00340_3
crossref_primary_10_1109_TII_2022_3227652
crossref_primary_10_1016_j_comnet_2024_110564
crossref_primary_10_1109_TNSE_2023_3283410
crossref_primary_10_1109_JIOT_2021_3091508
crossref_primary_10_1109_OJCOMS_2024_3426278
crossref_primary_10_1109_TVT_2024_3427814
crossref_primary_10_1109_TSC_2024_3495503
crossref_primary_10_1007_s00607_025_01443_w
crossref_primary_10_1109_TC_2021_3099723
crossref_primary_10_1049_tje2_12250
crossref_primary_10_1145_3464419
crossref_primary_10_1109_TWC_2022_3152573
crossref_primary_10_1109_TNSM_2023_3250395
crossref_primary_10_1109_TC_2024_3355767
crossref_primary_10_1007_s00521_023_08905_2
crossref_primary_10_1145_3603703
crossref_primary_10_1145_3491217
crossref_primary_10_3390_app12126154
crossref_primary_10_1145_3555802
crossref_primary_10_1007_s10586_024_04893_7
crossref_primary_10_1109_TVT_2024_3367657
crossref_primary_10_1109_COMST_2023_3338153
crossref_primary_10_1109_TNSM_2023_3271769
crossref_primary_10_1109_TITS_2021_3114295
crossref_primary_10_1049_cmu2_12334
crossref_primary_10_1109_TNSE_2022_3141728
crossref_primary_10_1109_ACCESS_2021_3082259
crossref_primary_10_1109_JIOT_2024_3374969
crossref_primary_10_1016_j_jnca_2023_103669
crossref_primary_10_1016_j_sysarc_2024_103139
crossref_primary_10_1109_TC_2021_3072072
crossref_primary_10_1109_JIOT_2020_3025365
crossref_primary_10_4018_IJDCF_332066
crossref_primary_10_1109_TNSM_2022_3210827
crossref_primary_10_1109_TC_2024_3416734
crossref_primary_10_1109_JIOT_2020_3016644
crossref_primary_10_1109_TR_2024_3399389
crossref_primary_10_1007_s10115_022_01746_w
crossref_primary_10_1109_TNSE_2021_3136942
crossref_primary_10_1109_OJCOMS_2023_3265425
crossref_primary_10_1109_TMC_2024_3376377
crossref_primary_10_1109_JIOT_2021_3091142
crossref_primary_10_1109_TNET_2021_3106937
crossref_primary_10_1007_s11276_021_02750_8
crossref_primary_10_1109_TC_2023_3343102
crossref_primary_10_1016_j_cosrev_2024_100656
crossref_primary_10_1109_ACCESS_2020_3025047
crossref_primary_10_1109_TMC_2024_3399766
crossref_primary_10_1109_JSAC_2023_3345433
crossref_primary_10_1016_j_compchemeng_2024_108601
crossref_primary_10_1109_TMC_2024_3437745
crossref_primary_10_1109_TPDS_2021_3119948
crossref_primary_10_3390_app12136566
crossref_primary_10_1109_LWC_2020_2989147
crossref_primary_10_1142_S0218126624502839
crossref_primary_10_1016_j_comnet_2022_109430
crossref_primary_10_1109_TC_2022_3169436
crossref_primary_10_1109_TMC_2024_3407958
crossref_primary_10_1109_TSUSC_2023_3240457
crossref_primary_10_1109_TMC_2023_3262233
crossref_primary_10_1109_JIOT_2023_3241222
Cites_doi 10.1109/TC.2016.2620469
10.1109/TMC.2017.2687918
10.1109/TPDS.2014.2316834
10.1109/GLOCOM.2017.8254503
10.1145/584007.584008
10.1109/TC.2018.2818144
10.1109/INFCOM.2012.6195685
10.1109/MNET.2019.1800544
10.1109/TWC.2016.2633522
10.1038/nature20101
10.1109/JPROC.2019.2922285
10.1109/TSIPN.2015.2448520
10.1145/2046614.2046619
10.1109/TETC.2017.2693286
10.1109/TMC.2018.2829874
10.1145/3072959.3073624
10.1109/JIOT.2020.2967772
10.1109/TMC.2018.2847337
10.1109/TVT.2018.2881191
10.1109/TMC.2018.2877623
10.1016/j.comnet.2017.03.015
10.1109/TVT.2017.2740724
10.1109/ICCChina.2015.7448613
10.1109/ISCC.2012.6249269
10.1145/2479942.2479946
10.1109/TNET.2015.2487344
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
DBID 97E
RIA
RIE
AAYXX
CITATION
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
DOI 10.1109/TC.2020.2969148
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
DatabaseTitleList Technology Research Database
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
Computer Science
EISSN 1557-9956
EndPage 893
ExternalDocumentID 10_1109_TC_2020_2969148
8967118
Genre orig-research
GrantInformation_xml – fundername: National Natural Science Foundation of China
  grantid: 61872310
  funderid: 10.13039/501100001809
– fundername: JSPS Grants-in-Aid for Scientific Research
  grantid: JP19K20258
– fundername: General Research Fund of the Research Grants Council of Hong Kong
  grantid: PolyU 152221/19E
ISSN 0018-9340
IsPeerReviewed true
IsScholarly true
Issue 6
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
ORCID 0000-0003-4981-0496
0000-0001-9831-2202
PQID 2401132752
PQPubID 85452
PageCount 11
ParticipantIDs proquest_journals_2401132752
crossref_primary_10_1109_TC_2020_2969148
crossref_citationtrail_10_1109_TC_2020_2969148
ieee_primary_8967118
PublicationCentury 2000
PublicationDate 2020-06-01
PublicationDateYYYYMMDD 2020-06-01
PublicationDate_xml – month: 06
  year: 2020
  text: 2020-06-01
  day: 01
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on computers
PublicationTitleAbbrev TC
PublicationYear 2020
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
ref34
ref15
heess (ref36) 2015
ref14
ref31
ref30
ref10
goodfellow (ref42) 2016
ref2
schulman (ref39) 2017
ref16
ref19
ref18
(ref11) 2013
schulman (ref38) 2015
lowe (ref33) 2017
li (ref40) 2018
ref24
ref23
ref26
osborne (ref32) 1994
ref25
ref20
bansal (ref41) 2017
hu (ref27) 2003; 4
ref22
ref21
ref29
hausknecht (ref35) 2015
ref8
sutton (ref37) 2000
ref7
sun (ref1) 2015
ref9
ref4
xian (ref17) 2007; 2
ref3
lillicrap (ref28) 2015
ref6
ref5
(ref12) 2016
References_xml – ident: ref5
  doi: 10.1109/TC.2016.2620469
– year: 2017
  ident: ref41
  article-title: Emergent complexity via multi-agent competition
– year: 1994
  ident: ref32
  publication-title: A Course in Game Theory
– year: 2017
  ident: ref39
  article-title: Proximal policy optimization algorithms
– ident: ref26
  doi: 10.1109/TMC.2017.2687918
– ident: ref7
  doi: 10.1109/TPDS.2014.2316834
– ident: ref31
  doi: 10.1109/GLOCOM.2017.8254503
– ident: ref15
  doi: 10.1145/584007.584008
– start-page: 1889
  year: 2015
  ident: ref38
  article-title: Trust region policy optimization
  publication-title: Proc 31st Int Conf Mach Learn
– year: 2013
  ident: ref11
  article-title: Increasing mobile operators value proposition with edge computing
  publication-title: Technical Brief
– ident: ref6
  doi: 10.1109/TC.2018.2818144
– year: 2018
  ident: ref40
  article-title: Distributional advantage actor-critic
  publication-title: CoRR
– start-page: 6379
  year: 2017
  ident: ref33
  article-title: Multi-agent actor-critic for mixed cooperative-competitive environments
  publication-title: Proc 31st Int Conf Neural Inf Process Syst
– ident: ref16
  doi: 10.1109/INFCOM.2012.6195685
– year: 2016
  ident: ref12
  article-title: Using mobile edge computing to improve mobile network performance and profitability
– ident: ref13
  doi: 10.1109/MNET.2019.1800544
– ident: ref21
  doi: 10.1109/TWC.2016.2633522
– start-page: 1057
  year: 2000
  ident: ref37
  article-title: Policy gradient methods for reinforcement learning with function approximation
  publication-title: Proc 12th Int Conf Neural Inf Process Syst
– ident: ref34
  doi: 10.1038/nature20101
– ident: ref4
  doi: 10.1109/JPROC.2019.2922285
– ident: ref20
  doi: 10.1109/TSIPN.2015.2448520
– volume: 2
  start-page: 1
  year: 2007
  ident: ref17
  article-title: Adaptive computation offloading for energy conservation on battery-powered systems
  publication-title: Proc Int Conf Parallel Distrib Syst
– ident: ref30
  doi: 10.1145/2046614.2046619
– year: 2015
  ident: ref36
  article-title: Memory-based control with recurrent neural networks
– ident: ref14
  doi: 10.1109/TETC.2017.2693286
– year: 2015
  ident: ref1
  article-title: DeepID3: Face recognition with very deep neural networks
– ident: ref9
  doi: 10.1109/TMC.2018.2829874
– ident: ref3
  doi: 10.1145/3072959.3073624
– ident: ref29
  doi: 10.1109/JIOT.2020.2967772
– ident: ref10
  doi: 10.1109/TMC.2018.2847337
– ident: ref24
  doi: 10.1109/TVT.2018.2881191
– ident: ref22
  doi: 10.1109/TMC.2018.2877623
– ident: ref25
  doi: 10.1016/j.comnet.2017.03.015
– year: 2015
  ident: ref28
  article-title: Continuous control with deep reinforcement learning
– ident: ref23
  doi: 10.1109/TVT.2017.2740724
– ident: ref19
  doi: 10.1109/ICCChina.2015.7448613
– start-page: 29
  year: 2015
  ident: ref35
  article-title: Deep recurrent Q-learning for partially observable MDPs
  publication-title: Proc AAAI Fall Symp
– ident: ref2
  doi: 10.1109/ISCC.2012.6249269
– ident: ref18
  doi: 10.1145/2479942.2479946
– volume: 4
  start-page: 1039
  year: 2003
  ident: ref27
  article-title: Nash Q-learning for general-sum stochastic games
  publication-title: J Mach Learn Res
– year: 2016
  ident: ref42
  publication-title: Deep Learning
– ident: ref8
  doi: 10.1109/TNET.2015.2487344
SSID ssj0006209
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 883
SubjectTerms Algorithms
Computation offloading
Computational modeling
Computer simulation
Decision theory
Deep learning
deep reinforcement learning (DRL)
Edge computing
Game theory
Games
Markov processes
Nash equilibrium
partially observable Markov decision process (POMDP)
Reinforcement learning
Servers
Task analysis
Title A Deep Reinforcement Learning Based Offloading Game in Edge Computing
URI https://ieeexplore.ieee.org/document/8967118
https://www.proquest.com/docview/2401132752
Volume 69