Energy-Efficient Mode Selection and Resource Allocation for D2D-Enabled Heterogeneous Networks: A Deep Reinforcement Learning Approach
Published in | IEEE Transactions on Wireless Communications, Vol. 20, No. 2, pp. 1175–1187 |
Main Authors | Zhang, Tao; Zhu, Kun; Wang, Junhua |
Format | Journal Article |
Language | English |
Published | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2021 |
Abstract | Improving energy efficiency has become increasingly important in the design of future cellular systems. In this work, we consider the issue of energy efficiency in D2D-enabled heterogeneous cellular networks. Specifically, communication mode selection and resource allocation are jointly considered with the aim of maximizing the long-term energy efficiency. A Markov decision process (MDP) problem is formulated, in which each user can switch dynamically between the traditional cellular mode and the D2D mode. We employ deep deterministic policy gradient (DDPG), a model-free deep reinforcement learning algorithm, to solve the MDP problem in continuous state and action spaces. The architecture of the proposed method consists of one actor network and one critic network. The actor network uses a deterministic policy gradient scheme to generate deterministic actions for the agent directly, and the critic network employs a value-function-based Q-network to evaluate the performance of the actor network. Simulation results show the convergence of the proposed algorithm and its effectiveness in improving the energy efficiency of a D2D-enabled heterogeneous network. |
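The actor-critic structure described in the abstract can be sketched in miniature. This is an illustrative sketch only, assuming linear function approximators and a toy one-dimensional action; the environment, reward, dimensions, and all names are hypothetical and are not the paper's network model or system parameters.

```python
import numpy as np

# Minimal DDPG-style actor-critic loop with linear approximators.
# The toy environment and "energy efficiency" reward below are illustrative only.
rng = np.random.default_rng(0)
state_dim, action_dim = 3, 1

w_actor = rng.normal(scale=0.1, size=(action_dim, state_dim))   # policy: mu(s) = W_a s
w_critic = rng.normal(scale=0.1, size=state_dim + action_dim)   # value: Q(s, a) = w_c . [s; a]

def actor(s):
    return w_actor @ s

def critic(s, a):
    return float(w_critic @ np.concatenate([s, a]))

def env_step(s, a):
    # Hypothetical reward: highest when the action tracks the first state component.
    reward = -float((a[0] - s[0]) ** 2)
    return reward, rng.normal(size=state_dim)

gamma, lr_actor, lr_critic = 0.95, 1e-3, 1e-2
s = rng.normal(size=state_dim)
for _ in range(500):
    a = actor(s) + rng.normal(scale=0.1, size=action_dim)       # exploration noise
    r, s_next = env_step(s, a)
    # Critic: semi-gradient TD(0) update toward r + gamma * Q(s', mu(s')).
    td_error = r + gamma * critic(s_next, actor(s_next)) - critic(s, a)
    w_critic += lr_critic * td_error * np.concatenate([s, a])
    # Actor: deterministic policy gradient ascent on Q(s, mu(s));
    # for a linear critic, grad_a Q is just the critic's action weights.
    grad_a_q = w_critic[state_dim:]
    w_actor += lr_actor * np.outer(grad_a_q, s)
    s = s_next
```

The full algorithm in the paper additionally uses deep networks, a replay buffer, and target networks for both actor and critic; the sketch keeps only the two coupled updates that define the actor-critic interaction.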
Author | Wang, Junhua; Zhu, Kun; Zhang, Tao |
Author details |
– Tao Zhang (ORCID 0000-0003-1830-788X), tao@nuaa.edu.cn, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
– Kun Zhu (ORCID 0000-0001-6784-5583), zhukun@nuaa.edu.cn, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
– Junhua Wang, jhua1207@nuaa.edu.cn, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China |
CODEN | ITWCAX |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
DOI | 10.1109/TWC.2020.3031436 |
Discipline | Engineering |
EISSN | 1558-2248 |
EndPage | 1187 |
ExternalDocumentID | 10_1109_TWC_2020_3031436 9237143 |
Genre | orig-research |
GrantInformation |
– National Natural Science Foundation of China, grants 61701230, 62071230, 62002166
– Natural Science Foundation of Jiangsu Province, grant BK20170805
– China Postdoctoral Science Foundation, grant 2020M671483 |
ISSN | 1536-1276 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 2 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
ORCID | 0000-0001-6784-5583 0000-0003-1830-788X |
PageCount | 13 |
PublicationDate | 2021-02-01 |
PublicationPlace | New York |
PublicationTitle | IEEE Transactions on Wireless Communications |
PublicationTitleAbbrev | TWC |
PublicationYear | 2021 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 1175 |
SubjectTerms | Algorithms; Base stations; Cellular communication; deep deterministic policy gradient; Deep learning; device-to-device (D2D) communication; Device-to-device communication; Energy conversion efficiency; Energy efficiency; Heterogeneous networks; Machine learning; Markov processes; Modal choice; mode selection; Networks; Optimization; Power control; Resource allocation; Resource management; Wireless communication |
Title | Energy-Efficient Mode Selection and Resource Allocation for D2D-Enabled Heterogeneous Networks: A Deep Reinforcement Learning Approach |
URI | https://ieeexplore.ieee.org/document/9237143 https://www.proquest.com/docview/2488744751 |
Volume | 20 |