Joint Graph Attention and Asymmetric Convolutional Neural Network for Deep Image Compression
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, No. 1, pp. 421–433
Main Authors | Tang, Zhisen; Wang, Hanli; Yi, Xiaokai; Zhang, Yun; Kwong, Sam; Kuo, C.-C. Jay
Format | Journal Article |
Language | English |
Published | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023
Abstract | Recent deep image compression methods have achieved prominent progress by exploiting the nonlinear modeling and powerful representation capabilities of neural networks. However, most existing learning-based image compression approaches employ customized convolutional neural networks (CNNs) that treat all pixels equally, neglecting the effect of local key features. Meanwhile, the convolutional filters in a CNN express only the local spatial relationships within the receptive field and seldom consider long-range dependencies between distant locations, so the long-range dependencies of latent representations are not fully compressed. To address these issues, an end-to-end image compression method is proposed that integrates graph attention and an asymmetric convolutional neural network (ACNN). Specifically, the ACNN is used to strengthen the effect of local key features and to reduce the cost of model training, while graph attention is introduced into image compression to address the bottleneck of CNNs in modeling long-range dependencies. Moreover, since existing attention mechanisms for image compression hardly share information, a self-attention approach is proposed that allows information flow to achieve reasonable bit allocation. This self-attention approach complies with the perceptual characteristics of the human visual system, as information can interact through the attention modules, and it takes channel-level relationships and positional information into account to improve the compression of rich-texture regions. Experimental results demonstrate that the proposed method, when optimized for MS-SSIM, achieves state-of-the-art rate-distortion performance compared with recent deep compression models on the benchmark Kodak and Tecnick datasets. The project page with the source code can be found at https://mic.tongji.edu.cn.
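The asymmetric-convolution idea named in the abstract has a property worth sketching: square, horizontal (1×k), and vertical (k×1) kernels trained as parallel branches can be fused into a single square kernel at inference, because convolution is linear in the kernel. The following NumPy sketch illustrates that fusion under stated assumptions; it is not the authors' implementation, and `conv2d` and `embed` are hypothetical helpers defined here for the demonstration:

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2-D cross-correlation (the 'convolution' of deep learning)."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def embed(k, shape=(3, 3)):
    """Zero-pad a 1xk or kx1 kernel into the center of a square kernel."""
    out = np.zeros(shape)
    r0 = (shape[0] - k.shape[0]) // 2
    c0 = (shape[1] - k.shape[1]) // 2
    out[r0:r0 + k.shape[0], c0:c0 + k.shape[1]] = k
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k_sq = rng.standard_normal((3, 3))  # square branch
k_h = rng.standard_normal((1, 3))   # horizontal branch
k_v = rng.standard_normal((3, 1))   # vertical branch

# Training time: three parallel branches, summed.
y_branches = conv2d(x, k_sq) + conv2d(x, embed(k_h)) + conv2d(x, embed(k_v))
# Inference time: one fused square kernel produces the same output.
y_fused = conv2d(x, k_sq + embed(k_h) + embed(k_v))
assert np.allclose(y_branches, y_fused)
```

Because the branches share one spatial support after zero-padding, the fusion is exact, which is why such blocks add no inference-time cost over a plain square convolution.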
Author | Tang, Zhisen; Wang, Hanli; Kuo, C.-C. Jay; Kwong, Sam; Yi, Xiaokai; Zhang, Yun
Author_xml |
– sequence: 1; givenname: Zhisen; surname: Tang; fullname: Tang, Zhisen; email: zhisentang@tongji.edu.cn; organization: Department of Computer Science and Technology, Tongji University, Shanghai, China
– sequence: 2; givenname: Hanli; surname: Wang; fullname: Wang, Hanli; orcidid: 0000-0002-9999-4871; email: hanliwang@tongji.edu.cn; organization: Department of Computer Science and Technology, Tongji University, Shanghai, China
– sequence: 3; givenname: Xiaokai; surname: Yi; fullname: Yi, Xiaokai; email: xkyi@tongji.edu.cn; organization: Department of Computer Science and Technology, Tongji University, Shanghai, China
– sequence: 4; givenname: Yun; surname: Zhang; fullname: Zhang, Yun; orcidid: 0000-0001-9457-7801; email: yun.zhang@siat.ac.cn; organization: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
– sequence: 5; givenname: Sam; surname: Kwong; fullname: Kwong, Sam; orcidid: 0000-0001-7484-7261; email: cssamk@cityu.edu.hk; organization: Department of Computer Science, City University of Hong Kong, Hong Kong, China
– sequence: 6; givenname: C.-C. Jay; surname: Kuo; fullname: Kuo, C.-C. Jay; orcidid: 0000-0001-9474-5035; email: cckuo@sipi.usc.edu; organization: Ming-Hsieh Department of Electrical Engineering, Signal and Image Processing Institute, University of Southern California, Los Angeles, CA, USA
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
DOI | 10.1109/TCSVT.2022.3199472 |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 433 |
Genre | orig-research |
GrantInformation_xml |
– fundername: Shanghai Municipal Science and Technology Major Project; grantid: 2021SHZDZX0100
– fundername: Hong Kong GRF-RGC General Research Fund; grantid: 11209819 (CityU 9042816); 11203820 (CityU 9042598)
– fundername: Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA); funderid: 10.13039/501100003452
– fundername: Shanghai Innovation Action Project of Science and Technology; grantid: 20511100700
– fundername: National Natural Science Foundation of China; grantid: 61976159; funderid: 10.13039/501100001809
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
ORCID | 0000-0001-7484-7261 0000-0001-9457-7801 0000-0002-9999-4871 0000-0001-9474-5035 |
PageCount | 13 |
PublicationCentury | 2000 |
PublicationDate | 2023-01-01
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationTitle | IEEE Transactions on Circuits and Systems for Video Technology
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2023 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 421 |
SubjectTerms | Artificial neural networks; asymmetric convolutional neural network; Asymmetry; Convolution; Convolutional neural networks; graph attention network; Image coding; Image compression; Information flow; Kernel; Modelling; Neural networks; Rate-distortion; Representations; self-attention; Source code; Training; variational autoencoder; Visualization
URI | https://ieeexplore.ieee.org/document/9858899 https://www.proquest.com/docview/2761374835/abstract/ |
Volume | 33 |