Cross-modality Features Fusion for Synthetic Aperture Radar Image Segmentation
Published in | IEEE transactions on geoscience and remote sensing Vol. 61; p. 1 |
---|---|
Main Authors | Gao, Fei; Huang, Heqing; Yue, Zhenyu; Li, Dongyu; Ge, Shuzhi Sam; Lee, Tong Heng; Zhou, Huiyu |
Format | Journal Article |
Language | English |
Published | New York: IEEE, 01.01.2023; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Subjects | Accuracy; Coders; conditional random field; Conditional random fields; Context modeling; Convolutional neural networks; cross-modality features; Data mining; Feature extraction; fully convolutional network; Image enhancement; Image processing; Image segmentation; Pixels; Radar; Radar imaging; Radar polarimetry; SAR (radar); Synthetic aperture radar; Transformers |
Online Access | Get full text |
Abstract | Synthetic Aperture Radar (SAR) image segmentation stands as a formidable research frontier within the domain of SAR image interpretation. Fully convolutional network (FCN) methods have recently brought remarkable improvements in SAR image segmentation. Nevertheless, these methods do not utilize the peculiarities of SAR images, leading to suboptimal segmentation accuracy. To address this issue, we rethink SAR image segmentation in terms of the sequential information of transformers and cross-modal features. We first discuss the peculiarities of SAR images and extract the mean and texture features, which are utilized as auxiliary features. The extraction of auxiliary features helps unearth the distinctive information in SAR images. Afterward, we present a feature-enhanced FCN with a transformer encoder structure, termed FE-FCN, which extracts both context-level and pixel-level features. In FE-FCN, the features of each single-modality encoder are aligned and inserted into the model to explore the potential correspondence between modalities. We also employ long skip connections to share each modality's distinguishing and particular features. Finally, we present the connection-enhanced conditional random field (CE-CRF) to capture the connection information of the image pixels. Since the CE-CRF utilizes the auxiliary features to enhance the reliability of the connection information, the segmentation results of FE-FCN are further optimized. Comparative experiments were conducted on the Fangchenggang (FCG), Pucheng (PC), and Gaofen (GF) SAR datasets. Our method demonstrates superior segmentation accuracy compared to other conventional image segmentation methods, as confirmed by the experimental results. |
---|---|
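The abstract mentions mean and texture auxiliary features but this record does not specify how they are computed. The following is a minimal illustrative sketch, assuming the mean feature is a local box-filter average of the SAR amplitude and the texture feature is a local standard deviation; the window size and the speckle-taming log transform are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def auxiliary_features(sar_amplitude: np.ndarray, window: int = 7):
    """Compute illustrative mean and texture auxiliary maps for a SAR image.

    sar_amplitude: 2-D array of SAR amplitude values.
    window: side length of the local window (an assumed value).
    Returns (mean_map, texture_map) with the same shape as the input.
    """
    # Log transform is a common way to tame multiplicative speckle (assumption).
    img = np.log1p(sar_amplitude.astype(np.float64))

    # Local mean via a uniform (box) filter.
    mean_map = uniform_filter(img, size=window)

    # Local variance = E[x^2] - (E[x])^2; its square root serves as a simple texture measure.
    mean_sq = uniform_filter(img * img, size=window)
    texture_map = np.sqrt(np.maximum(mean_sq - mean_map**2, 0.0))

    return mean_map, texture_map


if __name__ == "__main__":
    demo = np.random.gamma(shape=1.0, scale=1.0, size=(128, 128))  # speckle-like toy image
    mean_map, texture_map = auxiliary_features(demo)
    print(mean_map.shape, texture_map.shape)
```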
AbstractList | Synthetic Aperture Radar (SAR) image segmentation stands as a formidable research frontier within the domain of SAR image interpretation. Fully convolutional network (FCN) methods have recently brought remarkable improvements in SAR image segmentation. Nevertheless, these methods do not utilize the peculiarities of SAR images, leading to suboptimal segmentation accuracy. To address this issue, we rethink SAR image segmentation in terms of the sequential information of transformers and cross-modal features. We first discuss the peculiarities of SAR images and extract the mean and texture features, which are utilized as auxiliary features. The extraction of auxiliary features helps unearth the distinctive information in SAR images. Afterward, we present a feature-enhanced FCN with a transformer encoder structure, termed FE-FCN, which extracts both context-level and pixel-level features. In FE-FCN, the features of each single-modality encoder are aligned and inserted into the model to explore the potential correspondence between modalities. We also employ long skip connections to share each modality's distinguishing and particular features. Finally, we present the connection-enhanced conditional random field (CE-CRF) to capture the connection information of the image pixels. Since the CE-CRF utilizes the auxiliary features to enhance the reliability of the connection information, the segmentation results of FE-FCN are further optimized. Comparative experiments were conducted on the Fangchenggang (FCG), Pucheng (PC), and Gaofen (GF) SAR datasets. Our method demonstrates superior segmentation accuracy compared to other conventional image segmentation methods, as confirmed by the experimental results. |
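The record describes FE-FCN only at the level of aligned single-modality encoder features, cross-modal insertion, and long skip connections. The sketch below is a generic two-branch fusion block in PyTorch, assuming 1x1 convolutions for alignment and channel concatenation for fusion; it illustrates the general idea of sharing features across modalities with a long skip connection, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Toy two-branch fusion: align each modality, fuse, and keep a long skip."""

    def __init__(self, sar_channels: int, aux_channels: int, out_channels: int):
        super().__init__()
        # 1x1 convolutions project both modalities into a shared channel space (assumption).
        self.align_sar = nn.Conv2d(sar_channels, out_channels, kernel_size=1)
        self.align_aux = nn.Conv2d(aux_channels, out_channels, kernel_size=1)
        # Fuse the concatenated, aligned features.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, sar_feat: torch.Tensor, aux_feat: torch.Tensor) -> torch.Tensor:
        sar_aligned = self.align_sar(sar_feat)
        aux_aligned = self.align_aux(aux_feat)
        fused = self.fuse(torch.cat([sar_aligned, aux_aligned], dim=1))
        # Long skip connection: the aligned SAR features are added back to the fused output.
        return fused + sar_aligned


if __name__ == "__main__":
    block = CrossModalFusion(sar_channels=64, aux_channels=2, out_channels=64)
    sar = torch.randn(1, 64, 32, 32)   # SAR-branch feature map
    aux = torch.randn(1, 2, 32, 32)    # auxiliary (mean/texture) feature map
    print(block(sar, aux).shape)       # torch.Size([1, 64, 32, 32])
```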
Author | Li, Dongyu Gao, Fei Huang, Heqing Yue, Zhenyu Lee, Tong Heng Ge, Shuzhi Sam Zhou, Huiyu |
Author_xml | – sequence: 1 givenname: Fei orcidid: 0000-0002-1489-0812 surname: Gao fullname: Gao, Fei organization: School of Electronic and Information Engineering, Beihang University, Beijing, China – sequence: 2 givenname: Heqing orcidid: 0000-0002-7080-3701 surname: Huang fullname: Huang, Heqing organization: School of Electronic and Information Engineering, Beihang University, Beijing, China – sequence: 3 givenname: Zhenyu orcidid: 0000-0002-9497-6164 surname: Yue fullname: Yue, Zhenyu organization: School of Electronic and Information Engineering, Beihang University, Beijing, China – sequence: 4 givenname: Dongyu orcidid: 0000-0001-8338-0536 surname: Li fullname: Li, Dongyu organization: School of Cyber Science and Technology, Beihang University, Beijing, China – sequence: 5 givenname: Shuzhi Sam orcidid: 0000-0001-5549-312X surname: Ge fullname: Ge, Shuzhi Sam organization: Department of Electrical and Computer Engineering, National University of Singapore, Singapore – sequence: 6 givenname: Tong Heng orcidid: 0000-0002-2785-516X surname: Lee fullname: Lee, Tong Heng organization: Department of Electrical and Computer Engineering, National University of Singapore, Singapore – sequence: 7 givenname: Huiyu orcidid: 0000-0003-1634-9840 surname: Zhou fullname: Zhou, Huiyu organization: Department of Informatics, University of Leicester, Leicester, U.K |
CODEN | IGRSD2 |
CitedBy_id | crossref_primary_10_1109_JSTARS_2024_3376070 crossref_primary_10_3390_rs16020287 |
Cites_doi | 10.1109/TPAMI.2016.2572683 10.1109/TIP.2019.2916757 10.1109/TMI.2019.2959609 10.1109/TPAMI.2016.2644615 10.24963/ijcai.2017/307 10.1016/j.patcog.2016.11.015 10.1109/CVPR.2017.660 10.1109/MGRS.2013.2248301 10.1109/CVPR.2019.01270 10.1109/TGRS.2022.3144165 10.1109/JSTARS.2015.2502991 10.1007/s12559-016-9405-9 10.1109/JSTARS.2021.3076085 10.1007/s12559-019-09639-x 10.1007/978-3-031-25066-8_9 10.1109/CVPR.2017.353 10.1609/aaai.v34i07.6805 10.1109/LGRS.2021.3079925 10.1109/TGRS.2022.3227260 10.1109/LGRS.2018.2864342 10.1109/TGRS.2021.3130716 10.1109/TGRS.2021.3095166 10.3390/rs5020716 10.1109/ICCV.2019.00069 10.3390/rs11010020 10.1109/TPAMI.2017.2699184 10.1109/CVPRW.2018.00035 10.1109/TGRS.2022.3231253 10.1109/LGRS.2014.2307586 10.1109/CVPR.2018.00745 10.1109/JSTARS.2020.3016064 10.1109/LGRS.2015.2478256 10.1109/ICCV.2017.324 10.1080/2150704X.2020.1730472 10.1109/CVPR.2017.549 10.1109/CVPR46437.2021.00681 10.1109/TGRS.2015.2501162 10.1109/IGARSS47720.2021.9553563 10.1109/TGRS.2012.2203604 10.1109/TIM.2022.3178991 10.1007/978-3-030-87193-2_2 10.1109/LGRS.2018.2795531 10.1109/LGRS.2021.3058049 10.1109/ICIVC.2016.7571265 10.1109/LGRS.2018.2886559 10.1109/TGRS.2022.3221492 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
Copyright_xml | – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
DOI | 10.1109/TGRS.2023.3307825 |
Discipline | Engineering Physics |
EISSN | 1558-0644 |
EndPage | 1 |
ExternalDocumentID | 10_1109_TGRS_2023_3307825 10227299 |
Genre | orig-research |
GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: 61071139; 61771027 funderid: 10.13039/501100001809 |
ISSN | 0196-2892 |
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
ORCID | 0000-0002-2785-516X 0000-0002-9497-6164 0000-0002-7080-3701 0000-0003-1634-9840 0000-0002-1489-0812 0000-0001-5549-312X 0000-0001-8338-0536 |
PQID | 2862640454 |
PQPubID | 85465 |
PageCount | 1 |
ParticipantIDs | proquest_journals_2862640454 ieee_primary_10227299 crossref_primary_10_1109_TGRS_2023_3307825 |
PublicationCentury | 2000 |
PublicationDate | 2023-01-01 |
PublicationDateYYYYMMDD | 2023-01-01 |
PublicationDate_xml | – month: 01 year: 2023 text: 2023-01-01 day: 01 |
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationPlace_xml | – name: New York |
PublicationTitle | IEEE transactions on geoscience and remote sensing |
PublicationTitleAbbrev | TGRS |
PublicationYear | 2023 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
References | ref13 ref12 ref15 ref14 ref11 ref10 dosovitskiy (ref44) 2020 qin (ref7) 2014; 11 ref17 ref16 ref19 ref18 ref51 ref50 guo (ref52) 2022 ref46 ref48 ref47 ref42 ref43 ref49 ref8 ref9 ronneberger (ref28) 2015 ref4 ref3 ref6 ref5 ref40 ref35 ref34 vaswani (ref24) 2017; 30 ref37 ref36 ref31 ref30 ref33 ref32 ref2 ref1 ref39 ref38 xu (ref53) 2022 ref23 ref26 ref25 ref20 chen (ref41) 2021 ref22 ref21 ref27 ref29 xie (ref45) 2021; 34 |
References_xml | – ident: ref12 doi: 10.1109/TPAMI.2016.2572683 – year: 2020 ident: ref44 article-title: An image is worth 16×16 words: Transformers for image recognition at scale publication-title: arXiv 2010 11929 contributor: fullname: dosovitskiy – ident: ref35 doi: 10.1109/TIP.2019.2916757 – ident: ref18 doi: 10.1109/TMI.2019.2959609 – year: 2021 ident: ref41 article-title: TransUNet: Transformers make strong encoders for medical image segmentation publication-title: arXiv 2102 04306 contributor: fullname: chen – ident: ref27 doi: 10.1109/TPAMI.2016.2644615 – ident: ref36 doi: 10.24963/ijcai.2017/307 – ident: ref5 doi: 10.1016/j.patcog.2016.11.015 – year: 2022 ident: ref52 article-title: SegNeXt: Rethinking convolutional attention design for semantic segmentation publication-title: arXiv 2209 08575 contributor: fullname: guo – ident: ref21 doi: 10.1109/CVPR.2017.660 – ident: ref1 doi: 10.1109/MGRS.2013.2248301 – ident: ref15 doi: 10.1109/CVPR.2019.01270 – ident: ref48 doi: 10.1109/TGRS.2022.3144165 – ident: ref8 doi: 10.1109/JSTARS.2015.2502991 – ident: ref6 doi: 10.1007/s12559-016-9405-9 – ident: ref33 doi: 10.1109/JSTARS.2021.3076085 – ident: ref4 doi: 10.1007/s12559-019-09639-x – ident: ref39 doi: 10.1007/978-3-031-25066-8_9 – ident: ref30 doi: 10.1109/CVPR.2017.353 – start-page: 234 year: 2015 ident: ref28 article-title: U-Net: Convolutional networks for biomedical image segmentation publication-title: Proc Int Conf Med Image Comput Comput -Assist Intervent contributor: fullname: ronneberger – ident: ref29 doi: 10.1609/aaai.v34i07.6805 – ident: ref34 doi: 10.1109/LGRS.2021.3079925 – ident: ref11 doi: 10.1109/TGRS.2022.3227260 – ident: ref17 doi: 10.1109/LGRS.2018.2864342 – year: 2022 ident: ref53 article-title: PIDNet: A real-time semantic segmentation network inspired by PID controllers publication-title: arXiv 2206 02066 contributor: fullname: xu – volume: 34 start-page: 12077 year: 2021 ident: ref45 article-title: SegFormer: Simple and efficient design for semantic segmentation with transformers publication-title: Proc Adv Neural Inf Process Syst contributor: fullname: xie – ident: ref49 doi: 10.1109/TGRS.2021.3130716 – ident: ref47 doi: 10.1109/TGRS.2021.3095166 – ident: ref2 doi: 10.3390/rs5020716 – ident: ref22 doi: 10.1109/ICCV.2019.00069 – ident: ref19 doi: 10.3390/rs11010020 – ident: ref20 doi: 10.1109/TPAMI.2017.2699184 – ident: ref14 doi: 10.1109/CVPRW.2018.00035 – ident: ref3 doi: 10.1109/TGRS.2022.3231253 – volume: 11 start-page: 1742 year: 2014 ident: ref7 article-title: SAR image segmentation via hierarchical region merging and edge evolving with generalized gamma distribution publication-title: IEEE Geosci Remote Sens Lett doi: 10.1109/LGRS.2014.2307586 contributor: fullname: qin – ident: ref37 doi: 10.1109/CVPR.2018.00745 – ident: ref32 doi: 10.1109/JSTARS.2020.3016064 – ident: ref50 doi: 10.1109/LGRS.2015.2478256 – ident: ref38 doi: 10.1109/ICCV.2017.324 – ident: ref10 doi: 10.1080/2150704X.2020.1730472 – ident: ref23 doi: 10.1109/CVPR.2017.549 – ident: ref43 doi: 10.1109/CVPR46437.2021.00681 – ident: ref51 doi: 10.1109/TGRS.2015.2501162 – ident: ref26 doi: 10.1109/IGARSS47720.2021.9553563 – ident: ref9 doi: 10.1109/TGRS.2012.2203604 – ident: ref40 doi: 10.1109/TIM.2022.3178991 – ident: ref42 doi: 10.1007/978-3-030-87193-2_2 – ident: ref13 doi: 10.1109/LGRS.2018.2795531 – ident: ref31 doi: 10.1109/LGRS.2021.3058049 – ident: ref25 doi: 10.1109/ICIVC.2016.7571265 – volume: 30 start-page: 1 year: 2017 ident: ref24 article-title: Attention is all you need 
publication-title: Proc Adv Neural Inf Process Syst contributor: fullname: vaswani – ident: ref16 doi: 10.1109/LGRS.2018.2886559 – ident: ref46 doi: 10.1109/TGRS.2022.3221492 |
SourceID | proquest crossref ieee |
SourceType | Aggregation Database Publisher |
StartPage | 1 |
SubjectTerms | Accuracy Coders conditional random field Conditional random fields Context modeling Convolutional neural networks cross-modality features Data mining Feature extraction fully convolutional network Image enhancement Image processing Image segmentation Pixels Radar Radar imaging Radar polarimetry SAR (radar) Synthetic aperture radar Transformers |
Title | Cross-modality Features Fusion for Synthetic Aperture Radar Image Segmentation |
URI | https://ieeexplore.ieee.org/document/10227299 https://www.proquest.com/docview/2862640454 |
Volume | 61 |