Mixture separability loss in a deep convolutional network for image classification
Published in | IET Image Processing, Vol. 13, no. 1, pp. 135 - 141 |
---|---|
Main Authors | Do, Trung Dung; Jin, Cheng-Bin; Nguyen, Van Huan; Kim, Hakil |
Format | Journal Article |
Language | English |
Published | The Institution of Engineering and Technology, 01.01.2019 |
Online Access | Get full text |
ISSN | 1751-9659 (print); 1751-9667 (electronic) |
DOI | 10.1049/iet-ipr.2018.5613 |
Abstract | In machine learning, the cost function is crucial because it measures how good or bad a system is. In image classification, well-known networks only consider modifying the network structures and applying cross-entropy loss at the end of the network. However, using only cross-entropy loss causes a network to stop updating weights when all training images are correctly classified. This is the problem of early saturation. This study proposes a novel cost function, called mixture separability loss (MSL), which updates the weights of the network even when most of the training images are accurately predicted. MSL consists of between-class and within-class loss. Between-class loss maximises the differences between inter-class images, whereas within-class loss minimises the differences between intra-class images. The authors designed the proposed loss function to attach to different convolutional layers in the network in order to utilise intermediate feature maps. Experiments show that a network with MSL deepens the learning process and obtains promising results with some public datasets, such as Street View House Number, Canadian Institute for Advanced Research, and the authors' self-collected Inha Computer Vision Lab gender dataset. |
---|---|
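The record reproduces only the abstract, not the loss formulation itself. As a rough, hypothetical sketch of the idea the abstract describes (a within-class term that pulls same-class intermediate features together and a between-class term that pushes class centres apart, combined with the usual cross-entropy), something like the following PyTorch code could serve as an illustration; the margin, the weighting factor `lam`, and all function names here are assumptions for illustration, not the authors' published MSL.

```python
# Illustrative sketch only: a centre-style within-class pull plus a margin-based
# between-class push on intermediate features, combined with cross-entropy.
# `margin` and `lam` are hypothetical hyper-parameters, not values from the paper.
import torch
import torch.nn.functional as F


def separability_loss(features, labels, margin=1.0):
    """features: (N, D) flattened intermediate features; labels: (N,) class ids."""
    classes = labels.unique()
    centers = torch.stack([features[labels == c].mean(dim=0) for c in classes])

    # Within-class term: mean squared distance of each sample to its class centre.
    within = torch.stack([
        ((features[labels == c] - centers[i]) ** 2).sum(dim=1).mean()
        for i, c in enumerate(classes)
    ]).mean()

    # Between-class term: penalise pairs of class centres closer than `margin`.
    if len(classes) > 1:
        dists = torch.cdist(centers, centers, p=2)
        mask = ~torch.eye(len(classes), dtype=torch.bool, device=features.device)
        between = F.relu(margin - dists[mask]).mean()
    else:
        between = features.new_zeros(())

    return within + between


def total_loss(logits, features, labels, lam=0.1):
    # Cross-entropy on the final logits plus the separability term on the
    # intermediate features, weighted by a hypothetical factor `lam`.
    return F.cross_entropy(logits, labels) + lam * separability_loss(features, labels)
```

Attaching such a term to several intermediate feature maps, as the abstract describes, would amount to summing this kind of separability term over the flattened outputs of the chosen convolutional layers.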
Author | Do, Trung Dung; Jin, Cheng-Bin; Nguyen, Van Huan; Kim, Hakil |
Author details | Do, Trung Dung (Department of Information and Communication Engineering, Inha University, 22212, Incheon, Republic of Korea); Jin, Cheng-Bin (ORCID 0000-0001-8486-5738; Department of Information and Communication Engineering, Inha University, 22212, Incheon, Republic of Korea); Nguyen, Van Huan (Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City, Vietnam); Kim, Hakil (ORCID 0000-0003-4232-3804; hikim@inha.ac.kr; Department of Information and Communication Engineering, Inha University, 22212, Incheon, Republic of Korea) |
Cited by | DOI 10.1007/s40815-019-00724-9 |
Copyright | The Institution of Engineering and Technology; 2021 The Authors. IET Image Processing published by John Wiley & Sons, Ltd. on behalf of The Institution of Engineering and Technology |
Discipline | Applied Sciences |
Grant Information | Industrial Technology Innovation Program |
Keywords | inter-class images; within-class loss; deep convolutional network; well-known networks; image classification; street view house number; self-collected Inha computer vision lab gender dataset; MSL; training images; novel cost function; machine learning; loss function; entropy; intra-class images; between-class loss; image representation; computer vision; mixture separability loss; Canadian institute for advanced research; learning (artificial intelligence); convolutional layers; network structures; feedforward neural nets; applying cross-entropy loss |
ORCID | 0000-0001-8486-5738 0000-0003-4232-3804 |
OpenAccessLink | https://ietresearch.onlinelibrary.wiley.com/doi/pdfdirect/10.1049/iet-ipr.2018.5613 |
PageCount | 7 |
URI | http://digital-library.theiet.org/content/journals/10.1049/iet-ipr.2018.5613 https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fiet-ipr.2018.5613 |