An experimental comparison of the widely used pre‐trained deep neural networks for image classification tasks towards revealing the promise of transfer‐learning
Published in | Concurrency and computation Vol. 34; no. 24 |
---|---|
Main Authors | Kabakus, Abdullah Talha; Erdogmus, Pakize |
Format | Journal Article |
Language | English |
Published | Hoboken, USA: John Wiley & Sons, Inc; Wiley Subscription Services, Inc, 01.11.2022 |
Online Access | Get full text |
Abstract | Summary
The easiest way to build a solution based on deep neural networks is to use pre‐trained models through the transfer‐learning technique. Deep learning platforms provide various pre‐trained deep neural networks that can be readily applied to image classification tasks. So, “Which pre‐trained model provides the best performance for image classification tasks?” is a question that instinctively comes to mind and that the research community should shed light on. To this end, we propose an experimental comparison of six popular pre‐trained deep neural networks, namely, (i) VGG19, (ii) ResNet50, (iii) DenseNet201, (iv) MobileNetV2, (v) InceptionV3, and (vi) Xception, employing them through the transfer‐learning technique. The proposed benchmark models were trained and evaluated under the same configurations on two gold‐standard datasets, namely, (i) CIFAR‐10 and (ii) Stanford Dogs. Three evaluation metrics were employed to measure performance differences between the pre‐trained models: (i) accuracy, (ii) training duration, and (iii) inference time. The key findings obtained through a wide variety of experiments are discussed. |
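The three evaluation metrics named in the abstract (accuracy, training duration, and inference time) can be measured with a small harness like the sketch below. Note that the `benchmark` function and the majority-class stand-in "model" are hypothetical illustrations, not the paper's code; a real run would substitute a Keras pre-trained model and one of the two datasets in their place.

```python
import time

def benchmark(predict_fn, train_fn, x_test, y_test):
    """Measure the three metrics from the abstract for one model:
    accuracy, training duration, and inference time."""
    t0 = time.perf_counter()
    model = train_fn()                       # train under a fixed configuration
    training_duration = time.perf_counter() - t0

    t0 = time.perf_counter()
    predictions = [predict_fn(model, x) for x in x_test]
    inference_time = time.perf_counter() - t0

    correct = sum(p == y for p, y in zip(predictions, y_test))
    accuracy = correct / len(y_test)
    return accuracy, training_duration, inference_time

# Hypothetical stand-in "model": a constant classifier that predicts
# the majority class seen during training.
def train_majority():
    train_labels = [0, 1, 1, 1, 0]
    return max(set(train_labels), key=train_labels.count)

acc, t_train, t_inf = benchmark(
    predict_fn=lambda model, x: model,       # always predicts the stored class
    train_fn=train_majority,
    x_test=[10, 20, 30, 40],
    y_test=[1, 1, 0, 1],
)
print(round(acc, 2))  # → 0.75
```

Holding the training configuration fixed across models, as the paper does, makes the duration and inference-time columns directly comparable between architectures.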
Author | Kabakus, Abdullah Talha (Duzce University; ORCID 0000-0003-2181-4292; talhakabakus@gmail.com); Erdogmus, Pakize (Duzce University) |
BookMark | eNp1kUtuGzEMhoUiAZoX0CMIyKabSfSYGWWWgZG0BQI0i2Q9oCUqlTOWptS4rnc9Qg-Rk-Ukke2iu6xIgh_5E_yP2UFMERn7JMWFFEJd2hEvjJLtB3YkG60q0er64H-u2o_sOOeFEFIKLY_Yy3Xk-HtECkuMEwzcpuUIFHKKPHk-_UC-Dg6HDV9ldHwkfP3zdyIIsVQOceQRV1TmIk7rRM-Z-0Q8LOEJuR0g5-CDhSmUdRPk0p7SGshlTvgLYQjxaacxUlqGjDtJgpg9UtEZECgW5JQdehgynv2LJ-zx9uZh9rW6-_7l2-z6rrKq0211VV91VkBtWt1I26Gx3mgjXCN0PXeg5kZD55VycyPQg8Wudhp8bSSgdGj1CTvf7y3n_FxhnvpFWlEskr0ySrSNbBpdqM97ylLKmdD3Y_ke0KaXot960BcP-q0HBa326DoMuHmX62f3Nzv-Dd-2kF4 |
ContentType | Journal Article |
Copyright | 2022 John Wiley & Sons, Ltd. |
DOI | 10.1002/cpe.7216 |
Discipline | Computer Science |
EISSN | 1532-0634 |
Genre | article |
ISSN | 1532-0626 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 24 |
ORCID | 0000-0003-2181-4292 |
PageCount | 15 |
PublicationDate | 1 November 2022 |
PublicationPlace | Hoboken, USA |
PublicationTitle | Concurrency and computation |
PublicationYear | 2022 |
Publisher | John Wiley & Sons, Inc Wiley Subscription Services, Inc |
SubjectTerms | Artificial neural networks; Benchmarks; convolutional neural network; Deep learning; deep neural network; Image classification; Keras; Machine learning; Neural networks; TensorFlow; transfer‐learning |
URI | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcpe.7216 https://www.proquest.com/docview/2720651553 |
Volume | 34 |