Convolution-Transformer for Image Feature Extraction
Published in | Computer Modeling in Engineering & Sciences, Vol. 141, No. 1, pp. 87–106 |
Main Authors | Yin, Lirong; Wang, Lei; Lu, Siyu; Wang, Ruiyang; Yang, Youshuai; Yang, Bo; Liu, Shan; AlSanad, Ahmed; AlQahtani, Salman A.; Yin, Zhengtong; Li, Xiaolu; Chen, Xiaobing; Zheng, Wenfeng |
Format | Journal Article |
Language | English |
Published | Henderson: Tech Science Press, 2024 |
Subjects | Accuracy; Artificial neural networks; Bias; Convergence; Feature extraction; Image classification; Invariance; Neural networks; Scale invariance; Stability |
Abstract | This study addresses the limitations of Transformer models in image feature extraction, particularly their lack of inductive bias for visual structures. Compared to Convolutional Neural Networks (CNNs), Transformers are more sensitive to the hyperparameters of their optimizers, which leads to instability and slow convergence. To tackle these challenges, we propose the Convolution-based Efficient Transformer Image Feature Extraction Network (CEFormer) as an enhancement of the Transformer architecture. The model incorporates E-Attention, depthwise separable convolution, and dilated convolution to introduce crucial inductive biases, such as translation invariance, locality, and scale invariance, into the Transformer framework. Additionally, a lightweight convolution module processes the input images, yielding faster convergence and improved stability. The result is an efficient convolution-combined Transformer image feature extraction network. Experimental results on the ImageNet-1k dataset demonstrate that the proposed network achieves better accuracy while maintaining high computational speed, reaching up to 85.0% Top-1 accuracy across various model sizes on image classification and outperforming various baseline models. When integrated into the Mask Region-based Convolutional Neural Network (Mask R-CNN) framework as a backbone network, CEFormer outperforms other models and achieves the highest mean Average Precision (mAP) scores. This research presents a significant advancement in Transformer-based image feature extraction, balancing performance and computational efficiency. |
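The abstract credits depthwise separable and dilated convolutions with keeping the network efficient while widening its view of the image. A minimal sketch of the arithmetic behind both claims (illustrative only — not the paper's code; channel counts and kernel size are arbitrary assumptions):

```python
# Illustrative arithmetic: why depthwise separable convolution is cheap,
# and how dilation enlarges the receptive field without extra weights.

def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) followed by
    a 1 x 1 pointwise conv that mixes channels."""
    return c_in * k * k + c_in * c_out

def dilated_receptive_field(k, d):
    """Effective span of a k x k kernel with dilation rate d."""
    return k + (k - 1) * (d - 1)

# Example channel counts (assumed, not from the paper): 64 -> 128, 3x3 kernel.
standard = conv_params(64, 128, 3)                   # 73,728 weights
separable = depthwise_separable_params(64, 128, 3)   # 8,768 weights
print(f"standard: {standard}, separable: {separable}, "
      f"ratio: {standard / separable:.1f}x")         # roughly 8.4x fewer weights
print(dilated_receptive_field(3, 2))                 # 3x3 kernel, dilation 2 -> spans 5
```

The same 3x3 kernel at dilation 2 covers a 5-pixel span, which is how stacked dilated convolutions inject scale-related context cheaply.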
Author | AlQahtani, Salman A.; AlSanad, Ahmed; Li, Xiaolu; Wang, Ruiyang; Liu, Shan; Zheng, Wenfeng; Yang, Youshuai; Yin, Zhengtong; Yin, Lirong; Chen, Xiaobing; Yang, Bo; Wang, Lei; Lu, Siyu |
ContentType | Journal Article |
Copyright | 2024. This work is licensed under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
DOI | 10.32604/cmes.2024.051083 |
Discipline | Computer Science |
EISSN | 1526-1506 |
EndPage | 106 |
ISSN | 1526-1506 1526-1492 |
Issue | 1 |
PageCount | 20 |
PublicationDate | 2024 |
PublicationPlace | Henderson |
PublicationTitle | Computer modeling in engineering & sciences |
PublicationYear | 2024 |
Publisher | Tech Science Press |
StartPage | 87 |
SubjectTerms | Accuracy; Artificial neural networks; Bias; Convergence; Feature extraction; Image classification; Invariance; Neural networks; Scale invariance; Stability |
Title | Convolution-Transformer for Image Feature Extraction |
URI | https://www.proquest.com/docview/3200121331 |
Volume | 141 |