Neuron Perception Inspired EEG Emotion Recognition With Parallel Contrastive Learning
Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, no. 8, pp. 14049-14062 |
Main Authors | Li, Dongdong; Huang, Shengyao; Xie, Li; Wang, Zhe; Xu, Jiazhen |
Format | Journal Article |
Language | English |
Published | United States: IEEE, 01.08.2025 |
Subjects | |
ISSN | 2162-237X 2162-2388 |
DOI | 10.1109/TNNLS.2025.3546283 |
Abstract | Considerable interindividual variability exists in electroencephalogram (EEG) signals, resulting in challenges for subject-independent emotion recognition tasks. Current research in cross-subject EEG emotion recognition has been insufficient in uncovering the shared neural underpinnings of affective processing in the human brain. To address this issue, we propose the parallel contrastive multisource domain adaptation (PCMDA) model, inspired by the neural representation mechanism in the ventral visual cortex. Our model employs a neuron-perception-inspired contrastive learning architecture for EEG-based emotion recognition in subject-independent scenarios. A two-stage alignment methodology is employed for the purpose of aligning numerous source domains with the target domain. This approach integrates a parallel contrastive loss (PCL) which simulates the self-supervised learning mechanism inherent in the neural representation of the human brain. Furthermore, a self-attention mechanism is integrated to extract emotion weights for each frequency band. Extensive experiments were conducted on three publicly available EEG emotion datasets, SJTU emotion EEG dataset (SEED), database for emotion analysis using physiological signals (DEAP), and finer-grained affective computing EEG dataset (FACED), to evaluate our proposed method. The results demonstrate that the PCMDA effectively utilizes the unique EEG features and frequency band information of each subject, leading to improved generalization across different subjects in comparison to other methods. |
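The abstract names two core ingredients of PCMDA — a parallel contrastive loss (PCL) used while aligning several source subjects to a target subject, and a self-attention step that weights EEG frequency bands — but the record carries no implementation details. The sketch below is a minimal, hypothetical illustration of those two ideas only: an InfoNCE-style contrastive term averaged in parallel over source domains, and a learned softmax weighting over band-wise features. All function names, tensor shapes, the number of bands, and the temperature value are assumptions made for this example; it is not the authors' PCMDA code.

```python
# Minimal illustrative sketch (not the published PCMDA implementation):
# an InfoNCE-style contrastive loss averaged in parallel over several
# source domains, plus a learned softmax weighting over EEG frequency bands.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BandAttention(nn.Module):
    """Assigns one attention weight per frequency band and pools over bands."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # scalar score per band feature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_bands, feat_dim)
        weights = torch.softmax(self.score(x).squeeze(-1), dim=-1)  # (batch, n_bands)
        return (weights.unsqueeze(-1) * x).sum(dim=1)               # (batch, feat_dim)


def info_nce(z_src: torch.Tensor, z_tgt: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Row i of z_src is treated as the positive pair of row i of z_tgt."""
    z_src = F.normalize(z_src, dim=-1)
    z_tgt = F.normalize(z_tgt, dim=-1)
    logits = z_src @ z_tgt.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(z_src.size(0))      # diagonal entries are the positives
    return F.cross_entropy(logits, targets)


def parallel_contrastive_loss(source_feats, target_feats):
    """Average one contrastive term per source domain ('parallel' over sources)."""
    losses = [info_nce(s, target_feats) for s in source_feats]
    return torch.stack(losses).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    n_bands, feat_dim, batch = 5, 32, 8   # 5 bands (delta..gamma) is a common EEG convention
    pool = BandAttention(feat_dim)
    target = pool(torch.randn(batch, n_bands, feat_dim))
    sources = [pool(torch.randn(batch, n_bands, feat_dim)) for _ in range(3)]
    print("parallel contrastive loss:", parallel_contrastive_loss(sources, target).item())
```

A full reproduction would additionally need the two-stage source-to-target alignment and the emotion classifier described in the abstract, which are omitted here.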
Author | Wang, Zhe; Xu, Jiazhen; Xie, Li; Li, Dongdong; Huang, Shengyao |
Author_xml | – sequence: 1; givenname: Dongdong; orcidid: 0000-0002-1880-8054; surname: Li; fullname: Li, Dongdong; email: ldd@ecust.edu.cn; organization: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China
– sequence: 2; givenname: Shengyao; surname: Huang; fullname: Huang, Shengyao; email: y30231017@mail.ecust.edu.cn; organization: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China
– sequence: 3; givenname: Li; orcidid: 0009-0005-0005-5153; surname: Xie; fullname: Xie, Li; email: y80200051@mail.ecust.edu.cn; organization: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China
– sequence: 4; givenname: Zhe; orcidid: 0000-0002-3759-2041; surname: Wang; fullname: Wang, Zhe; email: wangzhe@ecust.edu.cn; organization: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China
– sequence: 5; givenname: Jiazhen; surname: Xu; fullname: Xu, Jiazhen; email: 18253199502@163.com; organization: Cancer Institute, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/40085465 (View this record in MEDLINE/PubMed) |
CODEN | ITNNAL |
Cites_doi | 10.1109/TIM.2024.3398103 10.1007/BF00422717 10.1109/CVPR42600.2020.00975 10.1038/s41597-023-02650-w 10.1109/T-AFFC.2011.15 10.1016/j.inffus.2023.101847 10.1007/978-3-030-36708-4_3 10.1109/TCBB.2020.3018137 10.1109/TNNLS.2020.3044215 10.1109/TAMD.2015.2431497 10.1016/j.inffus.2023.102019 10.1109/TCDS.2018.2826840 10.1007/s40846-018-0425-7 10.1109/TAFFC.2020.2994159 10.1016/j.compbiomed.2021.105080 10.1109/TCDS.2020.2999337 10.3389/fnins.2021.778488 10.1109/TNNLS.2022.3145034 10.1109/TAFFC.2022.3164516 10.1137/1118101 10.1088/1741-2552/aace8c 10.3354/cr030079 10.1371/journal.pcbi.1011506 10.1016/j.physa.2022.127700 10.1016/B978-0-12-801851-4.00001-X 10.1109/TAFFC.2018.2885474 10.1016/j.measurement.2019.107003 10.1016/S1388-2457(00)00527-7 10.1007/978-3-319-49409-8_35 10.1109/ICASSP43922.2022.9746600 10.1109/TNSRE.2021.3111689 10.1109/TAFFC.2023.3288118 10.1162/jocn_a_00969 10.1016/j.patcog.2021.108430 10.5555/2946645.2946704 10.1177/10731911221134601 10.5555/3524938.3525087 10.1109/MCI.2015.2501545 10.1109/JSEN.2022.3144317 10.1016/j.tics.2016.08.003 10.1007/s10044-019-00860-w 10.1109/TETCI.2020.2997031 10.1109/SMC.2019.8914645 10.3390/brainsci11111392 10.1016/j.cortex.2017.01.009 10.1093/bioinformatics/btl242 10.3389/fpsyg.2017.01454 10.1109/TKDE.2021.3090866 |
ContentType | Journal Article |
DOI | 10.1109/TNNLS.2025.3546283 |
Discipline | Computer Science |
EISSN | 2162-2388 |
EndPage | 14062 |
ExternalDocumentID | 40085465 10_1109_TNNLS_2025_3546283 10926915 |
Genre | orig-research Journal Article |
GrantInformation_xml | – fundername: Natural Science Foundation of China grantid: 62276098; 62376095 funderid: 10.13039/501100001809 |
ISSN | 2162-237X 2162-2388 |
IsPeerReviewed | false |
IsScholarly | true |
Issue | 8 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
Notes | ObjectType-Article-1; SourceType-Scholarly Journals-1; ObjectType-Feature-2 |
ORCID | 0000-0002-3759-2041 0000-0002-1880-8054 0009-0005-0005-5153 |
PMID | 40085465 |
PQID | 3177480153 |
PQPubID | 23479 |
PageCount | 14 |
ParticipantIDs | pubmed_primary_40085465 ieee_primary_10926915 proquest_miscellaneous_3177480153 crossref_primary_10_1109_TNNLS_2025_3546283 |
PublicationCentury | 2000 |
PublicationDate | 2025-08-01 |
PublicationDateYYYYMMDD | 2025-08-01 |
PublicationDate_xml | – month: 08 year: 2025 text: 2025-08-01 day: 01 |
PublicationDecade | 2020 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States |
PublicationTitle | IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev | TNNLS |
PublicationTitleAlternate | IEEE Trans Neural Netw Learn Syst |
PublicationYear | 2025 |
Publisher | IEEE |
Publisher_xml | – name: IEEE |
References | ref13 ref12 ref15 ref14 ref53 ref52 ref11 ref10 ref17 ref16 ref19 ref51 ref50 van den Oord (ref18) 2018 Lawhern (ref54) 2018; 15 ref46 ref45 ref48 ref47 ref42 ref41 ref44 ref49 ref8 ref7 ref9 ref4 ref3 ref6 ref5 ref35 ref34 ref37 ref36 ref31 ref30 ref33 ref32 ref2 ref1 ref38 Choi (ref23); 36 Tzeng (ref40) 2014 Long (ref39) ref24 ref26 ref25 ref20 ref22 ref28 ref27 Van der Maaten (ref43) 2008; 9 ref29 Bakhtiari (ref21); 34 |
References_xml | – volume: 36 start-page: 50408 volume-title: Proc. Adv. Neural Inf. Process. Syst. ident: ref23 article-title: A dual-stream neural network explains the functional segregation of dorsal and ventral visual pathways in human brains – ident: ref52 doi: 10.1109/TIM.2024.3398103 – ident: ref26 doi: 10.1007/BF00422717 – ident: ref16 doi: 10.1109/CVPR42600.2020.00975 – ident: ref25 doi: 10.1038/s41597-023-02650-w – ident: ref24 doi: 10.1109/T-AFFC.2011.15 – ident: ref5 doi: 10.1016/j.inffus.2023.101847 – ident: ref9 doi: 10.1007/978-3-030-36708-4_3 – ident: ref38 doi: 10.1109/TCBB.2020.3018137 – ident: ref14 doi: 10.1109/TNNLS.2020.3044215 – ident: ref7 doi: 10.1109/TAMD.2015.2431497 – ident: ref1 doi: 10.1016/j.inffus.2023.102019 – ident: ref49 doi: 10.1109/TCDS.2018.2826840 – start-page: 97 volume-title: Proc. 32nd Int. Conf. Mach. Learn. ident: ref39 article-title: Learning transferable features with deep adaptation networks – ident: ref27 doi: 10.1007/s40846-018-0425-7 – ident: ref13 doi: 10.1109/TAFFC.2020.2994159 – volume: 34 start-page: 25164 volume-title: Proc. Adv. Neural Inf. Process. Syst. ident: ref21 article-title: The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning – ident: ref46 doi: 10.1016/j.compbiomed.2021.105080 – ident: ref12 doi: 10.1109/TCDS.2020.2999337 – ident: ref33 doi: 10.3389/fnins.2021.778488 – ident: ref15 doi: 10.1109/TNNLS.2022.3145034 – ident: ref20 doi: 10.1109/TAFFC.2022.3164516 – ident: ref32 doi: 10.1137/1118101 – volume: 15 issue: 5 year: 2018 ident: ref54 article-title: EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces publication-title: J. Neural Eng. doi: 10.1088/1741-2552/aace8c – ident: ref35 doi: 10.3354/cr030079 – ident: ref22 doi: 10.1371/journal.pcbi.1011506 – ident: ref51 doi: 10.1016/j.physa.2022.127700 – ident: ref3 doi: 10.1016/B978-0-12-801851-4.00001-X – ident: ref11 doi: 10.1109/TAFFC.2018.2885474 – ident: ref50 doi: 10.1016/j.measurement.2019.107003 – ident: ref53 doi: 10.1016/S1388-2457(00)00527-7 – ident: ref41 doi: 10.1007/978-3-319-49409-8_35 – ident: ref48 doi: 10.1109/ICASSP43922.2022.9746600 – ident: ref45 doi: 10.1109/TNSRE.2021.3111689 – ident: ref19 doi: 10.1109/TAFFC.2023.3288118 – ident: ref30 doi: 10.1162/jocn_a_00969 – ident: ref42 doi: 10.1016/j.patcog.2021.108430 – ident: ref10 doi: 10.5555/2946645.2946704 – ident: ref6 doi: 10.1177/10731911221134601 – ident: ref17 doi: 10.5555/3524938.3525087 – year: 2014 ident: ref40 article-title: Deep domain confusion: Maximizing for domain invariance publication-title: arXiv:1412.3474 – year: 2018 ident: ref18 article-title: Representation learning with contrastive predictive coding publication-title: arXiv:1807.03748 – ident: ref8 doi: 10.1109/MCI.2015.2501545 – ident: ref47 doi: 10.1109/JSEN.2022.3144317 – ident: ref29 doi: 10.1016/j.tics.2016.08.003 – ident: ref37 doi: 10.1007/s10044-019-00860-w – ident: ref36 doi: 10.1109/TETCI.2020.2997031 – ident: ref4 doi: 10.1109/SMC.2019.8914645 – volume: 9 start-page: 2579 issue: 86 year: 2008 ident: ref43 article-title: Visualizing data using t-SNE publication-title: J. Mach. Learn. Res. – ident: ref44 doi: 10.3390/brainsci11111392 – ident: ref28 doi: 10.1016/j.cortex.2017.01.009 – ident: ref34 doi: 10.1093/bioinformatics/btl242 – ident: ref2 doi: 10.3389/fpsyg.2017.01454 – ident: ref31 doi: 10.1109/TKDE.2021.3090866 |
SourceID | proquest pubmed crossref ieee |
SourceType | Aggregation Database Index Database Publisher |
StartPage | 14049 |
SubjectTerms | Adaptation models; Algorithms; Brain modeling; Contrastive learning; Databases, Factual; electroencephalogram (EEG); Electroencephalography; Electroencephalography - methods; Emotion recognition; Emotions - physiology; Feature extraction; Humans; Machine Learning; multisource domain adaptation (DA); Neural Networks, Computer; Neurons - physiology; Physiology; Streams; Visual perception; Visualization
Title | Neuron Perception Inspired EEG Emotion Recognition With Parallel Contrastive Learning |
URI | https://ieeexplore.ieee.org/document/10926915 https://www.ncbi.nlm.nih.gov/pubmed/40085465 https://www.proquest.com/docview/3177480153 |
Volume | 36 |