MMDA: A Multimodal and Multisource Domain Adaptation Method for Cross-Subject Emotion Recognition From EEG and Eye Movement Signals
| Published in | IEEE Transactions on Computational Social Systems, pp. 1 - 14 |
|---|---|
| Main Authors | Jimenez-Guarneros, Magdiel; Fuentes-Pineda, Gibran; Grande-Barreto, Jonas |
| Format | Journal Article |
| Language | English |
| Published | IEEE, 2024 |
| ISSN | 2329-924X (print); 2373-7476 (electronic) |
| DOI | 10.1109/TCSS.2024.3519300 |
Abstract | Multimodal emotion recognition from electroencephalogram (EEG) and eye movement signals has been shown to be a promising approach for providing more discriminative information about human emotional states. However, most current works rely on a subject-dependent approach, limiting their applicability to new users. Recently, some studies have explored multimodal domain adaptation to address this issue by transferring information from known subjects to new ones. Unfortunately, existing methods are still exposed to negative transfer, because a suboptimal distribution alignment is performed between subjects and irrelevant information is not discarded. In this article, we present a multimodal and multisource domain adaptation (MMDA) method, which adopts the following three strategies: 1) marginal and conditional distribution alignments must be performed between each known subject and a new one; 2) relevant distribution alignments must be prioritized to avoid negative transfer; and 3) modality fusion results should be improved by extracting more discriminative features from EEG signals and selecting relevant features across modalities. Our proposed method was evaluated with leave-one-subject-out cross-validation on four public datasets: SEED, SEED-GER, SEED-IV, and SEED-V. Experimental results show that our proposal outperforms state-of-the-art results for each dataset when subject data from different sessions are combined into a single dataset. Moreover, MMDA exceeds the state of the art in 8 out of 11 different sessions when each session is evaluated separately. |
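The evaluation protocol named in the abstract, leave-one-subject-out cross-validation, holds out each subject in turn as the unseen target while training on all remaining subjects. A minimal sketch of the split logic (illustrative only; `loso_splits` and the toy data below are not from the paper):

```python
import numpy as np

def loso_splits(subject_ids):
    """Yield (train_idx, test_idx) pairs, holding out one subject per fold."""
    subject_ids = np.asarray(subject_ids)
    for held_out in np.unique(subject_ids):
        test_mask = subject_ids == held_out
        yield np.where(~test_mask)[0], np.where(test_mask)[0]

# Toy example: 3 subjects with 2 trials each.
subjects = [1, 1, 2, 2, 3, 3]
folds = list(loso_splits(subjects))
# One fold per subject; each test set contains exactly one subject's trials,
# and that subject never appears in the corresponding training set.
```

In a multisource setting like MMDA, each training subject in a fold would be treated as a separate source domain rather than pooled into one training set.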
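The first strategy in the abstract, marginal distribution alignment between subjects, is commonly instantiated in domain adaptation with a discrepancy measure such as Maximum Mean Discrepancy (MMD). A minimal numpy sketch of an RBF-kernel MMD estimate (an assumption for illustration, not the authors' MMDA implementation; `mmd_rbf` and `gamma` are hypothetical names):

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Biased squared-MMD estimate between samples x and y with an RBF kernel."""
    def k(a, b):
        # Pairwise squared Euclidean distances, then the RBF kernel.
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
same = mmd_rbf(rng.normal(0, 1, (100, 4)), rng.normal(0, 1, (100, 4)))
shifted = mmd_rbf(rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4)))
# shifted > same: the discrepancy grows when the two marginals differ.
```

Minimizing such a discrepancy between each source subject's features and the target subject's features is one standard way to perform the per-subject marginal alignment the abstract describes; conditional alignment additionally conditions the comparison on (pseudo-)labels.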
Author | Jimenez-Guarneros, Magdiel; Fuentes-Pineda, Gibran; Grande-Barreto, Jonas |
Author details |
1. Magdiel Jimenez-Guarneros (ORCID 0000-0001-9675-7494; mjmnzg@gmail.com), Department of Computer Science, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (IIMAS), Universidad Nacional Autónoma de México (UNAM), Coyoacán, Ciudad de México, Mexico
2. Gibran Fuentes-Pineda (ORCID 0000-0002-1964-8208; gibranfp@unam.mx), Department of Computer Science, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (IIMAS), Universidad Nacional Autónoma de México (UNAM), Coyoacán, Ciudad de México, Mexico
3. Jonas Grande-Barreto (ORCID 0000-0003-3789-1479; jonas.barreto385@uppuebla.edu.mx), Information Technology Engineering, Universidad Politécnica de Puebla, Cuanalá, Puebla, Mexico |
CODEN | ITCSGL |
ContentType | Journal Article |
Discipline | Social Sciences (General) |
Genre | orig-research |
IsPeerReviewed | true |
IsScholarly | true |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
PageCount | 14 |
PublicationDate | 2024 |
PublicationTitle | IEEE Transactions on Computational Social Systems |
PublicationTitleAbbrev | TCSS |
PublicationYear | 2024 |
Publisher | IEEE |
SubjectTerms | Adaptation models; Brain modeling; Correlation; Data models; Deep learning; electroencephalogram (EEG); Electroencephalography; Emotion recognition; eye movement (EM); Feature extraction; multimodal emotion recognition (MER); multimodal unsupervised domain adaptation (UDA); multisource domain adaptation (MDA); Neural networks; Physiology; Training |
URI | https://ieeexplore.ieee.org/document/10819285 |