MTADA: A Multi-task Adversarial Domain Adaptation Network for EEG-based Cross-subject Emotion Recognition
Published in | IEEE Transactions on Affective Computing, pp. 1–15 |
Main Authors | Qiu, Lina; Ying, Zuorui; Song, Xianyue; Feng, Weisen; Zhou, Chengju; Pan, Jiahui |
Format | Journal Article |
Language | English |
Published | IEEE, 2025 |
Abstract | In electroencephalogram (EEG)-based emotion recognition, the applicability of most current models is limited by inter-subject variability and emotion complexity. This study proposes a multi-task adversarial domain adaptation (MTADA) network to enhance cross-subject emotion recognition performance. The model first employs a domain matching strategy to select the source domain that best matches the target domain. Then, adversarial domain adaptation is used to learn the difference between source and target domains, and a fine-grained joint domain discriminator is constructed to align them by incorporating category information. At the same time, a multi-task learning mechanism is utilized to learn the intrinsic relationships between different emotions and predict multiple emotions simultaneously. We conducted comprehensive experiments on two public datasets, DEAP and FACED. On DEAP, the average accuracies for valence, arousal and dominance are 76.39%, 69.74% and 68.26%, respectively. On FACED, the average accuracies for valence and arousal are 78.90% and 77.95%. When using a subject from DEAP as the source domain to predict the subjects in FACED, the accuracies for valence and arousal are 61.07% and 60.82%. These results show that our MTADA model improves cross-subject emotion recognition and outperforms most state-of-the-art methods, which may provide a new approach for EEG-based emotion brain-computer interface systems. |
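The following is a minimal PyTorch sketch of the kind of architecture the abstract describes: a shared EEG feature encoder, one classification head per emotion dimension (multi-task learning), and a domain discriminator trained adversarially through gradient reversal. All class names, layer sizes, and the gradient-reversal formulation are illustrative assumptions, not the authors' MTADA implementation; the domain-matching step and the fine-grained joint (category-conditioned) domain discriminator are omitted here.

```python
# Hypothetical sketch of a multi-task domain-adversarial model for EEG features.
# Names and dimensions are assumptions for illustration, not the MTADA code.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class MultiTaskDANN(nn.Module):
    def __init__(self, n_features=160, n_tasks=3, n_classes=2, hidden=128):
        super().__init__()
        # Shared feature extractor over precomputed EEG features (e.g., per-band, per-channel).
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One classification head per emotion dimension (e.g., valence/arousal/dominance).
        self.task_heads = nn.ModuleList(
            [nn.Linear(hidden, n_classes) for _ in range(n_tasks)]
        )
        # Domain discriminator (source vs. target) fed through gradient reversal.
        self.domain_head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        task_logits = [head(z) for head in self.task_heads]
        domain_logits = self.domain_head(GradReverse.apply(z, lambd))
        return task_logits, domain_logits

# Minimal usage: sum the per-task losses and the adversarial domain loss.
model = MultiTaskDANN()
x = torch.randn(8, 160)                       # batch of EEG feature vectors
task_logits, domain_logits = model(x, lambd=0.5)
task_labels = [torch.randint(0, 2, (8,)) for _ in range(3)]
domain_labels = torch.randint(0, 2, (8,))     # 0 = source subject, 1 = target subject
ce = nn.CrossEntropyLoss()
loss = sum(ce(l, y) for l, y in zip(task_logits, task_labels)) + ce(domain_logits, domain_labels)
loss.backward()
```

In this setup the gradient-reversal layer flips the gradient flowing from the domain classifier into the encoder, so the shared features are pushed to be discriminative for the emotion tasks while becoming indistinguishable across source and target subjects.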
Author | Pan, Jiahui; Feng, Weisen; Ying, Zuorui; Song, Xianyue; Qiu, Lina; Zhou, Chengju |
Author_xml | – sequence: 1 givenname: Lina surname: Qiu fullname: Qiu, Lina email: lina.qiu@scnu.edu.cn organization: School of Artificial Intelligence, South China Normal University, Foshan, China
– sequence: 2 givenname: Zuorui surname: Ying fullname: Ying, Zuorui email: 854535913@qq.com organization: School of Artificial Intelligence, South China Normal University, Foshan, China
– sequence: 3 givenname: Xianyue surname: Song fullname: Song, Xianyue email: 2909255488@qq.com organization: School of Artificial Intelligence, South China Normal University, Foshan, China
– sequence: 4 givenname: Weisen surname: Feng fullname: Feng, Weisen email: fws0104@163.com organization: School of Artificial Intelligence, South China Normal University, Foshan, China
– sequence: 5 givenname: Chengju surname: Zhou fullname: Zhou, Chengju email: cjzhou@scnu.edu.cn organization: School of Artificial Intelligence, South China Normal University, Foshan, China
– sequence: 6 givenname: Jiahui surname: Pan fullname: Pan, Jiahui email: panjiahui@m.scnu.edu.cn organization: School of Artificial Intelligence, South China Normal University, Foshan, China |
CODEN | ITACBQ |
ContentType | Journal Article |
DOI | 10.1109/TAFFC.2025.3595137 |
Discipline | Computer Science |
EISSN | 1949-3045 |
EndPage | 15 |
ExternalDocumentID | 10_1109_TAFFC_2025_3595137 11108243 |
Genre | orig-research |
ISSN | 1949-3045 |
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
PageCount | 15 |
PublicationCentury | 2000 |
PublicationDate | 2025-00-00 |
PublicationDateYYYYMMDD | 2025-01-01 |
PublicationDate_xml | – year: 2025 text: 2025-00-00 |
PublicationDecade | 2020 |
PublicationTitle | IEEE Transactions on Affective Computing |
PublicationTitleAbbrev | TAFFC |
PublicationYear | 2025 |
Publisher | IEEE |
Publisher_xml | – name: IEEE |
StartPage | 1 |
SubjectTerms | Accuracy; Adaptation models; Affective computing; Brain modeling; Computational modeling; Cross-subject; Domain adaptation; Electroencephalogram (EEG); Electroencephalography; Emotion recognition; Feature extraction; Multi-task learning; Multitasking; Training |
Title | MTADA: A Multi-task Adversarial Domain Adaptation Network for EEG-based Cross-subject Emotion Recognition |
URI | https://ieeexplore.ieee.org/document/11108243 |