Recognizing Emotions evoked by Movies using Multitask Learning
Published in | International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII), pp. 1-8 |
Main Authors | Hassan Hayat, Carles Ventura, Agata Lapedriza |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 28.09.2021 |
Online Access | https://ieeexplore.ieee.org/document/9597464 |
ISSN | 2156-8111 |
DOI | 10.1109/ACII52823.2021.9597464 |
Abstract | Understanding the emotional impact of movies has become important for affective movie analysis, ranking, and indexing. Methods for recognizing evoked emotions are usually trained on human annotated data. Concretely, viewers watch video clips and have to manually annotate the emotions they experienced while watching the videos. Then, the common practice is to aggregate the different annotations, by computing average scores or majority voting, and train and test models on these aggregated annotations. With this procedure a single aggregated evoked emotion annotation is obtained per each video. However, emotions experienced while watching a video are subjective: different individuals might experience different emotions. In this paper, we model the emotions evoked by videos in a different manner: instead of modeling the aggregated value we jointly model the emotions experienced by each viewer and the aggregated value using a multi-task learning approach. Concretely, we propose two deep learning architectures: a Single-Task (ST) architecture and a Multi-Task (MT) architecture. Our results show that the MT approach can more accurately model each viewer and the aggregated annotation when compared to methods that are directly trained on the aggregated annotations. Furthermore, our approach outperforms the current state-of-the-art results on the COGNIMUSE benchmark. |
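The abstract describes the common aggregation practice that the paper argues against: collapsing all viewer annotations into one value per video by averaging continuous scores or majority-voting discrete labels. A minimal sketch of those two aggregation rules, with illustrative viewer data and function names that are not taken from the paper:

```python
from collections import Counter
from statistics import mean

def aggregate_scores(viewer_scores):
    """Average the continuous emotion scores given by different viewers."""
    return mean(viewer_scores)

def majority_vote(viewer_labels):
    """Pick the most frequent discrete emotion label across viewers."""
    return Counter(viewer_labels).most_common(1)[0][0]

# Three hypothetical viewers annotating the same clip:
valence_scores = [0.2, 0.6, 0.4]            # continuous scores
emotion_labels = ["joy", "joy", "sadness"]  # discrete labels

print(aggregate_scores(valence_scores))
print(majority_vote(emotion_labels))  # prints "joy"
```

Either rule yields a single annotation per video, which is precisely the loss of per-viewer subjectivity the paper's multi-task approach is designed to avoid.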
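In the multi-task setup the abstract describes, one model jointly predicts the emotion of each individual viewer and the aggregated annotation. A common way to realize this is a shared representation feeding one output head per viewer plus one head for the aggregate; the sketch below uses that pattern with plain linear heads and hypothetical names, as the paper's actual ST/MT deep architectures are not reproduced here:

```python
def dot(a, b):
    """Inner product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

class MultiTaskHeads:
    """Shared features feed one linear head per viewer and one aggregate head."""

    def __init__(self, viewer_weights, aggregate_weights):
        self.viewer_weights = viewer_weights        # one weight vector per viewer
        self.aggregate_weights = aggregate_weights  # head for the aggregated label

    def forward(self, shared_features):
        # One emotion prediction per individual viewer...
        viewer_preds = [dot(w, shared_features) for w in self.viewer_weights]
        # ...plus a joint prediction of the aggregated annotation.
        aggregate_pred = dot(self.aggregate_weights, shared_features)
        return viewer_preds, aggregate_pred

# Toy example: 2 viewers, 3 shared features per clip.
model = MultiTaskHeads(viewer_weights=[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                       aggregate_weights=[0.5, 0.5, 0.0])
viewer_preds, aggregate_pred = model.forward([0.2, 0.6, 0.1])
print(viewer_preds, aggregate_pred)  # per-viewer predictions and the aggregate
```

Because all heads share the same features, supervision from every viewer shapes the shared representation, which is the mechanism by which the MT model can outperform training on the aggregated annotation alone.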
Author | Hassan Hayat (hhassan0@uoc.edu), Carles Ventura (cventuraroy@uoc.edu), Agata Lapedriza (alapedriza@uoc.edu), all with Universitat Oberta de Catalunya, Estudis d'Informatica, Multimedia i Telecomunicacio / eHealth Center, Barcelona, Spain |
EISBN | 1665400196 9781665400190 |
EISSN | 2156-8111 |
PublicationTitleAbbrev | ACII |
SubjectTerms | Affective computing; Annotations; Computational modeling; Computer architecture; Deep learning; Emotion recognition; Evoked emotion recognition; Motion pictures; Multi-modality; Multi-task Learning; Visualization |
URI | https://ieeexplore.ieee.org/document/9597464 |