Fast-PADMA: Rapidly Adapting Facial Affect Model From Similar Individuals
Published in | IEEE transactions on multimedia, Vol. 20, no. 7, pp. 1901-1915 |
---|---|
Main Authors | Huang, Michael Xuelin; Li, Jiajia; Ngai, Grace; Leong, Hong Va; Hua, Kien A. |
Format | Journal Article |
Language | English |
Published | IEEE, 01.07.2018 |
ISSN | 1520-9210, 1941-0077 |
DOI | 10.1109/TMM.2017.2775206 |
Abstract | A user-specific model generally performs better in facial affect recognition. Existing solutions, however, have usability issues since the annotation can be long and tedious for the end users (e.g., consumers). We address this critical issue by presenting a more user-friendly user-adaptive model to make the personalized approach more practical. This paper proposes a novel user-adaptive model, which we have called fast-Personal Affect Detection with Minimal Annotation (Fast-PADMA). Fast-PADMA integrates data from multiple source subjects with a small amount of data from the target subject. Collecting this target subject data is feasible since fast-PADMA requires only one self-reported affect annotation per facial video segment. To alleviate overfitting in this context of limited individual training data, we propose an efficient bootstrapping technique, which strengthens the contribution of multiple similar source subjects. Specifically, we employ an ensemble classifier to construct pretrained weak generic classifiers from data of multiple source subjects, which is weighted according to the available data from the target user. The result is a model that does not require expensive computation, such as distribution dissimilarity calculation or model retraining. We evaluate our method with in-depth experimental evaluations on five publicly available facial datasets, with results that compare favorably with the state-of-the-art performance on classifying pain, arousal, and valence. Our findings show that fast-PADMA is effective at rapidly constructing a user-adaptive model that outperforms both its generic and user-specific counterparts. This efficient technique has the potential to significantly improve user-adaptive facial affect recognition for personal use and, therefore, enable comprehensive affect-aware applications. |
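The ensemble mechanism sketched in the abstract — weak generic classifiers pretrained on source subjects, then weighted by the target user's few self-annotated segments, with no retraining — can be illustrated with a small sketch. This is a hypothetical reconstruction from the abstract's description, not the authors' implementation; the threshold "classifiers" and the accuracy-based weighting below are stand-ins for the paper's actual models.

```python
# Illustrative sketch of the weighted-ensemble idea behind fast-PADMA
# (hypothetical reconstruction from the abstract, not the authors' code).
# Each weak generic classifier is pretrained on one source subject; the
# target user's few annotated segments only set the weights, so no model
# is retrained.

def weight_classifiers(classifiers, target_X, target_y):
    """Weight each pretrained source-subject classifier by its
    accuracy on the target user's small annotated set."""
    weights = []
    for clf in classifiers:
        correct = sum(1 for x, y in zip(target_X, target_y) if clf(x) == y)
        weights.append(correct / len(target_y))
    total = sum(weights) or 1.0          # avoid division by zero
    return [w / total for w in weights]  # normalize to sum to 1

def predict(classifiers, weights, x):
    """Weighted vote over the weak classifiers' predictions."""
    votes = {}
    for clf, w in zip(classifiers, weights):
        label = clf(x)
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Toy example: three "source subjects", each a threshold rule on a
# 1-D feature (stand-ins for real facial-feature classifiers).
clfs = [lambda x: int(x > 0.3), lambda x: int(x > 0.5), lambda x: int(x > 0.9)]
target_X = [0.2, 0.4, 0.6, 0.8]   # target user's few video segments
target_y = [0, 0, 1, 1]           # one self-report per segment

ws = weight_classifiers(clfs, target_X, target_y)
print(predict(clfs, ws, 0.7))     # → 1
```

The source subject whose rule best matches the target user's self-reports (here, the `x > 0.5` threshold, which scores 4/4) receives the largest weight, which is the "similar individuals" intuition in the paper's title.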
Author | Huang, Michael Xuelin; Li, Jiajia; Ngai, Grace; Leong, Hong Va; Hua, Kien A. |
Author details | 1. Michael Xuelin Huang (ORCID 0000-0001-5695-2869), Max Planck Institute for Informatics, Saarbrucken, Germany (mhuang@mpi-inf.mpg.de); 2. Jiajia Li, Department of Computing, The Hong Kong Polytechnic University, Hong Kong (lijiajia.simg@gmail.com); 3. Grace Ngai (ORCID 0000-0002-2027-168X), Department of Computing, The Hong Kong Polytechnic University, Hong Kong (csgngai@comp.polyu.edu.hk); 4. Hong Va Leong, Department of Computing, The Hong Kong Polytechnic University, Hong Kong (cshleong@comp.polyu.edu.hk); 5. Kien A. Hua, School of Electrical Engineering and Computer Science, University of Central Florida, Orlando, FL, USA (kienhua@cs.ucf.edu) |
CODEN | ITMUF8 |
CitedBy | DOI 10.1109/TAFFC.2020.2973158 |
ContentType | Journal Article |
DOI | 10.1109/TMM.2017.2775206 |
Discipline | Engineering; Computer Science |
EISSN | 1941-0077 |
EndPage | 1915 |
Genre | orig-research |
GrantInformation | The Hong Kong Polytechnic University (grant PolyU 5222/13E); Hong Kong Research Grant Council |
ISSN | 1520-9210 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 7 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
ORCID | 0000-0002-2027-168X 0000-0001-5695-2869 |
PageCount | 15 |
PublicationDate | 2018-07-01 |
PublicationTitle | IEEE transactions on multimedia |
PublicationTitleAbbrev | TMM |
PublicationYear | 2018 |
Publisher | IEEE |
StartPage | 1901 |
SubjectTerms | Adaptation models; Affective computing; Computational modeling; Data models; Face recognition; facial affect; Hidden Markov models; Prototypes; rapid modeling; Training; user-adaptive model |
Title | Fast-PADMA: Rapidly Adapting Facial Affect Model From Similar Individuals |
URI | https://ieeexplore.ieee.org/document/8115237 |
Volume | 20 |