Adaptive information fusion network for multi‐modal personality recognition
Published in | Computer animation and virtual worlds, Vol. 35, No. 3
Main Authors | Bao, Yongtang; Liu, Xiang; Qi, Yue; Liu, Ruijun; Li, Haojie
Format | Journal Article
Language | English
Published | Chichester: Wiley Subscription Services, Inc, 01.05.2024
Subjects | adaptation; Audio data; Data integration; encoder; Heterogeneity; multi‐modal data; Personality; personality recognition; Recognition
Abstract | Personality recognition is of great significance in deepening the understanding of social relations. While personality recognition methods have made significant strides in recent years, the challenge of heterogeneity between modalities during feature fusion remains unsolved. This paper introduces an adaptive multi‐modal information fusion network (AMIF‐Net) capable of concurrently processing video, audio, and text data. First, utilizing the AMIF‐Net encoder, we process the extracted audio and video features separately, effectively capturing long‐term data relationships. Then, adaptive elements added to the fusion network alleviate the heterogeneity between modalities. Lastly, we feed the concatenated audio‐video and text features into a regression network to obtain Big Five personality trait scores. Furthermore, we introduce a novel loss function, which exhibits a peak at the critical mean, to address training inaccuracies. Our tests on the ChaLearn First Impressions V2 multi‐modal dataset show that it partially surpasses state‐of‐the‐art networks.
This paper proposes an adaptive multi‐modal information fusion network for personality recognition. The features produced by each encoder are optimized and merged for the downstream task. We enhance the Transformer component by integrating adaptive attention and automatic learning of cross‐modal associations, which not only mitigates outliers and vanishing gradients during training but also has practical significance for real‐world applications.
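The record carries only the abstract, not the authors' implementation. As a rough illustration of the pipeline the abstract describes, the sketch below (PyTorch; all module names, dimensions, and the gating mechanism are assumptions, not details from the paper) encodes audio and video sequences with Transformer encoders, fuses them with an adaptive, input-dependent gate, concatenates the result with a text feature, and regresses the five trait scores.

```python
# Minimal sketch (not the authors' AMIF-Net code): adaptive gated fusion of
# audio and video features, concatenated with text features, feeding a
# regressor that outputs five trait scores. Names and sizes are illustrative.
import torch
import torch.nn as nn


class AdaptiveFusion(nn.Module):
    """Fuse two modality embeddings with a learned, input-dependent gate."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([audio, video], dim=-1))  # per-feature weight in [0, 1]
        return g * audio + (1.0 - g) * video              # adaptive weighted sum


class PersonalityRegressor(nn.Module):
    """Audio-video fusion + text concatenation + Big Five regression head."""

    def __init__(self, dim: int = 256, text_dim: int = 256):
        super().__init__()
        # Transformer encoders capture long-range temporal structure per modality.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.audio_enc = nn.TransformerEncoder(layer, num_layers=2)
        self.video_enc = nn.TransformerEncoder(layer, num_layers=2)
        self.fusion = AdaptiveFusion(dim)
        self.head = nn.Sequential(
            nn.Linear(dim + text_dim, dim), nn.ReLU(),
            nn.Linear(dim, 5), nn.Sigmoid(),  # five trait scores in [0, 1]
        )

    def forward(self, audio_seq, video_seq, text_feat):
        # Mean-pool the encoded sequences into one vector per modality.
        a = self.audio_enc(audio_seq).mean(dim=1)
        v = self.video_enc(video_seq).mean(dim=1)
        av = self.fusion(a, v)
        return self.head(torch.cat([av, text_feat], dim=-1))


if __name__ == "__main__":
    model = PersonalityRegressor()
    scores = model(torch.randn(2, 50, 256),   # audio frame features
                   torch.randn(2, 50, 256),   # video frame features
                   torch.randn(2, 256))       # pooled text embedding
    print(scores.shape)  # torch.Size([2, 5])
```

The sigmoid gate is only one plausible reading of the "adaptive elements" mentioned in the abstract; the paper's actual fusion design and its mean-peaked loss function are not reproduced here.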
Author | Bao, Yongtang (Shandong University of Science and Technology; ORCID 0000-0002-1010-7229); Liu, Xiang (Shandong University of Science and Technology; ORCID 0009-0004-8850-3864); Qi, Yue (Beihang University Qingdao Research Institute); Liu, Ruijun (Beihang University; liuruijun@buaa.edu.cn); Li, Haojie (Shandong University of Science and Technology; hjli@sdust.edu.cn)
CitedBy_id | 10.1007/s00371-025-03841-9; 10.1007/s00371-025-03840-w
ContentType | Journal Article |
Copyright | 2024 John Wiley & Sons Ltd. 2024 John Wiley & Sons, Ltd. |
DOI | 10.1002/cav.2268 |
Discipline | Visual Arts |
EISSN | 1546-427X |
Genre | article |
GrantInformation | National Natural Science Foundation of China (62072020); Beijing Natural Science Foundation (L222052); Shandong University of Science and Technology (BJ20231201); National Science and Technology Major Project (2022ZD0119502); Taishan Scholar Program of Shandong Province (tstp20221128)
ISSN | 1546-4261 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
Language | English |
ORCID | 0009-0004-8850-3864 0000-0002-1010-7229 |
PageCount | 14 |
PublicationCentury | 2000 |
PublicationDate | May/June 2024 |
PublicationDateYYYYMMDD | 2024-05-01 |
PublicationDecade | 2020 |
PublicationPlace | Chichester |
PublicationTitle | Computer animation and virtual worlds |
PublicationYear | 2024 |
Publisher | Wiley Subscription Services, Inc |
SubjectTerms | adaptation Audio data Data integration encoder Heterogeneity multi‐modal data Personality personality recognition Recognition |
Title | Adaptive information fusion network for multi‐modal personality recognition |
URI | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcav.2268 https://www.proquest.com/docview/3071607983 |
Volume | 35 |