Benchmarking Micro-Action Recognition: Dataset, Methods, and Applications
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 34, No. 7, pp. 6238–6252 |
Main Authors | Guo, Dan; Li, Kun; Hu, Bin; Zhang, Yan; Wang, Meng |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.07.2024 |
ISSN | 1051-8215 (print); 1558-2205 (electronic) |
DOI | 10.1109/TCSVT.2024.3358415 |
Abstract | Micro-action is an imperceptible non-verbal behaviour characterised by low-intensity movement. It offers insights into the feelings and intentions of individuals and is important for human-oriented applications such as emotion recognition and psychological assessment. However, identifying, differentiating, and understanding micro-actions is challenging because these subtle human behaviours are imperceptible and difficult to access in everyday life. In this study, we collect a new micro-action dataset designated Micro-action-52 (MA-52) and propose a benchmark named the micro-action network (MANet) for the micro-action recognition (MAR) task. Uniquely, MA-52 provides a whole-body perspective, including gestures and upper- and lower-limb movements, aiming to reveal comprehensive micro-action cues. In detail, MA-52 contains 52 micro-action categories along with seven body-part labels and encompasses a full array of realistic and natural micro-actions, comprising 205 participants and 22,422 video instances collected from psychological interviews. Based on the proposed dataset, we assess MANet and nine other prevalent action recognition methods. MANet incorporates squeeze-and-excitation (SE) and temporal shift module (TSM) components into the ResNet architecture to model the spatiotemporal characteristics of micro-actions. A joint-embedding loss is then designed for semantic matching between video and action labels; it helps distinguish visually similar yet distinct micro-action categories. An extended application to emotion recognition demonstrates one of the important values of the proposed dataset and method. In future work, human behaviour, emotion, and psychological assessment will be explored in greater depth. The dataset and source code are released at https://github.com/VUT-HFUT/Micro-Action. |
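The abstract names two model ingredients: a ResNet backbone augmented with temporal shift (TSM) and squeeze-and-excitation (SE) for spatiotemporal modelling, and a joint-embedding loss that matches video features against action-label embeddings. The following is a minimal sketch of both ideas, assuming PyTorch; it is not the authors' released implementation (see the GitHub link above), and names such as `ShiftSEResBlock` and `joint_embedding_loss` are illustrative.

```python
# Minimal sketch, assuming PyTorch, of the two ideas named in the abstract:
# (1) a ResNet-style residual block with a temporal shift (TSM) stage and a
#     squeeze-and-excitation (SE) stage, and (2) a joint video/label embedding
#     loss. Not the authors' released code; names here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


def temporal_shift(x, n_segments, fold_div=8):
    """Shift a fraction of channels along the time axis (TSM-style).

    x: (N*T, C, H, W) frame features of N clips with T sampled frames each.
    """
    nt, c, h, w = x.shape
    n = nt // n_segments
    x = x.view(n, n_segments, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                   # shift one chunk of channels backward in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]   # shift another chunk forward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # leave the remaining channels untouched
    return out.view(nt, c, h, w)


class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels with a learned global gate."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        s = x.mean(dim=(2, 3))                              # squeeze: global average pooling -> (N, C)
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))    # excitation: per-channel gates in [0, 1]
        return x * s[:, :, None, None]


class ShiftSEResBlock(nn.Module):
    """Residual block with TSM + SE, in the spirit of a TSM/SE-augmented ResNet."""

    def __init__(self, channels, n_segments=8):
        super().__init__()
        self.n_segments = n_segments
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.se = SEBlock(channels)

    def forward(self, x):                                   # x: (N*T, C, H, W)
        out = temporal_shift(x, self.n_segments)
        out = F.relu(self.bn1(self.conv1(out)))
        out = self.se(self.bn2(self.conv2(out)))
        return F.relu(out + x)                              # residual connection


def joint_embedding_loss(video_emb, label_emb, labels, temperature=0.07):
    """Match clip embeddings to action-label embeddings (hypothetical form).

    video_emb: (B, D) clip features; label_emb: (num_classes, D) label
    embeddings; labels: (B,) ground-truth class indices.
    """
    video_emb = F.normalize(video_emb, dim=-1)
    label_emb = F.normalize(label_emb, dim=-1)
    logits = video_emb @ label_emb.t() / temperature        # temperature-scaled cosine similarities
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    feats = torch.randn(2 * 8, 64, 14, 14)                  # 2 clips x 8 frames, 64-channel feature maps
    block = ShiftSEResBlock(64, n_segments=8)
    print(block(feats).shape)                               # torch.Size([16, 64, 14, 14])
```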
Author | Li, Kun; Zhang, Yan; Wang, Meng; Hu, Bin; Guo, Dan |
Author_xml | – sequence: 1; name: Guo, Dan; ORCID: 0000-0003-2594-254X; email: guodan@hfut.edu.cn; organization: Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, and the School of Computer Science and Information Engineering, Hefei University of Technology (HFUT), Hefei, China
– sequence: 2; name: Li, Kun; ORCID: 0000-0001-5083-2145; email: kunli.hfut@gmail.com; organization: School of Computer Science and Information Engineering, Hefei University of Technology (HFUT), Hefei, China
– sequence: 3; name: Hu, Bin; ORCID: 0000-0003-3514-5413; email: bh@lzu.edu.cn; organization: Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
– sequence: 4; name: Zhang, Yan; email: yanzhang.hfut@gmail.com; organization: School of Computer Science and Information Engineering, Hefei University of Technology (HFUT), Hefei, China
– sequence: 5; name: Wang, Meng; ORCID: 0000-0002-3094-7735; email: eric.mengwang@gmail.com; organization: Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, and the School of Computer Science and Information Engineering, Hefei University of Technology (HFUT), Hefei, China |
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
DOI | 10.1109/TCSVT.2024.3358415 |
Discipline | Engineering Psychology |
EISSN | 1558-2205 |
EndPage | 6252 |
Genre | orig-research |
GrantInformation_xml | – fundername: National Natural Science Foundation of China; grantid: 62272144, 62020106007, 72188101, U20A20183; funderid: 10.13039/501100001809
– fundername: National Key Research and Development Program of China; grantid: 2022YFB4500600; funderid: 10.13039/501100012166
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 7 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html; https://doi.org/10.15223/policy-029; https://doi.org/10.15223/policy-037
ORCID | 0000-0001-5083-2145; 0000-0002-3094-7735; 0000-0003-3514-5413; 0000-0003-2594-254X
PQID | 3075426702 |
PQPubID | 85433 |
PageCount | 15 |
PublicationCentury | 2000 |
PublicationDate | 2024-07-01 |
PublicationDateYYYYMMDD | 2024-07-01 |
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2024 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 6238 |
SubjectTerms | action analysis; action recognition; Activity recognition; body language; Body parts; Datasets; Emotion recognition; Emotions; Foot; Human behavior; human behavioral analysis; Interviews; Labels; Legged locomotion; Mars; Micro-action; Psychological assessment; Psychology; Semantics; Source code; Task analysis
Title | Benchmarking Micro-Action Recognition: Dataset, Methods, and Applications |
URI | https://ieeexplore.ieee.org/document/10414076 https://www.proquest.com/docview/3075426702 |
Volume | 34 |