Pairwise Two-Stream ConvNets for Cross-Domain Action Recognition With Small Data

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 33, No. 3, pp. 1147–1161
Main Authors: Gao, Zan; Guo, Leming; Ren, Tongwei; Liu, An-An; Cheng, Zhi-Yong; Chen, Shengyong
Format: Journal Article
Language: English
Published: United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.03.2022

Abstract: In this work, we target cross-domain action recognition (CDAR) in the video domain and propose a novel end-to-end pairwise two-stream ConvNets (PTC) algorithm for real-life conditions, in which only a few labeled samples are available. To cope with the limited training sample problem, we employ a pairwise network architecture that can leverage training samples from a source domain and, thus, requires only a few labeled samples per category from the target domain. In particular, a frame self-attention mechanism and an adaptive weight scheme are embedded into the PTC network to adaptively combine the RGB and flow features. This design can effectively learn domain-invariant features for both the source and target domains. In addition, we propose a sphere boundary sample-selecting scheme that selects the training samples at the boundary of a class (in the feature space) to train the PTC model. In this way, a well-enhanced generalization capability can be achieved. To validate the effectiveness of our PTC model, we construct two CDAR data sets (SDAI Action I and SDAI Action II) that include indoor and outdoor environments; all actions and samples in these data sets were carefully collected from public action data sets. To the best of our knowledge, these are the first data sets specifically designed for the CDAR task. Extensive experiments were conducted on these two data sets. The results show that PTC outperforms state-of-the-art video action recognition methods in terms of both accuracy and training efficiency. It is noteworthy that when only two labeled training samples per category are used in the SDAI Action I data set, PTC achieves 21.9% and 6.8% improvements in accuracy over the two-stream and temporal segment network models, respectively. As an added contribution, the SDAI Action I and SDAI Action II data sets will be released to facilitate future research on the CDAR task.
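
The fusion step described in the abstract can be pictured in code. The following PyTorch-style module is a hypothetical reconstruction from the abstract alone, not the authors' released implementation: the dot-product frame scoring, the softmax gate over the two streams, and all names and dimensions are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameSelfAttention(nn.Module):
    # Weigh per-frame features by learned scores: one plausible reading
    # of the paper's "frame self-attention mechanism" (assumption).
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):  # x: (batch, frames, dim)
        attn = F.softmax(self.score(x), dim=1)  # attention over frames
        return (attn * x).sum(dim=1)            # (batch, dim)

class AdaptiveTwoStreamFusion(nn.Module):
    # Combine RGB and optical-flow clip features with input-dependent
    # weights: a hypothetical form of the "adaptive weight scheme".
    def __init__(self, dim):
        super().__init__()
        self.rgb_attn = FrameSelfAttention(dim)
        self.flow_attn = FrameSelfAttention(dim)
        self.gate = nn.Linear(2 * dim, 2)  # one mixing logit per stream

    def forward(self, rgb_feats, flow_feats):  # each: (batch, frames, dim)
        rgb = self.rgb_attn(rgb_feats)
        flow = self.flow_attn(flow_feats)
        w = F.softmax(self.gate(torch.cat([rgb, flow], dim=-1)), dim=-1)
        return w[:, :1] * rgb + w[:, 1:] * flow  # fused (batch, dim)

In a pairwise (Siamese) arrangement, two such branches with shared weights would process a source-domain clip and a target-domain clip, and training would pull matched pairs together in the fused feature space.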
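The sphere boundary sample-selecting scheme can likewise be sketched: for each class, keep the samples whose features lie farthest from the class centroid, i.e. near the boundary of the class in feature space. This is a minimal sketch assuming Euclidean distance and a fixed selection ratio; the paper's exact rule may differ.

import numpy as np

def select_boundary_samples(features, labels, keep_ratio=0.3):
    # features: (n_samples, dim) array; labels: (n_samples,) int array.
    # Returns indices of the samples nearest each class's sphere boundary.
    selected = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        centroid = features[idx].mean(axis=0)             # class center
        dist = np.linalg.norm(features[idx] - centroid, axis=1)
        k = max(1, int(round(keep_ratio * len(idx))))     # how many to keep
        selected.extend(idx[np.argsort(dist)[-k:]].tolist())  # farthest k
    return np.array(sorted(selected))

Training the PTC model on only these boundary samples is what the abstract credits for the enhanced generalization capability.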
Authors:
– Gao, Zan (ORCID: 0000-0003-2182-5741); Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China
– Guo, Leming (ORCID: 0000-0001-7569-6928; email: pwallguo@163.com); Key Laboratory of Computer Vision and System, Ministry of Education, Tianjin, China
– Ren, Tongwei (ORCID: 0000-0003-3092-424X); State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
– Liu, An-An (ORCID: 0000-0001-5755-9145); School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– Cheng, Zhi-Yong (ORCID: 0000-0003-1109-5028; email: jason.zy.cheng@gmail.com); Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China
– Chen, Shengyong (ORCID: 0000-0002-6705-3831); Key Laboratory of Computer Vision and System, Ministry of Education, Tianjin, China
CODEN: ITNNAL
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI: 10.1109/TNNLS.2020.3041018
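As a usage note, the DOI above resolves to citation metadata through standard DOI content negotiation; the snippet below (using the third-party requests library, not anything from this record) fetches a BibTeX entry from the public doi.org resolver.

import requests

DOI = "10.1109/TNNLS.2020.3041018"

# doi.org honors content negotiation: requesting BibTeX returns a
# ready-to-paste citation entry for this record.
resp = requests.get(
    f"https://doi.org/{DOI}",
    headers={"Accept": "application/x-bibtex"},
    timeout=10,
)
resp.raise_for_status()
print(resp.text)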
Discipline: Computer Science
EISSN: 2162-2388
Genre: Original research; Journal Article
Funding:
– National Key Research and Development Program of China (Grant 2019YFBB1404700)
– National Natural Science Foundation of China (Grants 61872270, 62020106004, 61572357)
– Young Creative Team in Universities of Shandong Province (Grant 2020KJN012)
– Tianjin New Generation Artificial Intelligence Major Program (Grants 18ZXZNGX00150, 19ZXZNGX00110)
– Jinan 20 Projects in Universities (Grant 2018GXRC014)
ISSN: 2162-237X
PMID: 33296313
Subjects: Action recognition; Activity recognition; adaptive weight; Algorithms; Computer architecture; cross-domain learning; Data models; Datasets; Domains; Feature extraction; Indoor environments; Kernel; Learning systems; pairwise two-stream ConvNets; small data; Surveillance; Target recognition; Task analysis; Training
Online Access:
https://ieeexplore.ieee.org/document/9288873
https://www.ncbi.nlm.nih.gov/pubmed/33296313
https://www.proquest.com/docview/2635058754
https://search.proquest.com/docview/2469076625