On Input/Output Architectures for Convolutional Neural Network-Based Cross-View Gait Recognition
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 29, No. 9, pp. 2708-2719
Main Authors | Takemura, Noriko; Makihara, Yasushi; Muramatsu, Daigo; Echigo, Tomio; Yagi, Yasushi
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.09.2019
Subjects | Artificial neural networks; Convolution; Convolutional neural network; Cross-view; Displacement; Gait recognition; Matching; Mathematical analysis; Network architecture; Neural networks; Performance evaluation; Probes; Robustness
Abstract | In this paper, we discuss input/output architectures for convolutional neural network (CNN)-based cross-view gait recognition. For this purpose, we consider two aspects: verification versus identification, and the tradeoff between spatial displacements caused by subject difference and by view difference. More specifically, we use a Siamese network with a pair of inputs and contrastive loss for verification, and a triplet network with a triplet of inputs and triplet ranking loss for identification. These CNN architectures are insensitive to spatial displacement, because the difference between a matching pair is calculated at the last layer, after the inputs have passed through the convolution and max pooling layers; hence, they are expected to work relatively well under large view differences. By contrast, under small view differences the spatial displacement caused by subject difference is itself a useful discriminative cue, so we also use CNN architectures in which the difference between a matching pair is calculated at the input level, making them more sensitive to spatial displacement. We conducted experiments on cross-view gait recognition and confirmed that the proposed architectures outperformed state-of-the-art benchmarks in the situations suited to each of them with respect to verification/identification tasks and view differences.
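The two displacement-tolerant setups in the abstract can be illustrated concretely. The sketch below is a minimal PyTorch example, not the authors' actual network: the layer sizes, embedding dimension, loss margins, and the 128x88 GEI-like input shape are assumptions made for illustration. A shared CNN branch feeds either a contrastive loss (verification, Siamese pair) or a triplet ranking loss (identification, input triplet); because the pair difference is taken only on the final embeddings, after convolution and max pooling, the comparison tolerates spatial displacement.

```python
# Hedged sketch: Siamese/contrastive and triplet/ranking setups for
# cross-view gait recognition on GEI-like silhouette inputs.
# All names, layer sizes, and margins are illustrative assumptions,
# not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitCNN(nn.Module):
    """Shared-weight branch; the matching-pair difference is taken only
    on the output embeddings, after convolution and max pooling, which
    is what makes the comparison tolerant to spatial displacement."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 64, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.LazyLinear(embed_dim)  # infers flattened size

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def contrastive_loss(z1, z2, same_subject, margin=1.0):
    """Verification: pull genuine pairs together, push impostor pairs
    apart beyond the margin (same_subject is 1 for genuine pairs, 0 otherwise)."""
    d = F.pairwise_distance(z1, z2)
    return (same_subject * d.pow(2)
            + (1.0 - same_subject) * F.relu(margin - d).pow(2)).mean()

net = GaitCNN()
gei_a = torch.rand(8, 1, 128, 88)    # batch of GEI-like inputs, view A
gei_b = torch.rand(8, 1, 128, 88)    # candidate matches, view B
labels = torch.randint(0, 2, (8,)).float()
loss_verif = contrastive_loss(net(gei_a), net(gei_b), labels)

# Identification: anchor/positive/negative triplets through the same
# shared branch, trained with a triplet ranking loss.
gei_neg = torch.rand(8, 1, 128, 88)  # different-subject samples
loss_ident = nn.TripletMarginLoss(margin=0.2)(net(gei_a), net(gei_b), net(gei_neg))
```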
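By contrast, the displacement-sensitive family computes the matching-pair difference at the input level. Another hedged sketch under the same assumptions (the name InputDiffNet and the two-class same/different head are illustrative, not taken from the paper): the two GEIs are subtracted pixel-wise before any convolution, so spatial misalignment between them directly shapes the learned features.

```python
# Hedged sketch of the displacement-sensitive alternative: the pair
# difference is formed at the input, before any convolution.
# Names and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class InputDiffNet(nn.Module):
    """Pixel-wise input-level difference followed by a CNN and a
    two-class head scoring same-subject vs. different-subject."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 64, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.LazyLinear(2)

    def forward(self, gei_a, gei_b):
        diff = torch.abs(gei_a - gei_b)  # difference taken at the input level
        return self.classifier(self.features(diff).flatten(1))

net = InputDiffNet()
logits = net(torch.rand(4, 1, 128, 88), torch.rand(4, 1, 128, 88))
```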
Author | Takemura, Noriko (takemura@am.sanken.osaka-u.ac.jp); Makihara, Yasushi (makihara@am.sanken.osaka-u.ac.jp); Muramatsu, Daigo (muramatsu@am.sanken.osaka-u.ac.jp); and Yagi, Yasushi (yagi@am.sanken.osaka-u.ac.jp), all with the Mitsubishi Electric Collaborative Research Division for Wide-Area Security Technology, Institute of Scientific and Industrial Research, Osaka University, Osaka, Japan; Echigo, Tomio (echigo@osakac.ac.jp), Department of Engineering Informatics, Osaka Electro-Communication University, Osaka, Japan
CODEN | ITCTEM |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019 |
DOI | 10.1109/TCSVT.2017.2760835 |
Discipline | Engineering |
EISSN | 1558-2205 |
Genre | Original research
Grant Information | Japan Society for the Promotion of Science (grant JP15H01693); Core Research for Evolutional Science and Technology
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0003-1977-4690 |
URI | https://ieeexplore.ieee.org/document/8063344 https://www.proquest.com/docview/2285339086 |