Gait-Based Person Recognition Using Arbitrary View Transformation Model

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 24, No. 1, pp. 140-154
Main Authors: Muramatsu, Daigo; Shiraishi, Akira; Makihara, Yasushi; Uddin, Md Zasim; Yagi, Yasushi
Format: Journal Article
Language: English
Published: United States: IEEE, 01.01.2015
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects: Accuracy; Cameras; Feature extraction; Image sequences; Three-dimensional displays; Training; Visualization
Online Access: Get full text
ISSN: 1057-7149
EISSN: 1941-0042
DOI: 10.1109/TIP.2014.2371335

Abstract Gait recognition is a useful biometric trait for person authentication because it is usable even with low image resolution. One challenge is robustness to a view change (cross-view matching); view transformation models (VTMs) have been proposed to solve this. The VTMs work well if the target views are the same as their discrete training views. However, the gait traits are observed from an arbitrary view in a real situation. Thus, the target views may not coincide with discrete training views, resulting in recognition accuracy degradation. We propose an arbitrary VTM (AVTM) that accurately matches a pair of gait traits from an arbitrary view. To realize an AVTM, we first construct 3D gait volume sequences of training subjects, disjoint from the test subjects in the target scene. We then generate 2D gait silhouette sequences of the training subjects by projecting the 3D gait volume sequences onto the same views as the target views, and train the AVTM with gait features extracted from the 2D sequences. In addition, we extend our AVTM by incorporating a part-dependent view selection scheme (AVTM_PdVS), which divides the gait feature into several parts, and sets part-dependent destination views for transformation. Because appropriate destination views may differ for different body parts, the part-dependent destination view selection can suppress transformation errors, leading to increased recognition accuracy. Experiments using data sets collected in different settings show that the AVTM improves the accuracy of cross-view matching and that the AVTM_PdVS further improves the accuracy in many cases, in particular, verification scenarios.
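
In the gait-recognition literature, a view transformation model of the kind the abstract builds on is typically realized as a matrix factorization over training features collected across views. The sketch below illustrates that idea end to end, including a simple part-dependent destination-view choice in the spirit of AVTM_PdVS. It is a minimal illustration under stated assumptions, not the authors' implementation: the SVD-based factorization, the truncation rank, the random stand-in features, and every function and variable name (train_vtm, transform, select_destination_views, parts) are hypothetical.

    import numpy as np

    def train_vtm(features, rank=8):
        """Fit a simple SVD-based view transformation model.

        features has shape (V, S, d): gait features of S training
        subjects observed from V views (in the AVTM setting these
        views are synthesized by projecting 3D gait volumes onto the
        target views). Returns per-view factors P of shape (V, d, k)
        so that a subject's view-independent vector z satisfies
        x_v ~= P[v] @ z.
        """
        V, S, d = features.shape
        A = features.transpose(0, 2, 1).reshape(V * d, S)  # views stacked row-wise
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        k = min(rank, len(s))                # truncated rank for robustness
        return (U[:, :k] * s[:k]).reshape(V, d, k)

    def transform(x_src, P_src, P_dst):
        """Map a feature from the source view to the destination view:
        recover the shared vector z by least squares, then re-project."""
        z, *_ = np.linalg.lstsq(P_src, x_src, rcond=None)
        return P_dst @ z

    def select_destination_views(features, P, parts, src, candidates):
        """Part-dependent destination-view selection (AVTM_PdVS idea):
        for each body-part slice of the feature, keep the candidate
        destination view whose transformation reproduces the training
        features for that part with the smallest mean error."""
        best = {}
        for name, part in parts.items():
            scores = []
            for v in candidates:
                errs = [np.linalg.norm(transform(f[src], P[src], P[v])[part]
                                       - f[v][part])
                        for f in features.transpose(1, 0, 2)]  # per subject
                scores.append((np.mean(errs), v))
            best[name] = min(scores)[1]
        return best

    # Toy usage with random stand-ins for real gait features.
    rng = np.random.default_rng(0)
    feats = rng.standard_normal((4, 20, 64))  # 4 views, 20 subjects, 64-dim
    P = train_vtm(feats)
    probe = feats[0, 0]                       # subject 0 seen from view 0
    estimate = transform(probe, P[0], P[3])   # predicted appearance at view 3
    parts = {"head": slice(0, 16), "torso": slice(16, 40), "legs": slice(40, 64)}
    dest = select_destination_views(feats, P, parts, src=0, candidates=[1, 2, 3])

In the pipeline the abstract describes, the per-view training features would come from 2D silhouettes rendered by projecting the training subjects' 3D gait volumes onto exactly the probe and gallery views, which is what removes the restriction to discrete training views.
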
Author_xml – sequence: 1
  givenname: Daigo
  surname: Muramatsu
  fullname: Muramatsu, Daigo
  email: muramatsu@am.sanken.osaka-u.ac.jp
  organization: Inst. of the Sci. & Ind. Res., Osaka Univ., Ibaraki, Japan
– sequence: 2
  givenname: Akira
  surname: Shiraishi
  fullname: Shiraishi, Akira
  email: shiraishi@am.sanken.osaka-u.ac.jp
  organization: Inst. of the Sci. & Ind. Res., Osaka Univ., Ibaraki, Japan
– sequence: 3
  givenname: Yasushi
  surname: Makihara
  fullname: Makihara, Yasushi
  email: makihara@am.sanken.osaka-u.ac.jp
  organization: Inst. of the Sci. & Ind. Res., Osaka Univ., Ibaraki, Japan
– sequence: 4
  givenname: Md Zasim
  surname: Uddin
  fullname: Uddin, Md Zasim
  email: zasim@am.sanken.osaka-u.ac.jp
  organization: Inst. of the Sci. & Ind. Res., Osaka Univ., Ibaraki, Japan
– sequence: 5
  givenname: Yasushi
  surname: Yagi
  fullname: Yagi, Yasushi
  email: yagi@am.sanken.osaka-u.ac.jp
  organization: Inst. of the Sci. & Ind. Res., Osaka Univ., Ibaraki, Japan
CODEN IIPRE4
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Jan 2015
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Jan 2015
DOI 10.1109/TIP.2014.2371335
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Xplore
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
Discipline Applied Sciences
Engineering
EISSN 1941-0042
EndPage 154
ExternalDocumentID 3531145381
25423652
10_1109_TIP_2014_2371335
6963466
Genre orig-research
Research Support, Non-U.S. Gov't
Journal Article
GrantInformation_xml – fundername: Grant-in-Aid for Scientific Research (S) through the Japan Society for the Promotion of Science
  grantid: 21220003
– fundername: Research and Development Program for Implementation of Anti-Crime and Anti-Terrorism Technologies for a Safe and Secure Society
– fundername: Funds for Integrated Promotion of Social System Reform and Research and Development through the Ministry of Education, Culture, Sports, Science and Technology
– fundername: Japan Science and Technology Agency CREST Project entitled Behavior Understanding Based on Intention-Gait Model
– fundername: Japanese Government
ISSN 1057-7149
IsPeerReviewed true
IsScholarly true
Issue 1
Keywords Gait recognition
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
PMID 25423652
PQID 1637886514
PQPubID 85429
PageCount 15
PublicationCentury 2000
PublicationDate 2015-Jan.
PublicationDateYYYYMMDD 2015-01-01
PublicationDate_xml – month: 01
  year: 2015
  text: 2015-Jan.
PublicationDecade 2010
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: New York
PublicationTitle IEEE transactions on image processing
PublicationTitleAbbrev TIP
PublicationTitleAlternate IEEE Trans Image Process
PublicationYear 2015
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 140
SubjectTerms Accuracy
Cameras
Feature extraction
Image sequences
Three-dimensional displays
Training
Visualization
Title Gait-Based Person Recognition Using Arbitrary View Transformation Model
URI https://ieeexplore.ieee.org/document/6963466
https://www.ncbi.nlm.nih.gov/pubmed/25423652
https://www.proquest.com/docview/1637886514
https://www.proquest.com/docview/1657322680
Volume 24