ViT-PGC: vision transformer for pedestrian gender classification on small-size dataset

Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), Vol. 26, No. 4, pp. 1805-1819
Main Authors: Farhat Abbas, Mussarat Yasmin, Muhammad Fayyaz, Usman Asim
Format: Journal Article
Language: English
Published: London: Springer London / Springer Nature B.V., 01.11.2023
Subjects: Artificial neural networks; Classification; Computer Science; Datasets; Image analysis; Image retrieval; Modules; Object recognition; Pattern Recognition; Short Paper; Vision
Online Access: https://link.springer.com/article/10.1007/s10044-023-01196-2
ISSN: 1433-7541
EISSN: 1433-755X
DOI: 10.1007/s10044-023-01196-2

Abstract: Pedestrian gender classification (PGC) is a key task in full-body-based pedestrian image analysis and has become important in applications such as content-based image retrieval, visual surveillance, smart cities, and demographic collection. Over the last decade, convolutional neural networks (CNNs) have emerged as powerful and reliable choices for vision tasks such as object classification, recognition, and detection. However, CNNs have a limited local receptive field that prevents them from learning global context. In contrast, a vision transformer (ViT) is an attractive alternative because it uses a self-attention mechanism to attend to the different patches of an input image. In this work, a ViT model equipped with two generic and effective modules, locality self-attention (LSA) and shifted patch tokenization (SPT), is explored for the PGC task. With these modules, the ViT successfully learns from scratch even on small-size (SS) datasets and overcomes the lack of locality inductive bias. Through extensive experimentation, we found that the proposed ViT model produced better overall and mean accuracies, confirming that it outperforms state-of-the-art (SOTA) PGC methods.
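The SPT module named in the abstract follows Lee et al. (2021), "Vision transformer for small-size datasets" (arXiv:2112.13492), which the article cites. The sketch below is a minimal PyTorch rendering of that idea, not the authors' exact implementation; the class name, embedding dimension, and patch size are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftedPatchTokenization(nn.Module):
    """Sketch of SPT: concatenate the image with four diagonally
    half-patch-shifted copies before patch embedding, so every token
    sees a wider spatial neighbourhood than in plain ViT tokenization.
    Hyperparameters here are illustrative, not the paper's settings."""

    def __init__(self, in_ch: int = 3, dim: int = 192, patch: int = 16):
        super().__init__()
        self.half = patch // 2
        # 5 stacked views (original + 4 shifts) share one patch projection.
        self.proj = nn.Conv2d(in_ch * 5, dim, kernel_size=patch, stride=patch)

    def _shift(self, x: torch.Tensor, dx: int, dy: int) -> torch.Tensor:
        # Zero-pad by half a patch, then crop back: a shift without wrap-around.
        h = self.half
        x = F.pad(x, (h, h, h, h))
        _, _, H, W = x.shape
        return x[:, :, h + dy:H - h + dy, h + dx:W - h + dx]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.half
        views = [x] + [self._shift(x, dx, dy)
                       for dx, dy in ((-h, -h), (h, -h), (-h, h), (h, h))]
        x = torch.cat(views, dim=1)           # (B, 5*C, H, W)
        x = self.proj(x)                      # (B, dim, H/patch, W/patch)
        return x.flatten(2).transpose(1, 2)   # (B, num_patches, dim) tokens
```

On a 224x224 input this yields a (B, 196, 192) token sequence, mirroring standard ViT tokenization but with a larger per-token receptive field.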
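Likewise, the LSA module from the same cited work replaces the fixed softmax scale with a learnable temperature and masks out each token's attention to itself. Below is a minimal sketch under the same illustrative assumptions (the dimension and head count are not taken from the paper):

```python
import torch
import torch.nn as nn

class LocalitySelfAttention(nn.Module):
    """Sketch of LSA: multi-head self-attention with (i) a learnable
    softmax temperature in place of the fixed 1/sqrt(d) scale and
    (ii) the diagonal of the attention logits masked, forcing each
    token to attend to other tokens and sharpening the distribution."""

    def __init__(self, dim: int = 192, heads: int = 3):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.out = nn.Linear(dim, dim)
        # Learnable temperature, initialised to the usual 1/sqrt(head_dim).
        self.temperature = nn.Parameter(torch.tensor((dim // heads) ** -0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.heads, D // self.heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)        # each: (B, heads, N, d)
        logits = (q @ k.transpose(-2, -1)) * self.temperature
        diag = torch.eye(N, dtype=torch.bool, device=x.device)
        logits = logits.masked_fill(diag, float("-inf"))  # no self-attention
        attn = logits.softmax(dim=-1)
        x = (attn @ v).transpose(1, 2).reshape(B, N, D)
        return self.out(x)
```

Dropping the temperature and the diagonal mask recovers standard ViT attention; together with SPT, these changes supply the locality inductive bias that the abstract says plain ViT lacks on small datasets.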
Author Details:
1. Farhat Abbas, Department of Computer Science, COMSATS University Islamabad (ORCID: 0000-0001-6182-3244; email: farhatabbas421@gmail.com)
2. Mussarat Yasmin, Department of Computer Science, COMSATS University Islamabad
3. Muhammad Fayyaz, Department of Computer Science, FAST-National University of Computer and Emerging Sciences (NUCES)
4. Usman Asim, DeltaX
BookMark eNp9kE1LAzEQQINUsK3-AU8LnqP53M16k6JVKOihFm8hzWZLyjZbM1tBf71pVxQ8FIaZEOZlJm-EBqENDqFLSq4pIcUNpCwEJoxjQmmZY3aChlRwjgsp3wa_Z0HP0AhgTQjnnKkhWiz8HL9MJ7fZhwffhqyLJkDdxo2LWSrZ1lUOuuhNyFYuVOnWNgbA196abg-kgI1pGgz-y2WV6Qy47hyd1qYBd_FTx-j14X4-ecSz5-nT5G6GLadlh2WhBBOKSVVVjohKKWVr5WReW1s5JurcSEaYkpK4MpekWMrcLJdKlso6mks-Rlf9u9vYvu_Sonrd7mJIIzVTJeOkoKVKXazvsrEFiK7W2-g3Jn5qSvTen-796eRPH_xpliD1D7K-O3w5KfLNcZT3KKQ5YeXi31ZHqG_1uoZn
CitedBy_id crossref_primary_10_3390_s25061736
crossref_primary_10_1007_s10044_024_01406_5
crossref_primary_10_3390_fractalfract8100551
Cites_doi 10.1145/2733373.2806332
10.1049/cp.2017.0102
10.1016/j.comcom.2021.09.001
10.1117/12.2280487
10.1007/s10044-018-0688-1
10.1007/s11760-022-02217-z
10.1007/978-3-319-57421-9_17
10.1007/s10044-018-0725-0
10.1016/j.patrec.2017.07.007
10.1109/ICCV48922.2021.01007
10.1117/12.2077079
10.1007/s00521-020-05071-7
10.1007/s42452-021-04881-1
10.1007/s12652-019-01267-5
10.48161/qaj.v1n2a63
10.1145/2647868.2654966
10.1109/TIP.2019.2891888
10.1109/ACCESS.2018.2889797
10.1109/CVPR.2016.90
10.1186/s41074-018-0048-5
10.1109/ICCV.2019.00338
10.1038/s41598-021-95218-w
10.1109/ICCV.2015.384
10.1109/ICB45273.2019.8987245
10.1007/978-3-030-87237-3_5
10.1088/1742-6596/1813/1/012051
10.1109/CVPR.2017.360
10.32010/26166127.2021.4.1.60.90
10.1002/ima.22812
10.1007/s00371-020-01814-8
10.1109/ISPACS.2017.8265639
10.1007/s00521-020-05015-1
10.1109/ICCV48922.2021.01172
10.1016/j.future.2018.05.002
10.1016/j.apenergy.2021.117912
10.1016/j.patcog.2017.06.011
10.1007/s00521-018-3754-0
10.1016/j.eswa.2021.116288
10.1109/CVPR46437.2021.01625
10.1609/aaai.v36i2.20103
10.1016/j.patrec.2018.01.010
10.1007/978-3-642-12297-2_23
10.1016/j.asoc.2018.05.012
10.1109/ICCASIT50869.2020.9368658
10.1109/ICCVW.2009.5457467
10.1016/j.irbm.2019.10.006
10.1007/978-3-642-39065-4_67
10.1109/AVSS.2017.8078525
10.1007/s12652-020-02750-0
10.1016/j.compag.2018.10.013
10.1609/aaai.v31i1.11231
10.1007/s00521-021-06652-w
10.1038/s41598-020-59108-x
10.1080/0952813X.2019.1572657
10.3390/math9192499
10.1504/IJBM.2016.082604
10.1109/CVPR.2015.7298594
10.1109/AIMS.2013.13
10.1007/s11042-018-7031-0
10.1109/TPAMI.2017.2669035
10.1016/j.jfranklin.2017.09.003
10.1109/JIOT.2020.3021763
10.1109/ICCV48922.2021.00986
10.1109/ICCV48922.2021.00060
10.1109/ICCV48922.2021.00009
10.1016/j.measurement.2019.01.041
10.1109/CAC.2018.8623118
10.1145/1459359.1459470
10.1007/s10044-015-0499-6
10.1109/ICCV.2017.97
ContentType Journal Article
Copyright: The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Discipline: Applied Sciences; Computer Science
Keywords: Vision transformer; Pedestrian gender classification; Deep CNN models; LSA and SPT; SS datasets