Towards High Performance Low Complexity Calibration in Appearance Based Gaze Estimation

Bibliographic Details
Published in IEEE transactions on pattern analysis and machine intelligence, Vol. 45, no. 1, pp. 1174-1188
Main Authors Chen, Zhaokang, Shi, Bertram E.
Format Journal Article
Language English
Published United States IEEE 01.01.2023
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Abstract Appearance-based gaze estimation from RGB images provides relatively unconstrained gaze tracking from commonly available hardware. The accuracy of subject-independent models is limited partly by small intra-subject and large inter-subject variations in appearance, and partly by a latent subject-dependent bias. To improve estimation accuracy, we have previously proposed a gaze decomposition method that decomposes the gaze angle into the sum of a subject-independent gaze estimate from the image and a subject-dependent bias. Estimating the bias from images outperforms previously proposed calibration algorithms, unless the amount of calibration data is prohibitively large. This paper extends that work with a more complete characterization of the interplay between the complexity of the calibration dataset and estimation accuracy. In particular, we analyze the effect of the number of gaze targets, the number of images used per gaze target and the number of head positions in calibration data using a new NISLGaze dataset, which is well suited for analyzing these effects as it includes more diversity in head positions and orientations for each subject than other datasets. A better understanding of these factors enables low complexity high performance calibration. Our results indicate that using only a single gaze target and single head position is sufficient to achieve high quality calibration. However, it is useful to include variability in head orientation as the subject is gazing at the target. Our proposed estimator based on these studies (GEDDNet) outperforms state-of-the-art methods by more than 6.3%. One of the surprising findings of our work is that the same estimator yields the best performance both with and without calibration. This is convenient, as the estimator works well "straight out of the box," but can be improved if needed by calibration. However, this seems to violate the conventional wisdom that train and test conditions must be matched. To better understand the reasons, we provide a new theoretical analysis that specifies the conditions under which this can be expected. The dataset is available at http://nislgaze.ust.hk. Source code is available at https://github.com/HKUST-NISL/GEDDnet.
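Note: the calibration idea described in the abstract reduces to estimating a single additive, subject-dependent offset on top of a subject-independent prediction. The minimal NumPy sketch below illustrates only that decomposition; it is not the authors' GEDDNet implementation (see the linked repository for that), and estimate_gaze, calib_images, and target_angle are hypothetical placeholders for a pretrained subject-independent predictor, frames captured while the subject fixates one known target, and that target's ground-truth (yaw, pitch) angles.

import numpy as np

def calibrate_bias(estimate_gaze, calib_images, target_angle):
    # Subject-independent predictions for frames of one known gaze target.
    preds = np.array([estimate_gaze(img) for img in calib_images])
    # The per-subject bias is the mean gap between the known target angle
    # and the raw estimates; with a single target this is just an average
    # over the calibration frames.
    return np.asarray(target_angle, dtype=float) - preds.mean(axis=0)

def calibrated_gaze(estimate_gaze, image, bias):
    # Gaze decomposition: final angle = image-based estimate + subject bias.
    return np.asarray(estimate_gaze(image), dtype=float) + bias

As the abstract notes, one gaze target and one head position can suffice for this bias estimate, although allowing some head-orientation variability while the subject fixates the target is helpful.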
Author Chen, Zhaokang
Shi, Bertram E.
Author_xml – sequence: 1
  givenname: Zhaokang
  orcidid: 0000-0003-0237-0358
  surname: Chen
  fullname: Chen, Zhaokang
  email: zchenbc@connect.ust.hk
  organization: Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
– sequence: 2
  givenname: Bertram E.
  orcidid: 0000-0001-9167-7495
  surname: Shi
  fullname: Shi, Bertram E.
  email: eebert@ust.hk
  organization: Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
BackLink https://www.ncbi.nlm.nih.gov/pubmed/35130143 (View this record in MEDLINE/PubMed)
CODEN ITPIDJ
CitedBy_id crossref_primary_10_1002_cav_2141
crossref_primary_10_1109_TPAMI_2024_3393571
crossref_primary_10_1109_JIOT_2024_3449409
crossref_primary_10_3389_fpsyg_2024_1309047
crossref_primary_10_1109_TMM_2024_3358948
crossref_primary_10_3390_electronics12071704
crossref_primary_10_3390_s23239604
crossref_primary_10_1007_s10489_024_05778_3
crossref_primary_10_1016_j_displa_2024_102878
crossref_primary_10_1109_TCSVT_2024_3383597
crossref_primary_10_1109_TII_2023_3276322
crossref_primary_10_1109_ACCESS_2023_3317013
crossref_primary_10_1109_TCSVT_2024_3465438
crossref_primary_10_1109_ACCESS_2024_3435370
crossref_primary_10_1109_TCYB_2023_3312392
crossref_primary_10_1109_TMC_2024_3425928
crossref_primary_10_3390_s25061893
Cites_doi 10.1109/CVPR.2015.7299081
10.1109/CVPRW.2017.284
10.1145/2501988.2501994
10.1145/3204493.3204590
10.1109/CVPR.2016.239
10.1007/978-3-030-01249-6_21
10.1145/2980179.2980246
10.1038/s41467-020-18360-5
10.1109/WACV45572.2020.9093419
10.1109/TNNLS.2018.2865525
10.1109/HSI.2017.8005041
10.1109/FG.2018.00019
10.1109/HRI.2016.7451737
10.1007/978-3-030-01264-9_7
10.1109/ICCV.2017.341
10.1145/2857491.2857492
10.1007/978-3-030-01225-0_38
10.1109/TPAMI.2017.2778103
10.1109/CVPR.2018.00053
10.1109/CVPRW.2018.00290
10.1145/3314111.3319845
10.1109/ICCV.2019.00946
10.1109/TBME.2005.863952
10.1109/CVPR.2017.241
10.1109/ICPR.2014.210
10.1145/3058555.3058582
10.1145/3290605.3300646
10.1109/ICCV.2019.00701
10.1145/3204493.3204584
10.1109/TPAMI.2019.2957373
10.1038/s41598-018-22726-7
10.1007/978-3-319-58750-9_51
10.1007/978-3-030-01228-1_24
10.1145/3204493.3204559
10.1145/3204493.3204548
10.1109/ICCVW.2019.00145
10.3389/fnhum.2018.00105
10.1109/CVPR.2019.00793
10.1109/CVPR.2014.235
10.1007/978-3-030-58558-7_22
10.1109/CVPR.2009.5206848
10.1007/978-3-030-01261-8_44
10.1145/2578153.2578190
10.1109/ICCV.2019.00703
10.1109/CVPR.2019.01221
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
DBID 97E
RIA
RIE
AAYXX
CITATION
NPM
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
7X8
DOI 10.1109/TPAMI.2022.3148386
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitle CrossRef
PubMed
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitleList MEDLINE - Academic

PubMed
Technology Research Database
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
Computer Science
EISSN 2160-9292
1939-3539
EndPage 1188
ExternalDocumentID 35130143
10_1109_TPAMI_2022_3148386
9706357
Genre orig-research
Journal Article
GrantInformation_xml – fundername: Hong Kong Research Grants Council
  grantid: 16213617
GroupedDBID ---
-DZ
-~X
.DC
0R~
29I
4.4
53G
5GY
6IK
97E
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABQJQ
ABVLG
ACGFO
ACGFS
ACIWK
ACNCT
AENEX
AGQYO
AHBIQ
AKJIK
AKQYR
ALMA_UNASSIGNED_HOLDINGS
ASUFR
ATWAV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
CS3
DU5
E.L
EBS
EJD
F5P
HZ~
IEDLZ
IFIPE
IPLJI
JAVBF
LAI
M43
MS~
O9-
OCL
P2P
PQQKQ
RIA
RIE
RNS
RXW
TAE
TN5
UHB
~02
AAYXX
CITATION
5VS
9M8
AAYOK
ABFSI
ADRHT
AETEA
AETIX
AGSQL
AI.
AIBXA
ALLEH
FA8
H~9
IBMZZ
ICLAB
IFJZH
NPM
RIG
RNI
RZB
VH1
XJT
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
7X8
IEDL.DBID RIE
ISSN 0162-8828
1939-3539
IngestDate Fri Jul 11 11:32:08 EDT 2025
Sun Jun 29 16:32:40 EDT 2025
Thu Apr 03 07:12:26 EDT 2025
Thu Apr 24 23:04:08 EDT 2025
Tue Jul 01 01:43:03 EDT 2025
Wed Aug 27 02:14:46 EDT 2025
IsPeerReviewed true
IsScholarly true
Issue 1
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
content type line 23
ORCID 0000-0003-0237-0358
0000-0001-9167-7495
PMID 35130143
PQID 2747610700
PQPubID 85458
PageCount 15
ParticipantIDs ieee_primary_9706357
proquest_miscellaneous_2626891273
crossref_primary_10_1109_TPAMI_2022_3148386
crossref_citationtrail_10_1109_TPAMI_2022_3148386
pubmed_primary_35130143
proquest_journals_2747610700
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2023-Jan.-1
2023-1-1
2023-Jan
20230101
PublicationDateYYYYMMDD 2023-01-01
PublicationDate_xml – month: 01
  year: 2023
  text: 2023-Jan.-1
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: New York
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2023
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
liu (ref44) 2018
ref12
ref15
ref14
ref53
ref55
ref11
ref10
king (ref52) 2009; 10
palmero (ref30) 2018
ref17
ref16
ref19
cech (ref57) 2016
ref18
ref50
simonyan (ref54) 2014
ref46
ref45
ref48
ref47
ref43
ref49
ref8
ref7
ref9
ref4
ref3
ref6
ref40
ref35
ref34
ref37
ref36
ref31
ref33
ref32
ioffe (ref56) 2017
ref2
outram (ref5) 2018
ref1
ref38
yu (ref41) 2015
chen (ref26) 2018
ref24
ref23
ref25
ref20
ref22
ref21
yu (ref39) 2018
ref28
ref27
ref29
atchison (ref51) 2000
chen (ref42) 2017
References_xml – ident: ref16
  doi: 10.1109/CVPR.2015.7299081
– year: 2018
  ident: ref44
  article-title: A differential approach for gaze estimation with calibration
  publication-title: Proc Brit Mach Vis Conf
– ident: ref34
  doi: 10.1109/CVPRW.2017.284
– ident: ref22
  doi: 10.1145/2501988.2501994
– ident: ref4
  doi: 10.1145/3204493.3204590
– ident: ref20
  doi: 10.1109/CVPR.2016.239
– ident: ref17
  doi: 10.1007/978-3-030-01249-6_21
– ident: ref6
  doi: 10.1145/2980179.2980246
– year: 2016
  ident: ref57
  article-title: Real-time eye blink detection using facial landmarks
  publication-title: Proc Comput Vis Winter Workshop
– ident: ref14
  doi: 10.1038/s41467-020-18360-5
– ident: ref36
  doi: 10.1109/WACV45572.2020.9093419
– ident: ref29
  doi: 10.1109/TNNLS.2018.2865525
– year: 2018
  ident: ref30
  article-title: Recurrent CNN for 3D gaze estimation using appearance and shape cues
  publication-title: Proc Brit Mach Vis Conf
– ident: ref2
  doi: 10.1109/HSI.2017.8005041
– ident: ref53
  doi: 10.1109/FG.2018.00019
– ident: ref3
  doi: 10.1109/HRI.2016.7451737
– ident: ref27
  doi: 10.1007/978-3-030-01264-9_7
– ident: ref28
  doi: 10.1109/ICCV.2017.341
– ident: ref25
  doi: 10.1145/2857491.2857492
– ident: ref9
  doi: 10.1007/978-3-030-01225-0_38
– volume: 10
  start-page: 1755
  year: 2009
  ident: ref52
  article-title: Dlib-ml: A machine learning toolkit
  publication-title: J Mach Learn Res
– ident: ref15
  doi: 10.1109/TPAMI.2017.2778103
– year: 2017
  ident: ref42
  article-title: Rethinking atrous convolution for semantic image segmentation
– ident: ref24
  doi: 10.1109/CVPR.2018.00053
– ident: ref32
  doi: 10.1109/CVPRW.2018.00290
– year: 2000
  ident: ref51
  publication-title: Optics of the Human Eye
– ident: ref13
  doi: 10.1145/3314111.3319845
– start-page: 1942
  year: 2017
  ident: ref56
  article-title: Batch renormalization: Towards reducing minibatch dependence in batch-normalized models
  publication-title: Proc 31st Int Conf Neural Inf Process Syst
– ident: ref48
  doi: 10.1109/ICCV.2019.00946
– year: 2015
  ident: ref41
  article-title: Multi-scale context aggregation by dilated convolutions
– start-page: 456
  year: 2018
  ident: ref39
  article-title: Deep multitask gaze estimation with a constrained landmark-gaze model
  publication-title: Proc Eur Conf Comput Vis Workshops
– ident: ref35
  doi: 10.1109/TBME.2005.863952
– ident: ref21
  doi: 10.1109/CVPR.2017.241
– ident: ref38
  doi: 10.1109/ICPR.2014.210
– ident: ref1
  doi: 10.1145/3058555.3058582
– ident: ref45
  doi: 10.1145/3290605.3300646
– ident: ref49
  doi: 10.1109/ICCV.2019.00701
– ident: ref12
  doi: 10.1145/3204493.3204584
– ident: ref46
  doi: 10.1109/TPAMI.2019.2957373
– ident: ref8
  doi: 10.1038/s41598-018-22726-7
– ident: ref31
  doi: 10.1007/978-3-319-58750-9_51
– ident: ref10
  doi: 10.1007/978-3-030-01228-1_24
– ident: ref11
  doi: 10.1145/3204493.3204559
– ident: ref37
  doi: 10.1145/3204493.3204548
– ident: ref43
  doi: 10.1109/ICCVW.2019.00145
– start-page: 309
  year: 2018
  ident: ref26
  article-title: Appearance-based gaze estimation using dilated-convolutions
  publication-title: Proc Asian Conf Comput Vis
– year: 2018
  ident: ref5
  article-title: Anyorbit: Orbital navigation in virtual environments with eye-tracking
  publication-title: Proc Eye Tracking Res and Appl Symp
– ident: ref7
  doi: 10.3389/fnhum.2018.00105
– ident: ref33
  doi: 10.1109/CVPR.2019.00793
– ident: ref23
  doi: 10.1109/CVPR.2014.235
– ident: ref50
  doi: 10.1007/978-3-030-58558-7_22
– ident: ref55
  doi: 10.1109/CVPR.2009.5206848
– year: 2014
  ident: ref54
  article-title: Very deep convolutional networks for large-scale image recognition
– ident: ref40
  doi: 10.1007/978-3-030-01261-8_44
– ident: ref18
  doi: 10.1145/2578153.2578190
– ident: ref19
  doi: 10.1109/ICCV.2019.00703
– ident: ref47
  doi: 10.1109/CVPR.2019.01221
SSID ssj0014503
Score 2.5227778
Snippet Appearance-based gaze estimation from RGB images provides relatively unconstrained gaze tracking from commonly available hardware. The accuracy of...
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 1174
SubjectTerms Accuracy
Algorithms
Appearance-based gaze estimation
Bias
Calibration
Cameras
Color imagery
Complexity
Complexity theory
Datasets
Decomposition
deep neural networks
dilated convolutions
Estimation
eye tracking
Faces
Gaze tracking
Head movement
low complexity calibration
Magnetic heads
Model accuracy
NISLGaze dataset
Source code
State-of-the-art reviews
subject-dependent
Title Towards High Performance Low Complexity Calibration in Appearance Based Gaze Estimation
URI https://ieeexplore.ieee.org/document/9706357
https://www.ncbi.nlm.nih.gov/pubmed/35130143
https://www.proquest.com/docview/2747610700
https://www.proquest.com/docview/2626891273
Volume 45
hasFullText 1
inHoldings 1
isFullTextHit
isPrint