Automatic Visual Attention Detection for Mobile Eye Tracking Using Pre-Trained Computer Vision Models and Human Gaze
Published in | Sensors (Basel, Switzerland), Vol. 21, No. 12, p. 4143
Main Authors | Barz, Michael; Sonntag, Daniel
Format | Journal Article |
Language | English |
Published | Basel: MDPI AG, 16.06.2021
Subjects | Annotations; area of interest; Automation; Computer vision; Eye movements; eye tracking; eye tracking data analysis; Human-computer interaction; Information retrieval; Robots; visual attention
Abstract | Processing visual stimuli in a scene is essential for the human brain to make situation-aware decisions. These stimuli, which are prevalent subjects of diagnostic eye tracking studies, are commonly encoded as rectangular areas of interest (AOIs) per frame. Because it is a tedious manual annotation task, the automatic detection and annotation of visual attention to AOIs can accelerate and objectify eye tracking research, in particular for mobile eye tracking with egocentric video feeds. In this work, we implement two methods to automatically detect visual attention to AOIs using pre-trained deep learning models for image classification and object detection. Furthermore, we develop an evaluation framework based on the VISUS dataset and well-known performance metrics from the field of activity recognition. We systematically evaluate our methods within this framework, discuss potentials and limitations, and propose ways to improve the performance of future automatic visual attention detection methods.
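The object-detection method outlined in the abstract can be pictured as mapping each gaze sample onto the detected bounding box it falls into, which yields per-frame attention labels for AOIs. The sketch below is a minimal illustration of that idea, not the authors' published code: the choice of torchvision's pre-trained Faster R-CNN, the fixed confidence threshold, and the simple point-in-box hit test are all assumptions.

```python
# Minimal sketch of gaze-to-AOI mapping with a pre-trained object detector.
# Assumptions (not from the paper): torchvision's COCO-pretrained Faster R-CNN
# as the detector, a fixed score threshold, and a point-in-box hit test as the
# attention criterion.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def attended_object(frame, gaze_xy, score_thresh=0.5):
    """Map one gaze sample to a detected object in one egocentric frame.

    frame:   float tensor of shape (3, H, W) with values in [0, 1]
    gaze_xy: gaze position (x, y) in pixel coordinates of the frame
    Returns (label id, box) of the highest-scoring detection that contains
    the gaze point, or None if the gaze hits no confident detection.
    """
    pred = detector([frame])[0]  # dict with 'boxes', 'labels', 'scores'
    gx, gy = gaze_xy
    # torchvision returns detections sorted by descending score, so the first
    # hit above the threshold is the most confident one containing the gaze.
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score < score_thresh:
            continue  # skip low-confidence detections
        x1, y1, x2, y2 = box.tolist()
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            return int(label), (x1, y1, x2, y2)
    return None  # gaze did not land on any detected object
```

Running this per frame produces a label sequence that could then be compared against manual AOI annotations with frame-wise activity-recognition metrics, in the spirit of the evaluation framework the abstract describes.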
Author | Barz, Michael; Sonntag, Daniel
AuthorAffiliation | 1 German Research Center for Artificial Intelligence (DFKI), Interactive Machine Learning Department, Stuhlsatzenhausweg 3, Saarland Informatics Campus D3_2, 66123 Saarbrücken, Germany (daniel.sonntag@dfki.de); 2 Applied Artificial Intelligence, Oldenburg University, Marie-Curie Str. 1, 26129 Oldenburg, Germany
Copyright | 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
DOI | 10.3390/s21124143 |
Discipline | Engineering |
EISSN | 1424-8220 |
ISSN | 1424-8220 |
Open Access | Yes
Peer Reviewed | Yes
Issue | 12 |
ORCID | Barz, Michael: 0000-0001-6730-2466; Sonntag, Daniel: 0000-0002-8857-8709
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.3390/s21124143 |
PMID | 34208736 |
PublicationDate | 2021-06-16
PublicationPlace | Basel
PublicationTitle | Sensors (Basel, Switzerland) |
PublicationYear | 2021 |
Publisher | MDPI AG
StartPage | 4143 |
URI | https://www.proquest.com/docview/2545186771 https://www.proquest.com/docview/2548413425 https://pubmed.ncbi.nlm.nih.gov/PMC8235043 https://doaj.org/article/32f5245ce4a944448c5d6d6458173e99 |
Volume | 21 |