I-MPN: Inductive Message Passing Network for Efficient Human-in-the-Loop Annotation of Mobile Eye Tracking Data
Published in | arXiv.org |
---|---|
Main Authors | Le, Hoang H; Nguyen, Duy M H; Omair Shahzad Bhatti; Kopacsi, Laszlo; Ngo, Thinh P; Nguyen, Binh T; Barz, Michael; Sonntag, Daniel |
Format | Paper |
Language | English |
Published | Ithaca: Cornell University Library, arXiv.org, 07.07.2024 |
Subjects | Algorithms; Annotations; Detectors; Eye movements; Machine learning; Message passing; Object recognition; Tracking systems |
Online Access | https://www.proquest.com/docview/3066585358 |
Abstract | Comprehending how humans process visual information in dynamic settings is crucial for psychology and designing user-centered interactions. While mobile eye-tracking systems combining egocentric video and gaze signals can offer valuable insights, manual analysis of these recordings is time-intensive. In this work, we present a novel human-centered learning algorithm designed for automated object recognition within mobile eye-tracking settings. Our approach seamlessly integrates an object detector with a spatial relation-aware inductive message-passing network (I-MPN), harnessing node profile information and capturing object correlations. Such mechanisms enable us to learn embedding functions capable of generalizing to new object angle views, facilitating rapid adaptation and efficient reasoning in dynamic contexts as users navigate their environment. Through experiments conducted on three distinct video sequences, our interactive method showcases significant performance improvements over fixed training/testing algorithms, even when trained on considerably smaller annotated samples collected through user feedback. Furthermore, we demonstrate exceptional efficiency in data annotation processes and surpass prior interactive methods that use complete object detectors, combine detectors with convolutional networks, or employ interactive video segmentation. |
---|---|
Copyright | 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
EISSN | 2331-8422 |
Genre | Working Paper/Pre-Print |
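The abstract describes coupling an object detector with a spatial relation-aware inductive message-passing network over the detected objects in each frame. As a rough illustration only, the sketch below shows a single GraphSAGE-style mean-aggregation message-passing step over per-object detector embeddings; the function name, array shapes, and weights here are hypothetical and are not taken from the paper or its code.

```python
import numpy as np

def mpn_layer(node_feats, adj, w_self, w_neigh):
    """One mean-aggregation message-passing step (illustrative sketch).

    node_feats: (N, F) per-node features, e.g. detector embeddings of N objects.
    adj:        (N, N) binary adjacency matrix encoding spatial relations.
    w_self, w_neigh: (F, F_out) learnable weight matrices.
    """
    # Mean of neighbour features; guard against division by zero for isolated nodes.
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    neigh_mean = adj @ node_feats / deg
    # Combine each node's own profile with the aggregated neighbourhood message.
    out = node_feats @ w_self + neigh_mean @ w_neigh
    return np.maximum(out, 0.0)  # ReLU


# Toy usage: 4 detected objects with 8-dim features, fully connected relation graph.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
a = np.ones((4, 4)) - np.eye(4)
h = mpn_layer(x, a, rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
print(h.shape)  # (4, 16)
```

Because the aggregation is defined per neighbourhood rather than per fixed graph, such a layer can be applied to object configurations unseen at training time, which is what "inductive" refers to in this family of models.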