FaceGuard: A Wearable System To Avoid Face Touching

Bibliographic Details
Published in: Frontiers in Robotics and AI, Vol. 8, p. 612392
Main Authors: Michelin, Allan Michael; Korres, Georgios; Ba'ara, Sara; Assadi, Hadi; Alsuradi, Haneen; Sayegh, Rony R.; Argyros, Antonis; Eid, Mohamad
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Media S.A., 08.04.2021

More Information
Summary: Most people touch their faces unconsciously, for instance to scratch an itch or to rest their chin in their hands. To reduce the spread of the novel coronavirus (COVID-19), public health officials recommend against touching one's face, as the virus is transmitted through mucous membranes in the mouth, nose, and eyes. Students, office workers, medical personnel, and people on trains have been found to touch their faces between 9 and 23 times per hour. This paper introduces FaceGuard, a system that uses deep learning to predict hand movements that result in touching the face and provides sensory feedback to stop the user before contact. The system uses an inertial measurement unit (IMU) to obtain features that characterize hand movements involving face touching. Time-series data can be classified efficiently by a 1D convolutional neural network (1D-CNN) with minimal feature engineering, since 1D-CNN filters automatically extract temporal features from the IMU data. Accordingly, a 1D-CNN prediction model is developed and trained on data from 4,800 trials recorded from 40 participants. Training data were collected for hand movements involving face touching during everyday activities such as sitting, standing, and walking. Results show that while the average time needed to touch the face is 1,200 ms, a prediction accuracy of more than 92% is achieved with less than 550 ms of IMU data. For the sensory response, the paper presents a psychophysical experiment comparing response times for three sensory feedback modalities: visual, auditory, and vibrotactile. Results demonstrate that the response time is significantly shorter for vibrotactile feedback (427.3 ms) than for visual (561.70 ms) and auditory (520.97 ms) feedback. Furthermore, the success rate in avoiding face touching is also statistically higher for vibrotactile and auditory feedback than for visual feedback. These results demonstrate the feasibility of predicting a hand movement and providing timely sensory feedback, all within less than a second, to avoid face touching.
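The core idea in the summary — that 1D-CNN filters slide along the time axis of multi-channel IMU data and extract temporal motion features with no hand-crafted feature engineering — can be sketched in a few lines. The sketch below is not the authors' model: the sampling rate, window shape, channel count, and filter bank are illustrative assumptions, and only a single convolution-plus-ReLU stage is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

SAMPLE_RATE_HZ = 100                              # assumed IMU sampling rate
WINDOW_MS = 550                                   # prediction window from the paper
n_samples = WINDOW_MS * SAMPLE_RATE_HZ // 1000    # 55 samples per window
n_channels = 6                                    # assumed: 3-axis accel + 3-axis gyro

# Stand-in for one recorded window of raw IMU data (real input would be sensor readings)
window = rng.standard_normal((n_samples, n_channels))

def conv1d(x, kernels):
    """Valid-mode 1D convolution over the time axis of a multi-channel window.

    x: (T, C) time series; kernels: (K, k, C) filter bank.
    Returns K feature maps of length T - k + 1.
    """
    K, k, C = kernels.shape
    T = x.shape[0]
    out = np.empty((K, T - k + 1))
    for i in range(T - k + 1):
        # each filter responds to a short temporal pattern across all channels
        out[:, i] = np.tensordot(kernels, x[i:i + k], axes=([1, 2], [0, 1]))
    return out

kernels = rng.standard_normal((8, 5, n_channels))  # 8 filters spanning 5 time steps
features = np.maximum(conv1d(window, kernels), 0)  # ReLU activation

print(features.shape)  # 8 temporal feature maps, one value per window position
```

In a full model, several such convolution stages would feed a small classifier head that outputs a face-touch / no-touch prediction; the point here is only that the temporal features come out of learned filters rather than manual engineering.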
Edited by: Ana Luisa Trejos, Western University, Canada
Reviewed by: Zhihan Lv, Qingdao University, China; Tommaso Lisini Baldi, University of Siena, Italy; Domen Novak, University of Wyoming, United States
This article was submitted to Biomedical Robotics, a section of the journal Frontiers in Robotics and AI.
ISSN: 2296-9144
DOI: 10.3389/frobt.2021.612392