Taking a Deeper Look at the Brain: Predicting Visual Perceptual and Working Memory Load From High-Density fNIRS Data
Published in: IEEE Journal of Biomedical and Health Informatics, Vol. 26, No. 5, pp. 2308-2319
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.05.2022
ISSN: 2168-2194 (print); 2168-2208 (electronic)
DOI: 10.1109/JBHI.2021.3133871
Summary: Predicting workload using physiological sensors has taken on a diffuse set of methods in recent years. However, the majority of these methods train models on small datasets, with small numbers of channel locations on the brain, limiting a model's ability to transfer across participants, tasks, or experimental sessions. In this paper, we introduce a new method of modeling a large, cross-participant and cross-session set of high-density functional near-infrared spectroscopy (fNIRS) data by using an approach grounded in cognitive load theory and employing a Bi-Directional Gated Recurrent Unit (BiGRU) incorporating an attention mechanism and self-supervised label augmentation (SLA). We show that our proposed CNN-BiGRU-SLA model can learn and classify different levels of working memory load (WML) and visual processing load (VPL) across participants. Importantly, we leverage a multi-label classification scheme, where our models are trained to predict simultaneously occurring levels of WML and VPL. We evaluate our model using leave-one-participant-out cross validation (LOOCV) as well as 10-fold cross validation. Using LOOCV, for binary classification (off/on), we reached an F1-score of 0.9179 for WML and 0.8907 for VPL across 22 participants (each participant completed 2 sessions). For multi-level (off, low, high) classification, we reached an F1-score of 0.7972 for WML and 0.7968 for VPL. Using 10-fold cross validation, for multi-level classification, we reached an F1-score of 0.7742 for WML and 0.7741 for VPL.
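To make the described architecture concrete, below is a minimal PyTorch sketch of a CNN-BiGRU model with attention pooling and two output heads that predict WML and VPL levels simultaneously, as in the summary's multi-label scheme. All layer sizes, kernel widths, and the module name are illustrative assumptions, not the paper's actual configuration, and the SLA component is omitted.

```python
# Sketch of a CNN-BiGRU with attention pooling and two classification
# heads (WML and VPL), assuming input of shape (batch, time, channels).
# Hyperparameters here are placeholders, not the paper's settings.
import torch
import torch.nn as nn

class CNNBiGRUAttention(nn.Module):
    def __init__(self, n_channels: int, n_classes: int = 3, hidden: int = 64):
        super().__init__()
        # 1-D convolution over time extracts local features per step
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Bi-directional GRU models temporal dynamics in both directions
        self.bigru = nn.GRU(hidden, hidden, batch_first=True,
                            bidirectional=True)
        # Additive attention scores each time step for weighted pooling
        self.attn = nn.Linear(2 * hidden, 1)
        # Separate heads predict WML and VPL at the same time;
        # n_classes=3 covers off/low/high, n_classes=2 covers off/on
        self.wml_head = nn.Linear(2 * hidden, n_classes)
        self.vpl_head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                                # (batch, time, ch)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, hidden)
        h, _ = self.bigru(h)                             # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)           # (batch, time, 1)
        pooled = (w * h).sum(dim=1)                      # attention-weighted sum
        return self.wml_head(pooled), self.vpl_head(pooled)
```

Training such a model would typically sum a cross-entropy loss over the two heads, which is one common way to realize the multi-label setup the summary describes.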
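Likewise, a hedged sketch of the leave-one-participant-out evaluation with per-task macro F1. The `data` dict keyed by participant ID and the `train_fn`/`predict_fn` helpers are hypothetical names introduced for illustration; the paper's actual training procedure is not specified here.

```python
# Leave-one-participant-out cross validation: hold out each participant's
# sessions in turn, train on the rest, and average macro F1 per task.
from sklearn.metrics import f1_score

def loocv(data, train_fn, predict_fn):
    """data: {participant_id: (X, y_wml, y_vpl)} -- hypothetical layout."""
    wml_f1, vpl_f1 = [], []
    for held_out in data:
        train = {p: v for p, v in data.items() if p != held_out}
        model = train_fn(train)                    # fit on remaining participants
        X, y_wml, y_vpl = data[held_out]
        pred_wml, pred_vpl = predict_fn(model, X)  # both labels predicted at once
        wml_f1.append(f1_score(y_wml, pred_wml, average="macro"))
        vpl_f1.append(f1_score(y_vpl, pred_vpl, average="macro"))
    return sum(wml_f1) / len(wml_f1), sum(vpl_f1) / len(vpl_f1)
```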