Camera-Based HRV Prediction for Remote Learning Environments
Main Authors | , , , , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 06.05.2023 |
Summary: | In recent years, with the widespread use of internet video, remote
photoplethysmography (rPPG) has gained increasing attention in the field of
affective computing. Restoring blood volume pulse (BVP) signals from facial
videos is a challenging task that involves a pipeline of preprocessing, image
algorithms, and postprocessing to recover the waveform. Affective computing
relies not only on heart rate but also on heart rate variability (HRV) metrics,
which are even more informative. The difficulty in obtaining HRV indices
through rPPG is that the algorithm must predict the BVP peak positions
precisely. In this paper, we collected the Remote Learning Affect and
Physiology (RLAP) dataset, which includes over 32 hours of highly synchronized
video and labels from 58 subjects. It is a public dataset whose BVP labels have
been carefully designed to better suit the training of HRV models. Using the
RLAP dataset, we trained a new model, Seq-rPPG, based on one-dimensional
convolution. Experimental results show that this structure is better suited to
HRV tasks: it outperformed all baselines in HRV performance and also showed
significant advantages in computational efficiency. |
---|---|
DOI: | 10.48550/arxiv.2305.04161 |
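
The abstract notes that HRV indices depend on precisely predicted BVP peak positions. The sketch below is not taken from the paper; it is a minimal illustration, assuming a predicted BVP waveform and its sampling rate are available, of how time-domain HRV indices (SDNN, RMSSD) follow from peak positions via scipy.signal.find_peaks. The function name, the 0.33 s minimum peak distance, and the synthetic test signal are all assumptions for illustration only.

```python
"""Illustrative sketch: time-domain HRV indices from a predicted BVP signal."""
import numpy as np
from scipy.signal import find_peaks


def hrv_indices_from_bvp(bvp: np.ndarray, fs: float) -> dict:
    """Derive simple time-domain HRV indices from a BVP waveform.

    bvp : predicted blood volume pulse signal (1-D array)
    fs  : sampling rate of the signal in Hz
    """
    # Peak detection: each detected peak is treated as one heartbeat.
    # A minimum distance of ~0.33 s caps the detectable rate near 180 bpm.
    peaks, _ = find_peaks(bvp, distance=int(0.33 * fs))

    # Inter-beat intervals (IBIs) in milliseconds.
    ibis = np.diff(peaks) / fs * 1000.0

    # Small errors in peak position propagate directly into these statistics,
    # which is why precise peak prediction matters more for HRV than for
    # mean heart rate.
    sdnn = float(np.std(ibis, ddof=1))                   # SDNN (ms)
    rmssd = float(np.sqrt(np.mean(np.diff(ibis) ** 2)))  # RMSSD (ms)
    mean_hr = 60000.0 / float(np.mean(ibis))             # mean heart rate (bpm)

    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd, "mean_HR_bpm": mean_hr}


if __name__ == "__main__":
    # Synthetic 60 s BVP-like signal at 30 Hz (a typical video frame rate),
    # used only to exercise the function.
    fs = 30.0
    t = np.arange(0, 60, 1 / fs)
    bvp = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
    print(hrv_indices_from_bvp(bvp, fs))
```

In practice, a model such as the paper's Seq-rPPG would supply the bvp array from facial video; the index computation above stays the same regardless of how the waveform was recovered.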