DepNet: An automated industrial intelligent system using deep learning for video‐based depression analysis
Published in | International journal of intelligent systems Vol. 37; no. 7; pp. 3815 - 3835 |
Main Authors | , , , , , |
Format | Journal Article |
Language | English |
Published | New York: John Wiley & Sons, Inc, 01.07.2022 |
Summary: | As a common mental disorder, depression has attracted many researchers in the affective computing field to estimate depression severity. However, existing Deep Learning (DL) approaches mainly focus on single facial images and ignore the sequence information needed to predict the depression scale. This paper proposes an integrated framework, termed DepNet, for the automatic diagnosis of depression from facial image sequences extracted from videos. Specifically, several pretrained models are adopted to represent low-level features, and a Feature Aggregation Module is proposed to capture high-level characteristic information for depression analysis. More importantly, the discriminative facial characteristics of depression can be mined to assist clinicians in diagnosing the severity of depressed subjects. Multiscale experiments on the AVEC2013 and AVEC2014 databases demonstrate the strong performance of the intelligent approach: the root mean-square error between the predicted values and the Beck Depression Inventory-II scores is 9.17 and 9.01 on the two databases, respectively, lower than that of state-of-the-art video-based depression recognition methods. |
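To make the pipeline described in the summary concrete, below is a minimal PyTorch sketch of a DepNet-style model. It is not the authors' implementation: it assumes a pretrained ResNet-50 as the low-level feature extractor, a simple attention-pooling layer standing in for the Feature Aggregation Module, and a linear regression head that predicts a BDI-II score. All class names, dimensions, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models


class FeatureAggregationModule(nn.Module):
    """Pools per-frame features into one clip-level descriptor via attention (an assumed design)."""

    def __init__(self, feat_dim: int = 2048):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1))

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, num_frames, feat_dim)
        weights = torch.softmax(self.attn(frame_feats), dim=1)  # (B, T, 1)
        return (weights * frame_feats).sum(dim=1)               # (B, feat_dim)


class DepNetSketch(nn.Module):
    """Pretrained backbone -> per-frame features -> aggregation -> depression-scale regression."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])  # drop the final FC layer
        self.aggregate = FeatureAggregationModule(feat_dim=2048)
        self.regressor = nn.Linear(2048, 1)  # scalar BDI-II score prediction

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, num_frames, 3, 224, 224) facial image sequences from video
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).flatten(1)  # (B*T, 2048)
        feats = feats.view(b, t, -1)                           # (B, T, 2048)
        return self.regressor(self.aggregate(feats)).squeeze(-1)


def rmse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Root mean-square error, the metric the abstract reports against BDI-II scores."""
    return torch.sqrt(torch.mean((pred - target) ** 2))


if __name__ == "__main__":
    model = DepNetSketch().eval()
    dummy_clips = torch.randn(2, 8, 3, 224, 224)  # 2 clips of 8 facial frames each
    with torch.no_grad():
        scores = model(dummy_clips)
    print(scores.shape)  # torch.Size([2])
```

Attention pooling is used here only as one plausible way to aggregate frame-level features into a clip-level representation; the paper's actual Feature Aggregation Module may differ.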
Bibliography: | Prayag Tiwari, Rui Su, and Hari Mohan Pandey contributed equally to this study. |
ISSN: | 0884-8173 (print), 1098-111X (online) |
DOI: | 10.1002/int.22704 |