Intelligent system for depression scale estimation with facial expressions and case study in industrial intelligence

Bibliographic Details
Published in: International journal of intelligent systems, Vol. 37, no. 12, pp. 10140-10156
Main Authors: He, Lang; Guo, Chenguang; Tiwari, Prayag; Pandey, Hari Mohan; Dang, Wei
Format: Journal Article
Language: English
Published: New York: Hindawi Limited, 01.12.2022
Summary: As a mental disorder, depression affects people's lives and work. Researchers have proposed various industrial intelligent systems in the pattern recognition field for audiovisual depression detection. This paper presents an end-to-end trainable intelligent system that generates high-level representations over an entire video clip. Specifically, a three-dimensional (3D) convolutional neural network equipped with a spatiotemporal feature aggregation module (STFAM) is trained from scratch on the audio/visual emotion challenge (AVEC) 2013 and AVEC 2014 data, and can model discriminative patterns closely related to depression. In the STFAM, channel and spatial attention mechanisms and an aggregation method, namely 3D DEP-NetVLAD, are integrated to learn a compact characteristic from the feature maps. Extensive experiments on the two databases (i.e., AVEC2013 and AVEC2014) illustrate that the proposed intelligent system can efficiently model the underlying depression patterns and obtains better performance than most video-based depression recognition approaches. Case studies are presented to describe the applicability of the proposed intelligent system for industrial intelligence.
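The record contains no code, but the NetVLAD-style aggregation the abstract names (3D DEP-NetVLAD) follows a well-known recipe: softly assign local descriptors to learned cluster centers, sum the assignment-weighted residuals per cluster, then normalize. A minimal NumPy sketch of that aggregation step is given below; the function name, the `alpha` sharpness parameter, and the array shapes are illustrative assumptions, not the paper's actual implementation (where the assignment weights and centers are trained end to end inside the network).

```python
import numpy as np

def netvlad_aggregate(features, centers, alpha=100.0):
    """Aggregate local descriptors into a compact VLAD-style vector.

    features: (N, D) array of local spatiotemporal descriptors
    centers:  (K, D) array of cluster centers (learned in the real model)
    Returns a flattened (K*D,) L2-normalized representation.
    """
    # Soft assignment: softmax over negative squared distances to centers.
    # alpha controls sharpness; in the trainable version this is a conv layer.
    dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K)
    logits = -alpha * dists
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    assign = np.exp(logits)
    assign /= assign.sum(axis=1, keepdims=True)        # rows sum to 1

    # Assignment-weighted residuals to each center, summed over descriptors.
    residuals = features[:, None, :] - centers[None, :, :]   # (N, K, D)
    vlad = (assign[:, :, None] * residuals).sum(axis=0)      # (K, D)

    # Intra-normalize per cluster, then flatten and L2-normalize globally.
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12
    out = vlad.reshape(-1)
    return out / (np.linalg.norm(out) + 1e-12)
```

For video, the "local descriptors" would be the spatiotemporal positions of a 3D CNN feature map flattened to an (N, D) matrix, so a variable-length clip collapses into one fixed-size vector.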
Bibliography:Lang He, Chenguang Guo, and Prayag Tiwari contributed equally to this study.
ISSN:0884-8173
1098-111X
DOI:10.1002/int.22426