MFCC-based Recurrent Neural Network for automatic clinical depression recognition and assessment from speech
Published in: Biomedical Signal Processing and Control, Vol. 71, p. 103107
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.01.2022
Summary:

•A deep Recurrent Neural Network-based framework for depression recognition from speech.
•A robust approach that outperforms state-of-the-art approaches on the DAIC-WOZ dataset.
•A fast, non-invasive, and non-intrusive approach, convenient for real-world applications.
•Expanded training labels and transferred features to overcome data scarcity.
•Evaluation of the proposed approach in multi-modal and multi-feature experiments.

Clinical depression, or Major Depressive Disorder (MDD), is a common and serious medical illness. This paper presents a deep Recurrent Neural Network-based framework that detects depression and predicts its severity level from speech. Low-level and high-level audio features are extracted from audio recordings to predict the 24 Patient Health Questionnaire scores and the binary depression diagnosis. To overcome the small size of Speech Depression Recognition (SDR) datasets, expanded training labels and transferred features are used. The proposed approach outperforms state-of-the-art approaches on the DAIC-WOZ database, with an overall accuracy of 76.27% and a root mean square error of 0.4 in assessing depression, and a root mean square error of 0.168 in predicting depression severity levels. The framework's speed, non-invasiveness, and non-intrusiveness make it convenient for real-time applications. Its performance is evaluated in multi-modal and multi-feature experiments. MFCC-based high-level features hold information relevant to depression; adding visual action units and other acoustic features further boosts the classification results by 20% and 10%, reaching accuracies of 95.6% and 86%, respectively. However, use of the visual-facial modality must be studied carefully, as it raises patient privacy concerns, while adding more acoustic features increases computation time.
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2021.103107
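As a rough illustration of the MFCC features the abstract builds on, the sketch below computes mel-frequency cepstral coefficients from a raw waveform using only numpy (framing and windowing, power spectrum, mel filterbank, DCT-II). The parameter values (16 kHz sample rate, 512-sample frames, 26 mel bands, 13 coefficients) are common defaults assumed for illustration, not the configuration reported in the paper.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    # Frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank between 0 Hz and Nyquist
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log mel energies, then DCT-II to decorrelate; keep first n_mfcc coefficients
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (n[:, None] + 0.5) * np.arange(n_mfcc)[None, :])
    return logmel @ dct  # shape: (n_frames, n_mfcc)

# 1 s of noise as a stand-in for a speech recording
x = np.random.randn(16000)
feats = mfcc(x)
print(feats.shape)  # → (61, 13)
```

In a pipeline like the one described, frame-level matrices of this kind would form the low-level input sequence that a recurrent network consumes, one feature vector per time step.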