Prediction of Depression Severity Based on the Prosodic and Semantic Features With Bidirectional LSTM and Time Distributed CNN
Published in: IEEE Transactions on Affective Computing, Vol. 14, No. 3, pp. 2251-2265
Main Authors:
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2023
Summary: Depression is increasingly impacting individuals both physically and psychologically worldwide. It has become a major global public health problem and attracts attention from various research fields. Traditionally, the diagnosis of depression is formulated through semi-structured interviews and supplementary questionnaires, which makes the diagnosis rely heavily on physicians' experience and leaves it subject to bias. Moreover, since the pathogenic mechanism of depression is still under investigation, it is difficult for physicians to diagnose and treat, especially in the early clinical stage. As smart devices and artificial intelligence advance rapidly, understanding how depression is associated with daily behaviors can benefit early-stage depression diagnosis, reducing labor costs and the likelihood of clinical mistakes as well as physician bias. Furthermore, mental health monitoring and cloud-based remote diagnosis can be implemented through an automated depression diagnosis system. In this article, we propose an attention-based multimodal speech and text representation for depression prediction. Our model is trained to estimate the depression severity of participants using the Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) dataset. For the audio modality, we use the collaborative voice analysis repository (COVAREP) features provided by the dataset and employ a Bidirectional Long Short-Term Memory network (Bi-LSTM) followed by a Time-distributed Convolutional Neural Network (T-CNN). For the text modality, we use Global Vectors for Word Representation (GloVe) to perform word embeddings, and the embeddings are fed into the Bi-LSTM network. Results show that both the audio and text models perform well on the depression severity estimation task, with a best sequence-level F1 score of 0.9870 and patient-level F1 score of 0.9074 for the audio model over five classes (healthy, mild, moderate, moderately severe, and severe), as well as a sequence-level F1 score of 0.9709 and patient-level F1 score of 0.9245 for the text model over the same five classes. Results are similar for the multimodal fused model, with the highest F1 score of 0.9580 on the patient-level depression detection task over five classes. Experiments show statistically significant improvements over previous works.
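As a concrete illustration of the pipeline the summary describes (COVAREP frames into a Bi-LSTM followed by a convolutional stage for audio, GloVe embeddings into a Bi-LSTM for text, with the two branches combined for five-class severity prediction), the following is a minimal Keras sketch. All input shapes, layer sizes, the Conv1D stand-in for the time-distributed CNN, and the concatenation-based fusion are assumptions made for illustration, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN = 300       # assumed number of audio frames per sequence
N_COVAREP = 74      # assumed per-frame COVAREP descriptor count
MAX_WORDS = 200     # assumed transcript length in tokens
VOCAB_SIZE = 20000  # assumed vocabulary size
EMBED_DIM = 300     # GloVe 300-d embeddings (assumed variant)
N_CLASSES = 5       # healthy, mild, moderate, moderately severe, severe

# Audio branch: Bi-LSTM over COVAREP frames, then a small convolutional head
# (a simplified stand-in for the paper's time-distributed CNN).
audio_in = layers.Input(shape=(SEQ_LEN, N_COVAREP), name="covarep")
a = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(audio_in)
a = layers.Conv1D(64, kernel_size=3, activation="relu")(a)  # convolution over the output sequence
a = layers.GlobalMaxPooling1D()(a)

# Text branch: GloVe-style embedding fed into a Bi-LSTM.
text_in = layers.Input(shape=(MAX_WORDS,), name="tokens")
e = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(text_in)  # weights would be initialised from GloVe
t = layers.Bidirectional(layers.LSTM(128))(e)

# Simple fusion by concatenation; the paper's fusion strategy may differ.
fused = layers.Concatenate()([a, t])
out = layers.Dense(N_CLASSES, activation="softmax")(fused)

model = Model(inputs=[audio_in, text_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In such a setup, each branch can also be trained and evaluated on its own (as the summary reports separate audio and text results) before the fused model is built on top of the two representations.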
ISSN: 1949-3045
DOI: 10.1109/TAFFC.2022.3154332