Supervised Learning and Large Language Model Benchmarks on Mental Health Datasets: Cognitive Distortions and Suicidal Risks in Chinese Social Media

Bibliographic Details
Published in: Bioengineering (Basel), Vol. 12, No. 8, p. 882
Main Authors: Qi, Hongzhi; Fu, Guanghui; Li, Jianqiang; Song, Changwei; Zhai, Wei; Luo, Dan; Liu, Shuo; Yu, Yijing; Yang, Bingxiang; Zhao, Qing
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 19.08.2025
ISSN: 2306-5354
DOI: 10.3390/bioengineering12080882

Summary: On social media, users often express their personal feelings, which may exhibit cognitive distortions or even suicidal tendencies on certain topics. Early recognition of these signs is critical for effective psychological intervention. In this paper, we introduce two novel datasets from Chinese social media: SOS-HL-1K, a suicidal risk classification dataset containing 1249 posts, and SocialCD-3K, a multi-label classification dataset for cognitive distortion detection containing 3407 posts. We conduct a comprehensive evaluation of two supervised learning methods and eight large language models (LLMs) on the proposed datasets. From the prompt engineering perspective, we experiment with two types of prompt strategies, comprising four zero-shot and five few-shot strategies. We also evaluate the performance of the LLMs after fine-tuning on the proposed tasks. Experimental results show a significant performance gap between prompted LLMs and supervised learning. Our best supervised model achieves strong results, with an F1-score of 82.76% for the high-risk class in the suicide task and a micro-averaged F1-score of 76.10% for the cognitive distortion task. Without fine-tuning, the best-performing LLM lags by 6.95 percentage points in the suicide task and a more pronounced 31.53 points in the cognitive distortion task. Fine-tuning substantially narrows this performance gap to 4.31 and 3.14 percentage points for the respective tasks. While this research highlights the potential of LLMs in psychological contexts, it also shows that supervised learning remains necessary for more challenging tasks.