A multi-modal open dataset for mental-disorder analysis


Bibliographic Details
Published in: Scientific Data, Vol. 9, No. 1, p. 178
Main Authors: Cai, Hanshu; Yuan, Zhenqin; Gao, Yiwen; Sun, Shuting; Li, Na; Tian, Fuze; Xiao, Han; Li, Jianxiu; Yang, Zhengwu; Li, Xiaowei; Zhao, Qinglin; Liu, Zhenyu; Yao, Zhijun; Yang, Minqiang; Peng, Hong; Zhu, Jing; Zhang, Xiaowei; Gao, Guoping; Zheng, Fang; Li, Rui; Guo, Zhihua; Ma, Rong; Yang, Jing; Zhang, Lan; Hu, Xiping; Li, Yumin; Hu, Bin
Format: Journal Article
Language: English
Published: England: Nature Publishing Group UK (Nature Portfolio), 19.04.2022

Summary: According to the WHO, the number of people with mental disorders, especially depression, has grown rapidly, and depression has become a leading contributor to the global burden of disease. With the rise of tools such as artificial intelligence, using physiological data to explore candidate physiological indicators of mental disorders and to build new applications for their diagnosis has become an active research topic. We present a multi-modal open dataset for mental-disorder analysis. The dataset includes EEG and spoken-language recordings from clinically depressed patients and matched normal controls, all carefully diagnosed and selected by professional psychiatrists in hospitals. The EEG data were collected with both a traditional 128-electrode elastic cap and a wearable 3-electrode EEG collector intended for pervasive-computing applications. The 128-electrode EEG signals of 53 participants were recorded both in the resting state and during a dot-probe task; the 3-electrode EEG signals of 55 participants were recorded in the resting state; and the audio of 52 participants was recorded during interviews, reading, and picture description.
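As a minimal sketch of how the modalities described in the summary might be represented programmatically, the following Python snippet encodes only the participant counts, channel counts, and recording paradigms stated above. The class and field names are illustrative assumptions for this sketch, not the authors' actual file layout or API.

from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Paradigm(Enum):
    """Recording paradigms named in the record's summary."""
    RESTING = "resting state"
    DOT_PROBE = "dot-probe task"
    INTERVIEW = "interview"
    READING = "reading"
    PICTURE_DESCRIPTION = "picture description"

@dataclass
class Modality:
    """One modality of the dataset and its recording conditions."""
    name: str
    n_participants: int
    n_channels: Optional[int]  # None for the audio recordings
    paradigms: List[Paradigm]

# Dataset composition as stated in the summary above.
MODALITIES = [
    Modality("EEG, 128-electrode elastic cap", 53, 128,
             [Paradigm.RESTING, Paradigm.DOT_PROBE]),
    Modality("EEG, wearable 3-electrode collector", 55, 3,
             [Paradigm.RESTING]),
    Modality("Audio, spoken language", 52, None,
             [Paradigm.INTERVIEW, Paradigm.READING,
              Paradigm.PICTURE_DESCRIPTION]),
]

if __name__ == "__main__":
    for m in MODALITIES:
        tasks = ", ".join(p.value for p in m.paradigms)
        print(f"{m.name}: {m.n_participants} participants ({tasks})")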
ISSN: 2052-4463
DOI: 10.1038/s41597-022-01211-x