Multi-Head Self-Attention Model for Classification of Temporal Lobe Epilepsy Subtypes

Bibliographic Details
Published in: Frontiers in Physiology, Vol. 11, p. 604764
Main Authors: Gu, Peipei; Wu, Ting; Zou, Mingyang; Pan, Yijie; Guo, Jiayang; Xiahou, Jianbing; Peng, Xueping; Li, Hailong; Ma, Junxia; Zhang, Ling
Format: Journal Article
Language: English
Published: Frontiers Media S.A., Switzerland, 27.11.2020
Summary: As a long-standing chronic disease, Temporal Lobe Epilepsy (TLE), which results from abnormal neuronal discharges and is characterized by recurrent episodic central nervous system dysfunction, affects more than 70% of drug-resistant epilepsy patients worldwide. Because its etiology and clinical symptoms are complicated, differential diagnosis of TLE relies mainly on experienced clinicians, and specific diagnostic biomarkers remain unclear. Although great effort has been made regarding the genetics, pathology, and neuroimaging of TLE, an accurate and effective diagnosis of TLE, especially of its subtypes, remains an open problem. Exploring the brain network of TLE is of great importance, since it can provide a basis for diagnosis and treatment. To this end, we propose a multi-head self-attention model (MSAM). By integrating the self-attention mechanism with a multilayer perceptron, the MSAM offers a promising tool for enhancing the classification of TLE subtypes. In comparison with other approaches, including a convolutional neural network (CNN), a support vector machine (SVM), and a random forest (RF), experimental results on our collected MEG dataset show that the MSAM achieves 83.6% accuracy, 90.9% recall, 90.7% precision, and an 83.4% F1-score, outperforming its counterparts. Furthermore, the effect of varying the number of attention heads is assessed, which helps select the optimal head count. The self-attention mechanism learns weights over different signal locations, which effectively improves classification accuracy. In addition, the robustness of the MSAM is extensively assessed with various ablation tests, which demonstrate the effectiveness and generalizability of the proposed approach.
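For orientation, the following is a minimal sketch of a multi-head self-attention classifier of the kind the abstract describes: self-attention over MEG signal locations followed by a multilayer perceptron head. It is not the authors' implementation; the class name MSAMSketch, all layer sizes, the channel and time-step counts, and the mean-pooling step are illustrative assumptions, since the record does not specify them.

```python
# A minimal sketch (not the authors' released code) of a multi-head
# self-attention classifier for MEG segments, assuming inputs of shape
# (batch, n_timesteps, n_channels). All dimensions are assumptions.
import torch
import torch.nn as nn

class MSAMSketch(nn.Module):
    def __init__(self, n_channels=64, d_model=128, n_heads=4, n_classes=2):
        super().__init__()
        # Project each per-time-step channel vector into the model dimension.
        self.embed = nn.Linear(n_channels, d_model)
        # Multi-head self-attention learns weights over signal locations
        # (time steps), as described in the abstract.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Multilayer perceptron head for TLE-subtype classification.
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_model // 2),
            nn.ReLU(),
            nn.Linear(d_model // 2, n_classes),
        )

    def forward(self, x):  # x: (batch, n_timesteps, n_channels)
        h = self.embed(x)          # (batch, T, d_model)
        h, _ = self.attn(h, h, h)  # self-attention across time steps
        h = h.mean(dim=1)          # pool attended features over time
        return self.mlp(h)         # class logits

# Usage example with random data standing in for MEG recordings.
model = MSAMSketch()
dummy = torch.randn(8, 200, 64)  # 8 segments, 200 time steps, 64 channels
logits = model(dummy)            # shape: (8, 2)
```

Pooling the attended features before the MLP keeps the classifier independent of segment length; varying n_heads corresponds to the head-count assessment the abstract reports.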
Edited by: Xin Gao, King Abdullah University of Science and Technology, Saudi Arabia
Reviewed by: Wei Chen, Chinese Academy of Agricultural Sciences, China; Chen Yang, Tsinghua University, China; Chongyang Shi, Beijing Institute of Technology, China
These authors have contributed equally to this work
This article was submitted to Computational Physiology and Medicine, a section of the journal Frontiers in Physiology
ISSN: 1664-042X
DOI: 10.3389/fphys.2020.604764