CM-FusionNet: A cross-modal fusion fatigue detection method based on electroencephalogram and electrooculogram

Bibliographic Details
Published in: Computers & Electrical Engineering, Vol. 123, p. 110204
Main Authors: Huang, Fuzhong; Yang, Chunfeng; Weng, Wei; Chen, Zelong; Zhang, Zhenchang
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.04.2025

Summary: Mental fatigue detection plays an important role in preventing fatigue-related diseases and reducing traffic accidents caused by mental exhaustion. Existing studies have reported promising results using physiological signals, but most focus on a single signal such as electroencephalography (EEG). To address this gap, we propose an innovative cross-modal fusion method (CM-FusionNet) and conduct a multi-modal study using EEG and electrooculogram (EOG) for mental fatigue detection. Specifically, a variance channel attention (VCA) module is introduced to adaptively learn the optimal weight for each channel. A Transformer fusion module then extracts and integrates the global features of EEG and EOG, and the fused features are used to classify mental fatigue. With this method, we conduct independent and cross-subject experiments on the public SEED-VIG dataset. The multi-modal experiment achieves an average accuracy of 84.62% and an F1-score of 85.25%, an improvement of 1.48% in accuracy and 2.46% in F1-score over the EOG-only experiment, and of 2.88% and 3.92%, respectively, over the EEG-only experiment. This demonstrates the benefit of incorporating multiple modalities in fatigue detection and highlights the accuracy gains achieved by our CM-FusionNet approach. It also indicates that this method has potential for further exploration in the field of biomedical signal processing.

Highlights:
• A novel cross-modal fusion multimodal classification method is proposed.
• It combines local and global features for fatigue detection.
• Variance Channel Attention enhances the model's encoding capability.
• A cross-modal Transformer explores the potential complementarity and correlations between modalities.
• The proposed multimodal method outperforms unimodal methods.
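The abstract describes a variance channel attention (VCA) module that adaptively weights each signal channel. The paper's exact formulation is not given here, but the core idea — scoring channels by how informative they are and rescaling them accordingly — can be sketched as follows. This is a minimal illustrative sketch, assuming per-channel temporal variance as the channel score and a softmax to turn scores into weights; the function name and `temperature` parameter are hypothetical, not from the paper.

```python
import numpy as np

def variance_channel_attention(x, temperature=1.0):
    """Hypothetical sketch of a variance-based channel attention step.

    x: array of shape (channels, timesteps), e.g. multi-channel EEG.
    Channels whose signal varies more over time receive larger weights;
    weights come from a softmax over the per-channel variance.
    Returns the reweighted signal and the channel weights.
    """
    var = x.var(axis=1)                        # per-channel variance score
    logits = var / temperature
    logits -= logits.max()                     # shift for numerical stability
    w = np.exp(logits) / np.exp(logits).sum()  # softmax -> channel weights
    return x * w[:, None], w                   # rescale each channel

# Example: 4 channels of 256 samples; channel 0 has the largest variance,
# so it should receive the largest attention weight.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 256)) * np.array([3.0, 1.0, 1.0, 0.5])[:, None]
y, w = variance_channel_attention(x)
print(int(np.argmax(w)))  # index of the most strongly weighted channel
```

In the actual model the learned weights would come from trainable parameters rather than a fixed softmax over raw variance; this sketch only shows the channel-reweighting mechanism that such a module implements.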
ISSN: 0045-7906
DOI: 10.1016/j.compeleceng.2025.110204