MSCLF: A Multi-Subject Contrastive Learning Framework for Consistent EEG Representation in Stereoscopic Visual Discomfort Detection

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, Vol. 74, pp. 1-15
Main Authors: Lu, Na; Zhao, Xiaojie; Yao, Li
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
Summary: Stereoscopic display terminals have garnered significant attention across various fields due to their unique immersive experience. However, the challenge of visual discomfort during viewing continues to hinder their broader adoption. The effective detection of stereoscopic visual discomfort is crucial for improving stereoscopic display devices and enhancing user experience. Electroencephalography (EEG) plays a pivotal role in the objective detection of stereoscopic visual discomfort by measuring the brain's response to various stimuli. Previous EEG-based studies have primarily focused on evaluating intrasubject stereoscopic visual discomfort, largely overlooking the impact of multi-subject variability and individual differences on EEG feature extraction. In addition, differences in individuals' perception of stereoscopic visual discomfort can lead to unreliable labels or unlabeled data across subjects, which negatively impacts detection performance. To address these challenges, we propose a multi-subject contrastive learning framework (MSCLF) for consistent EEG representation in stereoscopic visual discomfort detection. The framework integrates two core components. First, the subject similarity network (SSN) extracts subject-specific discriminative representations and assigns subject-domain distribution weights to unlabeled data originating from different subjects. Second, the multidomain representation network (MDRN) extracts spatiotemporal features from various subjects and employs a joint-constrained contrastive loss (JCCL) to constrain these features across subjects, mitigating the impact of individual differences. Extensive experimental results demonstrate the effectiveness of our proposed MSCLF in mitigating the effects of individual differences, ultimately enhancing the accuracy of stereoscopic visual discomfort detection.
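
For illustration only, the following is a minimal sketch of a cross-subject contrastive objective in the spirit of the JCCL described in the abstract: same-label EEG trials from different subjects are treated as positive pairs to encourage subject-invariant embeddings. The function name, tensor shapes, and temperature parameter are assumptions for this sketch, not the paper's actual formulation.

    # Illustrative cross-subject contrastive loss (an assumption, not the paper's exact JCCL).
    import torch
    import torch.nn.functional as F

    def cross_subject_contrastive_loss(features, labels, subject_ids, temperature=0.1):
        """features:    (N, D) spatiotemporal embeddings from a representation network
        labels:      (N,)   comfort/discomfort labels
        subject_ids: (N,)   subject index of each EEG trial
        """
        z = F.normalize(features, dim=1)              # unit-norm embeddings
        sim = z @ z.t() / temperature                 # pairwise cosine similarities
        n = z.size(0)

        eye = torch.eye(n, dtype=torch.bool, device=z.device)
        same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
        diff_subject = subject_ids.unsqueeze(0) != subject_ids.unsqueeze(1)

        # Positives: same label but a different subject, which pulls the
        # representations of different subjects toward a consistent space.
        pos_mask = same_class & diff_subject & ~eye

        # InfoNCE-style log-probabilities over all non-self pairs.
        logits = sim.masked_fill(eye, float("-inf"))
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

        pos_count = pos_mask.sum(dim=1).clamp(min=1)
        per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
        return per_anchor[pos_mask.any(dim=1)].mean()

    if __name__ == "__main__":
        # Toy usage: 8 trials from 4 subjects, binary labels, 16-dim features.
        feats = torch.randn(8, 16)
        labels = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])
        subjects = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
        print(cross_subject_contrastive_loss(feats, labels, subjects).item())

In this sketch, excluding same-subject pairs from the positive set is one simple way to bias the objective toward representations that are consistent across subjects rather than toward subject-specific cues.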
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2025.3572965