A Multi‐Model Approach for Attention Prediction in Gaming Environments for Autistic Children

Bibliographic Details
Published in: Computer Animation and Virtual Worlds, Vol. 36, No. 1
Main Authors: Valarmathi, P.; Packialatha, A.
Format: Journal Article
Language: English
Published: Hoboken, USA: John Wiley & Sons, Inc. (Wiley Subscription Services, Inc.), 01.01.2025

More Information
Summary: Autism spectrum disorder (ASD) is a neurological condition that affects an individual's mental development. This research work implements a multimodality input-based virtual reality (VR)-enabled attention prediction approach in gaming for children with autism. Initially, the multimodal inputs, namely face image, electroencephalogram (EEG) signal, and data, are individually processed by preprocessing and feature extraction procedures. Subsequently, a hybrid classification model combining an improved deep convolutional neural network (IDCNN) and a long short-term memory (LSTM) network is used for expression detection on the concatenated features obtained from the feature extraction procedure. Here, the conventional deep convolutional neural network (DCNN) approach is improved by novel block-knowledge-based processing together with a proposed sine-hinge loss function. Finally, an improved weighted mutual information process is employed for attention prediction. The proposed attention prediction model is evaluated through simulation and experimental analyses, and the results from these analyses demonstrate its effectiveness.

In more detail, the multimodal inputs (face image, EEG signal, and data) are first preprocessed with a median filter, a Wiener filter, and Min-Max normalization, respectively. The preprocessed image, signal, and data then undergo feature extraction: features based on eye fixation and the Active Appearance Model (AAM) are extracted from the preprocessed image; Improved Common Spatial Patterns (ICSP) and Stockwell transform-based features are extracted from the preprocessed signal; and features based on an improved Min-Max normalization technique are extracted from the preprocessed data. The preprocessing and feature extraction procedures handle each input independently. The resulting features are concatenated into a single feature set, which is applied as input to the expression detection procedure. Using this feature set, the hybrid classification model (the improved DCNN combined with an LSTM) detects the expressions of autistic children; the DCNN is improved through the novel block-knowledge-based processing and the proposed sine-hinge loss function. Finally, the improved weighted mutual information process is employed in the attention prediction procedure to obtain better attention prediction outcomes.
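As a rough illustration of the per-modality preprocessing described in the summary, the Python sketch below applies a median filter to the face image, a Wiener filter to the EEG signal, and Min-Max normalization to the remaining data. The function name, filter sizes, and the use of NumPy/SciPy are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import wiener

def preprocess_modalities(face_img, eeg_signal, aux_data):
    """Per-modality preprocessing sketch (name and parameters are illustrative)."""
    # Face image: median filter to suppress impulsive (salt-and-pepper) noise.
    img_pp = median_filter(face_img, size=3)

    # EEG signal: Wiener filter for adaptive noise reduction.
    eeg_pp = wiener(eeg_signal, mysize=5)

    # Remaining data: Min-Max normalization to the [0, 1] range.
    lo, hi = aux_data.min(), aux_data.max()
    data_pp = (aux_data - lo) / (hi - lo) if hi > lo else np.zeros_like(aux_data)

    return img_pp, eeg_pp, data_pp

# The modality-specific features (eye fixation / AAM, ICSP / Stockwell transform,
# improved Min-Max features) would then be extracted per modality and fused, e.g.:
# fused = np.concatenate([image_features, eeg_features, data_features])
```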
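The expression detection step could be sketched along the following lines, assuming a PyTorch implementation and a plain convolution-plus-LSTM stack over the fused feature vector. The layer sizes are placeholders, and the paper's specific contributions (block-knowledge-based processing, the sine-hinge loss, and the improved weighted mutual information step) are only summarized above and deliberately not reproduced here.

```python
import torch
import torch.nn as nn

class HybridExpressionNet(nn.Module):
    """Generic CNN + LSTM hybrid over a fused multimodal feature vector (illustrative only)."""

    def __init__(self, feat_dim: int, n_expressions: int):
        super().__init__()
        # 1-D convolution over the concatenated feature vector; a stand-in for the
        # paper's improved DCNN (block-knowledge-based processing not reproduced).
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, n_expressions)

    def forward(self, fused_features: torch.Tensor) -> torch.Tensor:
        x = fused_features.unsqueeze(1)   # (batch, 1, feat_dim)
        x = self.cnn(x)                   # (batch, 16, feat_dim // 2)
        x = x.transpose(1, 2)             # (batch, feat_dim // 2, 16)
        _, (h_n, _) = self.lstm(x)        # final hidden state: (1, batch, 32)
        return self.head(h_n.squeeze(0))  # expression logits

# Example usage with random data standing in for real fused features:
# model = HybridExpressionNet(feat_dim=128, n_expressions=6)
# logits = model(torch.randn(4, 128))
```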
Funding: The authors received no specific funding for this work.
ISSN: 1546-4261, 1546-427X
DOI: 10.1002/cav.70010