Cross-Modal Attention for Multimodal Information Fusion: A Novel Approach to Attention Deficit Hyperactivity Disorder Detection


Bibliographic Details
Published in 2024 27th International Conference on Information Fusion (FUSION), pp. 1 - 6
Main Authors Nash, Christian; Nair, Rajesh; Naqvi, Syed Mohsen
Format Conference Proceeding
Language English
Published ISIF 08.07.2024
Abstract This paper presents a novel method for differentiating Attention Deficit Hyperactivity Disorder (ADHD) subjects from control participants through multimodal data fusion of video observations and questionnaire responses. Exploiting the well-known Video Vision Transformer (ViViT) model, we analyse the video modality to capture the complex spatial-temporal information of ADHD symptoms. Simultaneously, a Multi-Layer Perceptron (MLP) model evaluates the structured questionnaire data, capturing key cognitive and emotional indicators of ADHD symptoms. To fuse the two modalities, a cross-modal attention mechanism assigns adaptive weights to each feature based on its relevance to classification. This targeted weighting significantly refines the proposed model's decision-making by concentrating on the most critical elements of the aggregated information. For training and testing, we use our novel Multimodal ADHD dataset, recorded under the Intelligent Sensing ADHD Trial in collaboration with the Cumbria, Northumberland, Tyne and Wear NHS Foundation Trust, UK. The proposed model, ADViQ-AL, achieves 98.18% classification accuracy, 97.83% sensitivity, and 98.53% specificity in classifying the ADHD and control groups.
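The cross-modal fusion summarised in the abstract can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical rendering of such a mechanism, assuming the MLP questionnaire embedding acts as the query and the ViViT video tokens act as keys and values; all module names, dimensions, and the two-class head are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    # Fuses a questionnaire embedding (query) with ViViT video tokens
    # (keys/values); the attention weights play the role of the adaptive,
    # relevance-based feature weighting described in the abstract.
    def __init__(self, video_dim=768, quest_dim=128, embed_dim=256, num_heads=4):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, embed_dim)   # project into a shared space
        self.quest_proj = nn.Linear(quest_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(embed_dim, 2)           # ADHD vs. control logits

    def forward(self, video_tokens, quest_vector):
        # video_tokens: (batch, num_tokens, video_dim), e.g. ViViT outputs
        # quest_vector: (batch, quest_dim), e.g. an MLP questionnaire embedding
        v = self.video_proj(video_tokens)                   # keys and values
        q = self.quest_proj(quest_vector).unsqueeze(1)      # a single query token
        fused, weights = self.attn(q, v, v)                 # adaptive weight per video token
        return self.classifier(fused.squeeze(1))            # logits for the two classes

# Example: a batch of 4 clips (196 video tokens each) plus questionnaire vectors.
logits = CrossModalAttention()(torch.randn(4, 196, 768), torch.randn(4, 128))

Attending from the questionnaire query over the video tokens is one natural reading of "adaptive weights based on classification relevance"; the reverse direction, or a symmetric co-attention, would be equally consistent with the abstract.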
Author Nash, Christian
Nair, Rajesh
Naqvi, Syed Mohsen
Author_xml – sequence: 1
  givenname: Christian
  surname: Nash
  fullname: Nash, Christian
  email: c.nash@newcastle.ac.uk
  organization: Newcastle University, Intelligent Sensing and Communications Research Group, UK
– sequence: 2
  givenname: Rajesh
  surname: Nair
  fullname: Nair, Rajesh
  email: rajesh.nair@cntw.nhs.uk
  organization: Cumbria, Northumberland, Tyne and Wear NHS Foundation Trust, Adult ADHD Services, UK
– sequence: 3
  givenname: Syed Mohsen
  surname: Naqvi
  fullname: Naqvi, Syed Mohsen
  email: mohsen.naqvi@newcastle.ac.uk
  organization: Newcastle University, Intelligent Sensing and Communications Research Group, UK
ContentType Conference Proceeding
DOI 10.23919/FUSION59988.2024.10706381
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan All Online (POP All Online) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP All) 1998-Present
EISBN 9781737749769
1737749769
EndPage 6
ExternalDocumentID 10706381
Genre orig-research
Language English
PageCount 6
PublicationDate 2024-July-8
PublicationTitle 2024 27th International Conference on Information Fusion (FUSION)
PublicationTitleAbbrev FUSION
PublicationYear 2024
Publisher ISIF
StartPage 1
SubjectTerms Accuracy
Adaptation models
Analytical models
Attention Deficit Hyperactivity Disorder
Attention mechanisms
Data mining
Data models
Deep Learning
Feature extraction
Machine Learning
Mental Health
Multimodal
Training
Transformers
Visualization
Title Cross-Modal Attention for Multimodal Information Fusion: A Novel Approach to Attention Deficit Hyperactivity Disorder Detection
URI https://ieeexplore.ieee.org/document/10706381