Capturing spatio-temporal patterns of falls individuals using efficient graph convolutional network model

Bibliographic Details
Published in: Applied Intelligence (Dordrecht, Netherlands), Vol. 55, No. 11, p. 825
Main Authors: Guendoul, Oumaima; Zobi, Maryem; Ait Abdelali, Hamd; Tabii, Youness; Oulad Haj Thami, Rachid; Bourja, Omar
Format: Journal Article
Language: English
Published: New York: Springer US, 01.07.2025 (Springer Nature B.V.)

Summary: Falls are a major worldwide health concern, and the ability to detect and prevent them has significant implications for people's safety and well-being. This paper uses an Efficient Graph Convolutional Network (Efficient-GCN) model to extract discriminative features of fall actions. The proposed model is designed to handle the complex and dynamic nature of human movement during a fall event. The main challenges in fall detection are capturing the spatio-temporal information produced by a fall and the insufficient size of available training data. To address these problems, we propose a protocol for collecting a fall dataset: a Kinect camera records skeleton data, which is then processed with the Efficient-GCN algorithm to identify individual fall patterns. We present a comparative study of three skeleton-based fall-detection methods, Efficient-GCN, Support Vector Machine (SVM), and k-nearest neighbors (KNN), together with a deep convolutional neural network (DCNN) applied to depth data. For a broader view, we compare our results with a public dataset across three baseline variants, denoted Bx, where "x" is the scaling coefficient. Efficient-GCN-B2 performs best on our collected dataset, achieving 98.50% accuracy on the cross-subject benchmark with the skeleton representation, while the DCNN attains 97% on depth data.
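The full Efficient-GCN pipeline is not reproduced in this record, but the core operation it builds on, aggregating each skeleton joint's features over the joint-connectivity graph, can be sketched in a few lines. The `graph_conv` helper and the 3-joint toy skeleton below are illustrative assumptions, not code from the paper:

```python
import numpy as np

def graph_conv(x, adj, w):
    """One spatial graph-convolution step: average each joint's features
    with its neighbours (plus a self-loop), then project with w."""
    a_hat = adj + np.eye(adj.shape[0])   # add self-loops
    a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)  # row-normalize
    return a_norm @ x @ w                # shape: (num_joints, out_dim)

# Toy 3-joint "skeleton": joint 0 (torso) connected to joints 1 and 2.
adj = np.array([[0., 1., 1.],
                [1., 0., 0.],
                [1., 0., 0.]])
x = np.random.default_rng(0).normal(size=(3, 3))  # 3-D coordinates per joint
w = np.eye(3)                                     # identity projection, for illustration
out = graph_conv(x, adj, w)
print(out.shape)  # (3, 3)
```

A full skeleton-based model such as Efficient-GCN stacks layers like this over the 25-joint Kinect skeleton and adds temporal convolutions across frames to capture motion dynamics.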
ISSN: 0924-669X
1573-7497
DOI: 10.1007/s10489-025-06316-5