Visualising and quantifying relevant parkinsonian gait patterns using 3D convolutional network

Bibliographic Details
Published in: Journal of Biomedical Informatics, Vol. 123, p. 103935
Main Authors: Guayacán, Luis C.; Martínez, Fabio
Format: Journal Article
Language: English
Published: United States, Elsevier Inc., 01.11.2021
ISSN: 1532-0464, 1532-0480
DOI: 10.1016/j.jbi.2021.103935

More Information
Summary:
• A markerless strategy to support the diagnosis of Parkinson’s disease from gait videos.
• Spatiotemporal saliency maps that highlight parkinsonian patterns in gait videos.
• The model highlights spatiotemporal patterns of the legs during the single-support phase.
• The approach achieves 94.89% accuracy in Parkinson’s disease classification.

Parkinson’s disease (PD) lacks a definitive diagnosis; the observation of motion patterns is the main method of characterizing disease progression and planning patient treatments. Among PD observations, gait motion patterns, such as step length, flexed posture, and bradykinesia, support the characterization of disease progression. However, this analysis is usually performed with marker-based protocols, which affect the gait and localized segment patterns during locomotion. This work introduces a 3D convolutional gait representation for automatic PD classification that identifies the spatio-temporal patterns used for classification. This approach yields an explainable model that classifies markerless sequences and describes the main learned spatio-temporal regions associated with abnormal patterns in a particular video. Initially, a spatio-temporal convolutional network is trained on a set of raw videos and optical flow fields. Then, a PD prediction is obtained from the motion patterns learned by the trained model. Finally, saliency maps, which highlight abnormal motion patterns, are obtained by retro-propagating the output prediction back to the input volume through two stages: an embedded back-tracking and a pseudo-deconvolution process. Across a total of 176 videos from 22 patients, the resulting saliency maps highlight lower-limb patterns possibly related to step length and speed. In control subjects, the saliency maps highlight the head and trunk posture. The proposed approach achieved an average accuracy of 94.89%.
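The core idea behind the saliency maps described above (retro-propagating the class score back to the input video volume and reading the magnitude of the resulting gradient as a per-voxel relevance) can be illustrated with a minimal NumPy sketch. This is not the paper's architecture: the one-layer elementwise model, the input shape, and all weights below are hypothetical stand-ins for the trained 3D convolutional network, chosen so the gradient can be written analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input "video volume": (frames, height, width).
video = rng.standard_normal((8, 16, 16))

# Stand-in for a trained network: elementwise weights, ReLU, then a sum
# that plays the role of the scalar "PD" class score.
W = rng.standard_normal(video.shape)

def pd_score(volume):
    """Scalar class score of the toy model for one input volume."""
    return np.maximum(W * volume, 0.0).sum()

# Retro-propagate the score to the input. For this toy model,
# d score / d input is W wherever the ReLU is active, and 0 elsewhere.
grad = W * ((W * video) > 0.0)

# Saliency = |gradient|, normalized to [0, 1] for display as a
# per-frame heat map over the space-time voxels.
saliency = np.abs(grad)
saliency /= saliency.max()

print(saliency.shape)  # one relevance value per space-time voxel
```

In the real model the same gradient is computed layer by layer through the 3D convolutions; here the single analytic layer keeps the mechanics visible: inactive units contribute zero relevance, and large-magnitude gradients mark the input regions that most influence the prediction.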