Automated Quantification of Eye Tics Using Computer Vision and Deep Learning Techniques

Bibliographic Details
Published in: Movement Disorders, Vol. 39, No. 1, pp. 183-191
Main Authors: Conelea, Christine; Liang, Hengyue; DuBois, Megan; Raab, Brittany; Kellman, Mia; Wellen, Brianna; Jacob, Suma; Wang, Sonya; Sun, Ju; Lim, Kelvin
Format: Journal Article
Language: English
Published: Hoboken, USA: John Wiley & Sons, Inc. (Wiley Subscription Services, Inc.), 01.01.2024

Summary:
Background: Tourette syndrome (TS) tics are typically quantified using "paper and pencil" rating scales that are susceptible to factors that adversely impact validity. Video-based methods to more objectively quantify tics have been developed but are challenged by reliance on human raters and procedures that are resource intensive. Computer vision approaches that automate detection of atypical movements may be useful to apply to tic quantification.
Objective: The current proof-of-concept study applied a computer vision approach to train a supervised deep learning algorithm to detect eye tics in video, the most common tic type in patients with TS.
Methods: Videos (N = 54) of 11 adolescent patients with TS were rigorously coded by trained human raters to identify 1.5-second clips depicting "eye tic events" (N = 1775) and "non-tic events" (N = 3680). Clips were encoded into three-dimensional facial landmarks. Supervised deep learning was applied to the processed data using random split and disjoint split regimens to simulate model validity under different conditions.
Results: Area under the receiver operating characteristic curve (AUROC) was 0.89 for the random split regimen, indicating high accuracy in the algorithm's ability to properly classify eye tic vs. non-eye tic movements. AUROC was 0.74 for the disjoint split regimen, suggesting that algorithm generalizability is more limited when trained on a small patient sample.
Conclusions: The algorithm was successful in detecting eye tics in unseen validation sets. Automated tic detection from video is a promising approach for tic quantification that may have future utility in TS screening, diagnostics, and treatment outcome measurement.
© 2023 The Authors. Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society.
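The two evaluation regimens named in the summary differ in whether clips from one patient may appear on both sides of the train/test boundary: a random split mixes patients (the optimistic AUROC of 0.89), while a disjoint split holds out whole patients (the more realistic 0.74). The sketch below illustrates that distinction with plain Python; it is not the authors' code, and the sample structure (a `patient` key per clip) and function names are assumptions for illustration. The AUROC helper uses the standard rank-sum (Mann-Whitney U) identity.

```python
import random
from collections import defaultdict

def random_split(samples, test_frac=0.2, seed=0):
    """Clip-level random split: clips from the same patient can land in
    both train and test, which tends to give optimistic estimates."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]

def disjoint_split(samples, test_frac=0.2, seed=0):
    """Patient-level (disjoint) split: every clip from a held-out patient
    goes to test, simulating generalization to unseen patients."""
    rng = random.Random(seed)
    by_patient = defaultdict(list)
    for s in samples:
        by_patient[s["patient"]].append(s)
    patients = sorted(by_patient)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    held_out = set(patients[:n_test])
    train = [s for p in patients if p not in held_out for s in by_patient[p]]
    test = [s for p in held_out for s in by_patient[p]]
    return train, test

def auroc(labels, scores):
    """AUROC via the probability that a random positive outscores a
    random negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Reporting both regimens, as the study does, brackets expected performance: the random split bounds within-patient accuracy, the disjoint split estimates deployment on new patients.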
Bibliography: Full financial disclosures and author roles may be found in the online version of this article.
Funding agencies: This work was supported by the National Institute of Mental Health (R61 MH123754 and T32 DA037183), the Minnesota Robotics Institute, and the National Science Foundation (1919631).
Relevant conflicts of interest/financial disclosures: The authors report no disclosures related to the research in this article.
ISSN: 0885-3185, 1531-8257
DOI: 10.1002/mds.29593