Towards Automatic Object Detection and Activity Recognition in Indoor Climbing
Published in | Sensors (Basel, Switzerland), Vol. 24, No. 19, p. 6479
Main Authors | , , , , |
Format | Journal Article |
Language | English |
Published | Switzerland: MDPI AG, 08.10.2024
Summary: | Rock climbing has grown from a niche sport into a mainstream leisure activity and an Olympic discipline. Moreover, climbing can be studied as an example of a high-stakes perception-action task. However, understanding what constitutes an expert climber is neither simple nor straightforward. As a dynamic and high-risk activity, climbing requires a precise interplay between cognition, perception, and action execution. While prior research has predominantly focused on the movement aspect of climbing (i.e., skeletal posture and individual limb movements), recent studies have also examined the climber's visual attention and its links to performance. Associating the climber's attention with their actions, however, has traditionally required frame-by-frame manual coding of the recorded eye-tracking videos. To overcome this challenge and automatically contextualize the analysis of eye movements in indoor climbing, we present deep-learning-driven (YOLOv5) hold detection that facilitates automatic grasp recognition. To demonstrate the framework, we examined an expert climber's eye movements and egocentric perspective acquired with eye-tracking glasses (SMI and Tobii Glasses 2). Using the framework, we observed that the expert climber's grasping duration was positively correlated with total fixation duration (r = 0.807) and fixation count (r = 0.864), but negatively correlated with fixation rate (r = -0.402) and saccade rate (r = -0.344). These findings point to moments of cognitive processing and visual search during decision making and route prospecting. Our work contributes to research on eye-body performance and coordination in high-stakes contexts, informs sport science, and expands applications such as training optimization, injury prevention, and coaching.
ISSN: | 1424-8220
DOI: | 10.3390/s24196479
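
The summary describes YOLOv5-driven hold detection on the climber's egocentric eye-tracking video. The record itself contains no code, so the following is only a minimal sketch of how such inference might look through the public ultralytics/yolov5 Torch Hub interface; the weight file holds.pt, the video filename, and the confidence threshold are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code): run a custom-trained YOLOv5 hold
# detector over frames of an egocentric eye-tracking scene video.
import cv2
import torch

# Hypothetical custom weights trained on climbing-hold images.
model = torch.hub.load("ultralytics/yolov5", "custom", path="holds.pt")
model.conf = 0.4  # assumed confidence threshold

cap = cv2.VideoCapture("egocentric_scene.mp4")  # hypothetical scene-camera video
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5's AutoShape interface expects RGB; OpenCV decodes frames as BGR.
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # Each detection row: x1, y1, x2, y2, confidence, class index.
    for *box, conf, cls in results.xyxy[0].tolist():
        print(frame_idx, [round(v, 1) for v in box], round(conf, 2))
    frame_idx += 1
cap.release()
```

Relating such per-frame hold boxes to the gaze coordinates exported by the eye-tracking glasses is one way to obtain the kind of automatic contextualization the summary refers to.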
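The reported relationships between grasping duration and gaze metrics can be illustrated with a standard Pearson correlation, assuming each grasp event has already been paired with the fixations and saccades recorded while the hold was held. The field names and numbers below are hypothetical placeholders, not data from the study.

```python
# Minimal sketch (hypothetical data): correlate per-grasp duration with gaze metrics.
from scipy.stats import pearsonr

# One record per grasp event; values are illustrative placeholders only.
grasps = [
    {"grasp_s": 1.2, "total_fix_s": 0.9, "fix_count": 4, "fix_rate": 3.3, "sac_rate": 2.6},
    {"grasp_s": 2.8, "total_fix_s": 2.1, "fix_count": 9, "fix_rate": 3.2, "sac_rate": 2.1},
    {"grasp_s": 0.7, "total_fix_s": 0.4, "fix_count": 2, "fix_rate": 2.9, "sac_rate": 3.0},
    {"grasp_s": 3.5, "total_fix_s": 2.9, "fix_count": 12, "fix_rate": 3.4, "sac_rate": 1.8},
]

durations = [g["grasp_s"] for g in grasps]
for metric in ("total_fix_s", "fix_count", "fix_rate", "sac_rate"):
    r, p = pearsonr(durations, [g[metric] for g in grasps])
    print(f"grasp duration vs {metric}: r = {r:.3f}, p = {p:.3f}")
```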