Continuous Semi-autonomous Prosthesis Control Using a Depth Sensor on the Hand

Bibliographic Details
Published in: Frontiers in Neurorobotics, Vol. 16, p. 814973
Main Authors: Castro, Miguel Nobre; Dosen, Strahinja
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation / Frontiers Media S.A., 25.03.2022

More Information
Summary: Modern myoelectric prostheses can perform multiple functions (e.g., several grasp types and wrist rotation), but their intuitive control by the user is still an open challenge. It has recently been demonstrated that semi-autonomous control can allow subjects to operate complex prostheses effectively; however, this approach often requires placing sensors on the user. The present study proposes a system for semi-autonomous control of a myoelectric prosthesis that requires only a single depth sensor placed on the dorsal side of the hand. The system automatically pre-shapes the hand (grasp type, size, and wrist rotation) and allows the user to grasp objects of different shapes, sizes, and orientations, placed individually or within cluttered scenes. The system “reacts” to the side from which the object is approached and enables the user to target not only the whole object but also an object part. Another unique aspect of the system is that it relies on online interaction between the user and the prosthesis: the system reacts continuously to the targets in its focus, while the user interprets the movement of the prosthesis to adjust aiming. An experimental assessment was conducted with ten able-bodied participants to evaluate the feasibility of the approach and the impact of training on prosthesis-user interaction. The subjects used the system to grasp a set of objects individually (Phase I) and in cluttered scenarios (Phase II), with the time to accomplish the task (TAT) used as the performance metric. In both phases, the TAT improved significantly across blocks. Some targets (objects and/or their parts) were more challenging and thus required significantly more time to handle, but all objects and scenes were successfully handled by all subjects. The assessment therefore demonstrated that the system is robust and effective, and that the subjects could learn how to aim with the system after a brief training. This is an important step toward the development of a self-contained semi-autonomous system convenient for clinical applications.
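To make the interaction described above concrete, the following is a minimal sketch of what such a continuous pre-shaping loop could look like. It is not the published implementation: the sensor, prosthesis, and EMG interfaces, as well as the simple width-to-aperture and orientation-to-wrist mappings, are hypothetical stand-ins for the processing pipeline summarized in the abstract.

```python
# Minimal sketch (not the authors' implementation) of a continuous
# pre-shaping loop: the hand keeps re-shaping to whatever target the
# user is aiming at, and the user closes it with a myoelectric trigger.
import time
from dataclasses import dataclass


@dataclass
class TargetEstimate:
    width_m: float        # estimated width of the targeted object/part (m)
    long_axis_deg: float  # in-plane orientation of the target's long axis
    is_thin: bool         # crude shape cue used to select the grasp type


def segment_target(depth_frame):
    """Placeholder: summarize the surface currently in front of the
    hand-mounted depth camera as a TargetEstimate. A real pipeline
    would segment the point cloud and fit the target here."""
    if depth_frame is None:
        return None
    return TargetEstimate(width_m=0.06, long_axis_deg=30.0, is_thin=False)


def choose_preshape(target):
    """Map the target estimate to grasp type, aperture, and wrist rotation."""
    grasp = "lateral" if target.is_thin else "palmar"
    aperture_m = min(target.width_m + 0.02, 0.10)  # add clearance, clamp to hand limit
    wrist_deg = target.long_axis_deg               # align hand with the long axis
    return grasp, aperture_m, wrist_deg


def control_loop(sensor, prosthesis, emg, rate_hz=30.0):
    """Continuously pre-shape the hand until the user triggers closing."""
    while not emg.close_requested():
        target = segment_target(sensor.read_depth_frame())
        if target is not None:
            grasp, aperture_m, wrist_deg = choose_preshape(target)
            prosthesis.set_grasp_type(grasp)
            prosthesis.set_aperture(aperture_m)
            prosthesis.set_wrist_rotation(wrist_deg)
        time.sleep(1.0 / rate_hz)  # fast updates keep the interaction continuous
    prosthesis.close_hand()  # user takes over for the final grasp
```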
Reviewed by: Enzo Mastinu, Chalmers University of Technology, Sweden; Toshihiro Kawase, Tokyo Medical and Dental University, Japan
Edited by: Feihu Zhang, Northwestern Polytechnical University, China
ISSN: 1662-5218
DOI: 10.3389/fnbot.2022.814973