Adaptive User Interface With Parallel Neural Networks for Robot Teleoperation

Bibliographic Details
Published in: IEEE Robotics and Automation Letters, Vol. 10, No. 2, pp. 963-970
Main Authors: SharafianArdakani, Payman; Hanafy, Mohamed A.; Kondaurova, Irina; Ashary, Ali; Rayguru, Madan M.; Popa, Dan O.
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2025

More Information
Summary: In recent years, human-robot interaction (HRI) has become an increasingly important field of research. The human experience during HRI tasks like teleoperation or turn-taking largely depends on the interface design between the robot and the user. Designing an intuitive user interface (UI) between an arbitrary M-dimensional input device and an N-degrees-of-freedom (DOF) robot remains a significant challenge. This paper proposes a novel UI design approach named the Parallel Neural Networks Adaptive User Interface (PNNUI). PNNUI uses two parallel neural networks to learn, and then improve, users' teleoperation performance by minimizing task completion time and maximizing motion smoothness. First, the method learns an unintuitive input-output map between the user interface hardware and the robot by minimizing task completion time in an offline unsupervised learning scheme based on neural networks (NNs) and genetic algorithms. Second, PNNUI minimizes teleoperation jerk online by adapting the weights of a parallel neural network. We experimentally evaluated the resulting UI by teleoperating a 3-DOF nonholonomic robot through a conventional joystick with three inputs. Twenty human subjects operated the robot along an obstacle course under several conditions. Statistical analysis of the user trial data shows that PNNUI improves the human experience in robot teleoperation by maximizing smoothness while maintaining the completion time of the offline learning scheme. Furthermore, the abstract nature of our formulation enables the customization of performance measures, which extends its applicability to other interface devices and HRI tasks, particularly those that are not intuitive to start with.
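The two-branch structure described in the summary, a fixed offline-learned input-output map combined with an online branch adapted to reduce jerk, can be sketched as follows. This is a minimal illustration under assumed simplifications (linear maps standing in for the paper's neural networks, a finite-difference jerk proxy, and made-up gains and inputs), not the paper's actual PNNUI implementation.

```python
import numpy as np

# Stand-in for the offline branch: a fixed 3-input -> 3-output map.
# In the paper this map is a neural network trained offline with a
# genetic algorithm to minimize task completion time; here we just
# assume a hand-picked linear matrix.
W_offline = np.array([[1.0, 0.2, 0.0],
                      [0.0, 1.0, 0.1],
                      [0.1, 0.0, 1.0]])

# Online branch: a small linear correction whose weights are adapted
# during operation to reduce a jerk proxy on the robot commands.
W_online = np.zeros((3, 3))
lr = 1e-3  # assumed adaptation gain

def command(u, W_on):
    """Robot command = offline map output + online correction."""
    return W_offline @ u + W_on @ u

def jerk_proxy(history):
    """Squared third finite difference of the last four commands."""
    y3, y2, y1, y0 = history[-4:]
    j = y0 - 3 * y1 + 3 * y2 - y3
    return j @ j, j

history = []
for t in range(200):
    # Assumed smooth joystick-like input signal (3 axes).
    u = np.array([np.sin(0.05 * t), np.cos(0.07 * t), 0.5])
    y = command(u, W_online)
    history.append(y)
    if len(history) >= 4:
        cost, j = jerk_proxy(history)
        # Gradient step on the jerk proxy w.r.t. W_online; only the
        # newest command depends on the online weights, so
        # d(cost)/dW_online = 2 * outer(j, u).
        W_online -= lr * 2.0 * np.outer(j, u)
```

The key design point mirrored here is that the offline branch is frozen (preserving the learned completion-time behavior) while only the parallel branch adapts online, so smoothness can improve without discarding the offline training.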
ISSN: 2377-3766
DOI: 10.1109/LRA.2024.3518085