Online Human Capability Estimation Through Reinforcement Learning and Interaction


Bibliographic Details
Published in: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7984-7991
Main Authors: Sun, Chengke; Cohn, Anthony G.; Leonetti, Matteo
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2023

Summary: Service robots are expected to assist users in a constantly growing range of environments and tasks. People may be unique in many ways, and online adaptation of robots is central to personalized assistance. We focus on collaborative tasks in which the human collaborator may not be fully able-bodied, with the aim for the robot to automatically determine the best level of support. We propose a methodology for online adaptation based on Reinforcement Learning and Bayesian inference. As the Reinforcement Learning process continuously adjusts the robot's behavior, the actions that become part of the improved policy are used by the Bayesian inference module as local evidence of human capability, which can be generalized across the state space. The estimated capabilities are then used as pre-conditions to collaborative actions, so that the robot can quickly disable actions that the person seems unable to perform. We demonstrate and validate our approach on two simulated tasks and one real-world collaborative task across a range of motion and sensing capabilities.
ISSN: 2153-0866
DOI: 10.1109/IROS55552.2023.10341868
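The core idea in the summary — maintaining a Bayesian estimate of a human capability and using it as a precondition that disables collaborative actions the person seems unable to perform — can be illustrated with a minimal sketch. This is not the paper's implementation: the Beta-Bernoulli model, the threshold value, and all names below are illustrative assumptions, standing in for whatever inference model and evidence the authors actually use.

```python
from dataclasses import dataclass


@dataclass
class CapabilityEstimate:
    """Illustrative Beta-Bernoulli posterior over whether the human
    collaborator can perform a given action (hypothetical model, not
    the paper's)."""
    alpha: float = 1.0  # pseudo-count of observed successes (uniform prior)
    beta: float = 1.0   # pseudo-count of observed failures

    def update(self, success: bool) -> None:
        # Each interaction outcome is treated as local evidence
        # of the person's capability.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self) -> float:
        # Posterior mean probability that the person can perform the action.
        return self.alpha / (self.alpha + self.beta)


def action_enabled(estimate: CapabilityEstimate, threshold: float = 0.2) -> bool:
    """Precondition check: the robot disables a collaborative action
    once the estimated capability drops below a threshold (assumed value)."""
    return estimate.mean() >= threshold


# Example: repeated observed failures drive the estimate down until
# the collaborative action is disabled.
est = CapabilityEstimate()
for outcome in [False, False, False, False, False]:
    est.update(outcome)
# est.mean() is now 1/7, below the 0.2 threshold, so the action is gated off.
```

In a full system, such an estimate would be updated from the evolving RL policy rather than from raw outcomes alone, and generalized across the state space as the abstract describes; the sketch only shows the gating mechanism.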