Human Intention Inference Using Expectation-Maximization Algorithm With Online Model Learning
| Field | Value |
|---|---|
| Published in | IEEE Transactions on Automation Science and Engineering, Vol. 14, no. 2, pp. 855-868 |
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | IEEE, 01.04.2017 |
| Subjects | |
| Summary | An algorithm called adaptive-neural-intention estimator (ANIE) is presented to infer the intent of a human operator's arm movements from the observations of a 3-D camera sensor (Microsoft Kinect). Intentions are modeled as the goal locations of reaching motions in 3-D space. The human arm's nonlinear motion dynamics are modeled as an unknown nonlinear function with intentions represented as parameters, and the unknown model is learned using a neural network. Based on the learned model, an approximate expectation-maximization algorithm is developed to infer human intentions. Furthermore, an identifier-based online model learning algorithm is developed to adapt to variations in the arm motion dynamics, the motion trajectory, the goal locations, and the initial conditions of different human subjects. Results of experiments conducted on data obtained from different users performing a variety of reaching motions are presented. The ANIE algorithm is compared with an unsupervised Gaussian mixture model algorithm and a Euclidean distance-based approach using Cornell's CAD-120 data set and data collected in the Robotics and Controls Laboratory at UConn. The ANIE algorithm is also compared with the inverse LQR and ATCRF algorithms on a labeling task carried out on the CAD-120 data set. |
| ISSN | 1545-5955; 1558-3783 |
| DOI | 10.1109/TASE.2016.2624279 |
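
The summary above describes inferring a reaching goal by checking how well each candidate goal location explains the observed arm motion under a learned dynamics model. The sketch below is a minimal, hypothetical Python illustration of that idea only: it assumes a discrete set of candidate goals, substitutes a simple attractor for the paper's learned neural-network dynamics, and scores goals with Gaussian one-step prediction errors. Names such as `dynamics_model` and `goal_posterior` are illustrative and not taken from the paper.

```python
# Hypothetical sketch of goal (intention) inference over candidate goal locations.
# This is NOT the paper's ANIE algorithm; it only illustrates scoring each
# candidate intention by how well it explains the observed hand trajectory.
import numpy as np

def dynamics_model(x, goal, gain=0.2):
    """Stand-in for a learned one-step dynamics model x_{k+1} = f(x_k; goal).
    Here: a simple attractor that pulls the hand toward the goal."""
    return x + gain * (goal - x)

def goal_posterior(trajectory, candidate_goals, noise_std=0.02):
    """Posterior over candidate goals given an observed 3-D hand trajectory,
    assuming a uniform prior and Gaussian one-step prediction errors."""
    log_post = np.zeros(len(candidate_goals))
    for g_idx, goal in enumerate(candidate_goals):
        for x_k, x_next in zip(trajectory[:-1], trajectory[1:]):
            pred = dynamics_model(x_k, goal)
            err = np.sum((x_next - pred) ** 2)
            log_post[g_idx] += -err / (2.0 * noise_std ** 2)
    log_post -= log_post.max()          # numerical stability before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

if __name__ == "__main__":
    goals = np.array([[0.5, 0.2, 0.9],  # candidate reach targets (meters)
                      [0.1, 0.6, 0.8],
                      [0.4, 0.5, 1.1]])
    # Synthetic observed trajectory moving toward the first goal, with noise.
    x = np.array([0.0, 0.0, 0.7])
    traj = [x.copy()]
    for _ in range(15):
        x = dynamics_model(x, goals[0]) + np.random.normal(0.0, 0.01, 3)
        traj.append(x.copy())
    print("posterior over goals:", goal_posterior(np.array(traj), goals))
```

In the paper, by contrast, the intention is treated as a parameter estimated with an approximate expectation-maximization step, and the dynamics model itself is adapted online to different subjects rather than fixed as in this sketch.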