Global Planning for Contact-Rich Manipulation via Local Smoothing of Quasi-Dynamic Contact Models

Bibliographic Details
Published in: IEEE Transactions on Robotics, Vol. 39, No. 6, pp. 4691-4711
Main Authors: Pang, Tao; Suh, H. J. Terry; Yang, Lujie; Tedrake, Russ
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2023

Summary: The empirical success of reinforcement learning (RL) in contact-rich manipulation leaves much to be understood from a model-based perspective, where the key difficulties are often attributed to 1) the explosion of contact modes, 2) stiff, nonsmooth contact dynamics and the resulting exploding/discontinuous gradients, and 3) the nonconvexity of the planning problem. The stochastic nature of RL addresses 1) and 2) by effectively sampling and averaging over contact modes; model-based methods, in contrast, have tackled the same challenges by smoothing contact dynamics analytically. Our first contribution is to establish the theoretical equivalence of the two smoothing schemes for simple systems and to demonstrate their qualitative and empirical equivalence on several complex examples. To further alleviate 2), our second contribution is a convex, differentiable, quasi-dynamic formulation of contact dynamics that is amenable to both smoothing schemes and has proven highly effective for contact-rich planning. Our final contribution resolves 3): we show that classical sampling-based motion planning algorithms can be effective for global planning when contact modes are abstracted via smoothing. Applying our method to several challenging contact-rich manipulation tasks, we demonstrate that efficient model-based motion planning can achieve results comparable to RL with dramatically less computation.
ISSN: 1552-3098, 1941-0468
DOI: 10.1109/TRO.2023.3300230
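
As an illustration of the smoothing equivalence described in the summary, the following Python sketch compares the two schemes on a scalar toy model. It is not from the paper: the ReLU-style contact force, the stiffness value, and the softplus relaxation are illustrative assumptions chosen only to show how Monte Carlo averaging over injected noise (the RL-style scheme) and an analytic relaxation (the model-based scheme) both yield a smooth surrogate of a nonsmooth contact force.

    import numpy as np

    def contact_force(x):
        # Nonsmooth contact force: zero until penetration (x > 0), then stiff and linear.
        k = 100.0  # illustrative contact stiffness
        return k * np.maximum(x, 0.0)

    def randomized_smoothing(x, sigma=0.01, n_samples=10_000, seed=0):
        # RL-style smoothing: Monte Carlo average of the force over Gaussian noise on x.
        w = np.random.default_rng(seed).normal(0.0, sigma, size=n_samples)
        return float(np.mean(contact_force(x + w)))

    def analytic_smoothing(x, sigma=0.01):
        # Model-based smoothing: replace max(x, 0) with a softplus of matching length scale.
        k = 100.0
        return k * sigma * np.logaddexp(0.0, x / sigma)  # sigma * log(1 + exp(x / sigma))

    for x in (-0.02, 0.0, 0.02):
        print(f"x={x:+.2f}  exact={contact_force(x):7.3f}  "
              f"randomized={randomized_smoothing(x):7.3f}  analytic={analytic_smoothing(x):7.3f}")

Unlike the exact model, both surrogates assign a small, differentiable force (and hence an informative gradient) shortly before contact is made, which is the common mechanism behind the two smoothing schemes the summary refers to; the paper applies this idea to a convex quasi-dynamic contact model rather than to a scalar toy.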