Limits of Probabilistic Safety Guarantees when Considering Human Uncertainty
Format: Journal Article
Language: English
Published: 04.03.2021
Summary: When autonomous robots interact with humans, such as during autonomous driving, explicit safety guarantees are crucial in order to avoid potentially life-threatening accidents. Many data-driven methods have explored learning probabilistic bounds over human agents' trajectories (i.e. confidence tubes that contain trajectories with probability $\delta$), which can then be used to guarantee safety with probability $1-\delta$. However, almost all existing works consider $\delta \geq 0.001$. The purpose of this paper is to argue that (1) in safety-critical applications, it is necessary to provide safety guarantees with $\delta < 10^{-8}$, and (2) current learning-based methods are ill-equipped to compute accurate confidence bounds at such low $\delta$. Using human driving data (from the highD dataset), as well as synthetically generated data, we show that current uncertainty models use inaccurate distributional assumptions to describe human behavior and/or require infeasible amounts of data to accurately learn confidence bounds for $\delta \leq 10^{-8}$. These two issues result in unreliable confidence bounds, which can have dangerous implications if deployed on safety-critical systems.
DOI: 10.48550/arxiv.2103.03388
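
As a rough, back-of-the-envelope illustration of the two claims in the summary (not code from the paper), the Python sketch below checks (1) how many i.i.d. samples a distribution-free bound needs to certify a miss probability of $\delta = 10^{-8}$, and (2) how badly a Gaussian fit can underestimate an extreme quantile of heavy-tailed data. The Student-t surrogate for human behavior and all parameter choices are assumptions made here for illustration only.

```python
# Illustrative sketch only: the Student-t surrogate for "human behavior" and all
# parameters below are assumptions for illustration, not taken from the paper.
import numpy as np
from scipy import stats

delta = 1e-8

# (1) Distribution-free (order-statistic style) bound: for continuous i.i.d. data,
# the probability that a fresh sample exceeds the empirical maximum of n samples
# is 1/(n+1), so certifying miss probability delta needs n >= 1/delta - 1 samples.
n_required = int(np.ceil(1.0 / delta - 1))
print(f"samples needed for a distribution-free bound at delta = {delta:g}: {n_required:,}")
# -> on the order of 1e8 trajectories, far beyond typical driving datasets
#    (highD contains on the order of 1e5 vehicle tracks).

# (2) Distributional-assumption error: fit a Gaussian to heavy-tailed samples
# (Student-t with 3 degrees of freedom) and compare the implied 1-delta bound
# against the true 1-delta quantile of the generating distribution.
samples = stats.t.rvs(df=3, size=100_000, random_state=0)
mu, sigma = samples.mean(), samples.std()

gaussian_bound = stats.norm.ppf(1 - delta, loc=mu, scale=sigma)
true_quantile = stats.t.ppf(1 - delta, df=3)

print(f"Gaussian-fit 1-delta bound: {gaussian_bound:8.1f}")
print(f"true 1-delta quantile:      {true_quantile:8.1f}")
# The Gaussian fit underestimates the extreme quantile by more than an order of
# magnitude, so a "1 - delta" guarantee built on it would not actually hold.
```

Both checks mirror the summary's argument: nonparametric methods need infeasible amounts of data at $\delta \leq 10^{-8}$, and parametric shortcuts rest on distributional assumptions that fail precisely in the tail the guarantee depends on.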