Solving Motion Planning Tasks with a Scalable Generative Model
Format: Journal Article
Language: English
Published: 02.07.2024
Summary: As autonomous driving systems are deployed to millions of vehicles, there is a pressing need to improve their scalability and safety while reducing engineering cost. A realistic, scalable, and practical simulator of the driving world is highly desired. In this paper, we present an efficient solution based on a generative model that learns the dynamics of driving scenes. With this model, we can not only simulate the diverse futures of a given driving scenario but also generate a variety of driving scenarios conditioned on various prompts. Our innovative design allows the model to operate in both full-autoregressive and partial-autoregressive modes, significantly improving inference and training speed without sacrificing generative capability. This efficiency makes it well suited to serve as an online reactive environment for reinforcement learning, an evaluator for planning policies, and a high-fidelity simulator for testing. We evaluated our model on two real-world datasets: the Waymo motion dataset and the nuPlan dataset. On the simulation realism and scene generation benchmarks, our model achieves state-of-the-art performance, and on the planning benchmarks our planner outperforms prior art. We conclude that the proposed generative model may serve as a foundation for a variety of motion planning tasks, including data generation, simulation, planning, and online training. Source code is public at https://github.com/HorizonRobotics/GUMP/
DOI: 10.48550/arxiv.2407.02797
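
The abstract's contrast between full-autoregressive and partial-autoregressive operation can be pictured with a minimal sketch. The Python snippet below is a hypothetical illustration only: the `model` callable, the greedy decoding, and the chunk size are assumptions for exposition, not the paper's actual interface. Full-autoregressive rollout spends one forward pass per new token, while a partial-autoregressive rollout emits a chunk of tokens per pass, which is where an inference-speed gain of the kind the abstract claims would come from.

```python
import torch

# Hypothetical sketch (not the GUMP API): `model` is assumed to map a token
# sequence of shape (batch, seq) to per-position next-token logits of shape
# (batch, seq, vocab).

def rollout_full_ar(model, tokens, num_new):
    """Full-autoregressive: one forward pass per generated token."""
    for _ in range(num_new):
        logits = model(tokens)                 # (batch, seq, vocab)
        next_tok = logits[:, -1:].argmax(-1)   # greedy pick at the last position
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens

def rollout_partial_ar(model, tokens, num_new, chunk=8):
    """Partial-autoregressive: decode `chunk` tokens per forward pass."""
    while num_new > 0:
        k = min(chunk, num_new)
        logits = model(tokens)                 # (batch, seq, vocab)
        next_chunk = logits[:, -k:].argmax(-1) # decode k positions in one pass
        tokens = torch.cat([tokens, next_chunk], dim=1)
        num_new -= k
    return tokens
```

In this simplified view, the partial-autoregressive path trades some sequential conditioning between tokens inside a chunk for roughly a `chunk`-fold reduction in forward passes, consistent with the abstract's claim of faster inference and training without sacrificing generative capability.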