Linking vision and motion for self-supervised object-centric perception

Bibliographic Details
Published in: arXiv.org
Main Authors: Stocking, Kaylene C.; Murez, Zak; Badrinarayanan, Vijay; Shotton, Jamie; Kendall, Alex; Tomlin, Claire; Burgess, Christopher P.
Format: Paper, Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 14.07.2023

More Information
Summary: Object-centric representations enable autonomous driving algorithms to reason about interactions between many independent agents and scene features. Traditionally these representations have been obtained via supervised learning, but this decouples perception from the downstream driving task and could harm generalization. In this work we adapt a self-supervised object-centric vision model to perform object decomposition using only RGB video and the pose of the vehicle as inputs. We demonstrate that our method obtains promising results on the Waymo Open perception dataset. While object mask quality lags behind supervised methods or alternatives that use more privileged information, we find that our model is capable of learning a representation that fuses multiple camera viewpoints over time and successfully tracks many vehicles and pedestrians in the dataset. Code for our model is available at https://github.com/wayveai/SOCS.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2307.07147
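For readers unfamiliar with object-centric decomposition, the sketch below illustrates the general idea with a generic slot-attention-style module that groups pose-augmented, multi-camera image features into a fixed number of object slots. This is a minimal, hypothetical illustration, not the authors' SOCS implementation (which is available at the repository linked above); all module names, shapes, and hyperparameters are assumptions.

    # Conceptual sketch only (not the SOCS codebase): a slot-attention-style
    # module that decomposes image features from several cameras and timesteps,
    # assumed to be already augmented with ego-pose information, into object slots.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SlotDecomposer(nn.Module):
        def __init__(self, feat_dim=64, num_slots=8, iters=3):
            super().__init__()
            self.num_slots = num_slots
            self.iters = iters
            self.slot_mu = nn.Parameter(torch.randn(1, 1, feat_dim))
            self.slot_sigma = nn.Parameter(torch.ones(1, 1, feat_dim))
            self.to_q = nn.Linear(feat_dim, feat_dim, bias=False)
            self.to_k = nn.Linear(feat_dim, feat_dim, bias=False)
            self.to_v = nn.Linear(feat_dim, feat_dim, bias=False)
            self.gru = nn.GRUCell(feat_dim, feat_dim)
            self.norm_in = nn.LayerNorm(feat_dim)
            self.norm_slots = nn.LayerNorm(feat_dim)
            self.scale = feat_dim ** -0.5

        def forward(self, feats):
            # feats: (batch, num_tokens, feat_dim) -- tokens from all cameras
            # and timesteps flattened into one sequence.
            b, n, d = feats.shape
            feats = self.norm_in(feats)
            k, v = self.to_k(feats), self.to_v(feats)
            # Initialise slots from a learned Gaussian.
            slots = self.slot_mu + self.slot_sigma * torch.randn(
                b, self.num_slots, d, device=feats.device)
            for _ in range(self.iters):
                q = self.to_q(self.norm_slots(slots))
                logits = torch.einsum("bsd,bnd->bsn", q, k) * self.scale
                # Slots compete for each input token (softmax over slots).
                masks = F.softmax(logits, dim=1)
                attn = masks / (masks.sum(dim=-1, keepdim=True) + 1e-8)
                updates = torch.einsum("bsn,bnd->bsd", attn, v)
                slots = self.gru(updates.reshape(-1, d),
                                 slots.reshape(-1, d)).reshape(b, self.num_slots, d)
            return slots, masks  # masks: soft per-token object assignments

    # Toy usage: 2 cameras x 4 timesteps x 196 patch tokens per frame.
    feats = torch.randn(1, 2 * 4 * 196, 64)
    slots, masks = SlotDecomposer()(feats)
    print(slots.shape, masks.shape)  # (1, 8, 64) (1, 8, 1568)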