Can-Do! A Dataset and Neuro-Symbolic Grounded Framework for Embodied Planning with Large Multimodal Models
Main Authors:
Format: Journal Article
Language: English
Published: 21.09.2024
Subjects:
Online Access: Get full text
Summary: Large multimodal models have demonstrated impressive problem-solving
abilities in vision and language tasks, and have the potential to encode
extensive world knowledge. However, it remains an open challenge for these
models to perceive, reason, plan, and act in realistic environments. In this
work, we introduce Can-Do, a benchmark dataset designed to evaluate embodied
planning abilities through more diverse and complex scenarios than previous
datasets. Our dataset includes 400 multimodal samples, each consisting of
natural language user instructions, visual images depicting the environment,
state changes, and corresponding action plans. The data encompasses diverse
aspects of commonsense knowledge, physical understanding, and safety awareness.
Our fine-grained analysis reveals that state-of-the-art models, including
GPT-4V, face bottlenecks in visual perception, comprehension, and reasoning
abilities. To address these challenges, we propose NeuroGround, a neuro-symbolic
framework that first grounds the plan generation in the perceived environment
states and then leverages symbolic planning engines to augment the
model-generated plans. Experimental results demonstrate the effectiveness of
our framework compared to strong baselines. Our code and dataset are available
at https://embodied-planning.github.io.
DOI: 10.48550/arxiv.2409.14277
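
The summary describes NeuroGround as a two-stage pipeline: the model first grounds itself in a perceived symbolic state of the environment, then a symbolic planning engine augments the model-generated plan. The abstract does not specify interfaces or the planning engine, so the following is only a minimal sketch of that flow; every name in it (EnvState, perceive_state, generate_plan, symbolic_augment) is a hypothetical placeholder, not an API from the paper or its code release.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "ground state -> generate plan -> symbolically
# augment" flow described in the summary. All names are illustrative
# placeholders, not the paper's actual implementation.

@dataclass
class EnvState:
    objects: list      # entities perceived in the scene
    relations: list    # symbolic facts, e.g. ("on", "cup", "table")

def perceive_state(image: str, instruction: str) -> EnvState:
    # Placeholder for prompting a multimodal model (e.g. GPT-4V) to emit a
    # symbolic scene description before any plan is generated.
    return EnvState(objects=["cup", "table"], relations=[("on", "cup", "table")])

def generate_plan(state: EnvState, instruction: str) -> list:
    # Placeholder for plan generation conditioned on the grounded state.
    return ["pick up the cup", "carry the cup to the sink"]

def symbolic_augment(state: EnvState, plan: list) -> list:
    # Placeholder for a symbolic planning engine (e.g. a PDDL-style solver)
    # that checks each step against the perceived state and drops or repairs
    # steps that reference objects the state does not contain.
    return [step for step in plan if any(obj in step for obj in state.objects)]

def neuroground(image: str, instruction: str) -> list:
    state = perceive_state(image, instruction)   # 1) ground the environment state
    plan = generate_plan(state, instruction)     # 2) state-conditioned generation
    return symbolic_augment(state, plan)         # 3) symbolic refinement

print(neuroground("scene.jpg", "put the cup in the sink"))
```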