Scene Understanding in Pick-and-Place Tasks: Analyzing Transformations Between Initial and Final Scenes
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 26.09.2024 |
Summary: With robots increasingly collaborating with humans in everyday tasks, it is important to take steps toward robotic systems capable of understanding their environment. This work focuses on scene understanding to detect pick-and-place tasks given initial and final images of a scene. To this end, a dataset is collected for object detection and pick-and-place task detection. A YOLOv5 network is then trained to detect the objects in the initial and final scenes. Given the detected objects and their bounding boxes, two methods are proposed to detect the pick-and-place tasks that transform the initial scene into the final scene. A geometric method tracks objects' movements between the two scenes and works on the intersection of the bounding boxes of objects that moved. In contrast, a CNN-based method uses a convolutional neural network to classify pairs of objects with intersecting bounding boxes into five classes describing the spatial relationship between the involved objects. The performed pick-and-place tasks are then derived by analyzing the experiments with both scenes. Results show that the CNN-based method, using a VGG16 backbone, outperforms the geometric method by roughly 12 percentage points in certain scenarios, with an overall success rate of 84.3%.
DOI: 10.48550/arxiv.2409.17720
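
To make the geometric idea in the summary concrete, the sketch below flags objects whose bounding boxes no longer overlap their initial positions, using intersection over union between the initial and final scenes. This is only a minimal illustration of the bounding-box-intersection approach, not the authors' exact procedure: the `iou` and `moved_objects` helpers, the one-box-per-label assumption, and the 0.5 threshold are illustrative choices rather than details from the paper.

```python
def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def moved_objects(initial, final, iou_threshold=0.5):
    """Return labels whose final-scene box barely overlaps the initial one.

    `initial` and `final` map object labels to [x1, y1, x2, y2] boxes,
    assuming one detected instance per label (a simplification for this sketch).
    """
    moved = []
    for label, box_init in initial.items():
        box_final = final.get(label)
        if box_final is not None and iou(box_init, box_final) < iou_threshold:
            moved.append(label)
    return moved


if __name__ == "__main__":
    # Hypothetical detections: the cup moved, the plate stayed in place.
    initial = {"cup": [40, 60, 120, 140], "plate": [200, 80, 320, 200]}
    final = {"cup": [230, 100, 310, 180], "plate": [200, 80, 320, 200]}
    print(moved_objects(initial, final))  # -> ['cup']
```

A pick-and-place candidate could then be formed by pairing each moved object with the object whose final-scene bounding box it now intersects, which is the role the paper assigns to the geometric and CNN-based relation checks.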