Scene Understanding in Pick-and-Place Tasks: Analyzing Transformations Between Initial and Final Scenes

Bibliographic Details
Main Authors: Ghasemi, Seraj; Hosseini, Hamed; Koosheshi, MohammadHossein; Masouleh, Mehdi Tale; Kalhor, Ahmad
Format: Journal Article
Language: English
Published: 26.09.2024

Summary: With robots increasingly collaborating with humans in everyday tasks, it is important to take steps toward robotic systems capable of understanding their environment. This work focuses on scene understanding to detect pick-and-place tasks, given initial and final images of a scene. To this end, a dataset is collected for object detection and pick-and-place task detection. A YOLOv5 network is subsequently trained to detect the objects in the initial and final scenes. Given the detected objects and their bounding boxes, two methods are proposed to detect the pick-and-place tasks that transform the initial scene into the final scene. A geometric method is proposed that tracks objects' movements across the two scenes based on the intersections of the bounding boxes of objects that moved. In contrast, the CNN-based method uses a convolutional neural network to classify objects with intersecting bounding boxes into five classes that encode the spatial relationship between the involved objects. The performed pick-and-place tasks are then derived by analyzing both scenes. Results show that the CNN-based method, using a VGG16 backbone, outperforms the geometric method by roughly 12 percentage points in certain scenarios, reaching an overall success rate of 84.3%.
DOI: 10.48550/arxiv.2409.17720
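
As a reading aid, the following is a minimal Python sketch of the geometric idea described in the summary: objects detected in both scenes are matched by class label, and an object whose initial bounding box no longer overlaps any same-label box in the final scene is treated as having been moved. The Detection type, the IoU threshold, and the label-matching heuristic are illustrative assumptions, not the authors' implementation.

    # Sketch of the geometric bounding-box comparison, assuming
    # axis-aligned boxes in (x1, y1, x2, y2) pixel coordinates.
    from dataclasses import dataclass


    @dataclass
    class Detection:
        label: str    # object class predicted by the detector (e.g., YOLOv5)
        box: tuple    # (x1, y1, x2, y2)


    def iou(a, b):
        """Intersection-over-union of two axis-aligned boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)

        def area(r):
            return (r[2] - r[0]) * (r[3] - r[1])

        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0


    def moved_objects(initial, final, iou_threshold=0.5):
        """Return labels of objects whose boxes changed between the scenes.

        An object counts as moved when no final-scene detection with the
        same label overlaps its initial box above the threshold. This
        simple heuristic assumes at most one instance per label.
        """
        moved = []
        for det in initial:
            same_label = [f for f in final if f.label == det.label]
            if not any(iou(det.box, f.box) >= iou_threshold for f in same_label):
                moved.append(det.label)
        return moved


    initial_scene = [Detection("cube", (100, 120, 160, 180)),
                     Detection("cup", (300, 200, 360, 280))]
    final_scene = [Detection("cube", (400, 150, 460, 210)),  # picked and placed
                   Detection("cup", (300, 200, 360, 280))]   # did not move

    print(moved_objects(initial_scene, final_scene))  # ['cube']

Under the same caveat, the CNN-based variant could be sketched by replacing the final layer of a torchvision VGG16 with a five-way head for the spatial-relationship classes mentioned in the summary:

    import torch.nn as nn
    from torchvision.models import vgg16

    # VGG16 backbone with its 1000-way ImageNet head swapped for a
    # 5-class head; the class set is taken from the summary, the
    # training setup is not specified there.
    model = vgg16(weights="IMAGENET1K_V1")
    model.classifier[6] = nn.Linear(4096, 5)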