Automated visual positioning and precision placement of a workpiece using deep learning

Bibliographic Details
Published in: International Journal of Advanced Manufacturing Technology, Vol. 104, No. 9-12, pp. 4527-4538
Main Authors: Li, Chih-Hung G.; Chang, Yu-Ming
Format: Journal Article
Language: English
Published: Springer London, London, 01.10.2019 (Springer Nature B.V.)

Summary: An automated visual positioning system is proposed for precision placement of a workpiece on a fixture. The system includes a binocular eye-in-hand setup on the end effector of a mobile manipulator and a ConvNet that detects the relative position of the workpiece from the holistic views observed by the CMOS cameras. We train the ConvNets with training images that are automatically generated from basis images taken at the target position and annotated with the 2D coordinates of the offset locations. The ConvNet's superior place recognition capability yields a high success rate of coordinate detection under large illumination and viewpoint variations. Experimental evidence of workpiece placement confirms that the low-resolution (640 × 480 pixels) camera can obtain a translational precision of ±0.2 mm, and the binocular system can control the rotational error within ±0.1°. Within the 20 × 20-mm² spatial tolerance of the mobile platform, the proposed system achieves a success rate of 100% over 200 workpiece placement tasks. The entire workpiece placement task can be completed in 60 s; the average elapsed time of precision positioning and placement is less than 20 s, with a total of four visual positioning steps.
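
As an illustration of the training-data generation and offset-detection idea described in the summary, the following Python sketch (PyTorch) trains a small ConvNet on crops shifted away from a single basis image, using the pixel shift as the 2D label. This is only a minimal sketch under stated assumptions: the network architecture, shift range, crop size, regression formulation, and the names OffsetNet and make_training_pair are illustrative, since the abstract does not specify the authors' actual architecture or annotation scheme.

import torch
import torch.nn as nn
import torch.nn.functional as F


class OffsetNet(nn.Module):
    """Small ConvNet that predicts the 2D pixel offset (dx, dy) of a view relative to the target view."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # regression output: (dx, dy) in pixels

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def make_training_pair(basis, max_shift_px=40, crop=240):
    """Cut a crop from the basis image at a random offset; the offset becomes the label."""
    _, h, w = basis.shape
    cy, cx = h // 2, w // 2
    dy = torch.randint(-max_shift_px, max_shift_px + 1, (1,)).item()
    dx = torch.randint(-max_shift_px, max_shift_px + 1, (1,)).item()
    top, left = cy + dy - crop // 2, cx + dx - crop // 2
    patch = basis[:, top:top + crop, left:left + crop]
    return patch, torch.tensor([dx, dy], dtype=torch.float32)


if __name__ == "__main__":
    basis = torch.rand(1, 480, 640)      # stand-in for one 640 x 480 basis image
    model = OffsetNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(100):              # toy training loop on generated pairs
        xs, ys = zip(*(make_training_pair(basis) for _ in range(16)))
        loss = F.mse_loss(model(torch.stack(xs)), torch.stack(ys))
        opt.zero_grad()
        loss.backward()
        opt.step()

In the actual system, the detected pixel offsets from the two cameras would presumably be converted to millimetre-scale translational and rotational corrections of the manipulator through camera calibration; that mapping is outside the scope of this sketch.
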
ISSN: 0268-3768, 1433-3015
DOI: 10.1007/s00170-019-04293-x