Pick-and-place application development using voice and visual commands

Bibliographic Details
Published in: Industrial Robot, Vol. 39, No. 6, pp. 592-600
Main Authors: van Delden, Sebastian; Umrysh, Michael; Rosario, Carlos; Hess, Gregory
Format: Journal Article
Language: English
Published: Bedford: Emerald Group Publishing Limited, 01.01.2012
Summary:

Purpose - The purpose of this paper is to design an interactive industrial robotic system which can be used to assist a "layperson" in re-casting a generic pick-and-place application. A user can program a pick-and-place application simply by pointing to objects in the work area and speaking simple and intuitive natural language commands.

Design/methodology/approach - The system was implemented in C# using the EMGU wrapper classes for OpenCV as well as the MS Speech Recognition API. The target language to be recognized was modelled using traditional augmented transition networks, which were implemented as XML grammars. The authors developed an original finger-pointing algorithm using a unique combination of standard morphological and image processing techniques. Recognized voice commands trigger the vision component to capture what a user is pointing at. If the specified action requires robot movement, the required information is sent to the robot control component of the system, which then transmits the commands to the robot controller for execution.

Findings - The voice portion of the system was tested on the factory floor in a "typical" manufacturing environment, at the maximum allowable average decibel level specified by OSHA. The findings show that a modern standard MS Speech API voice recognition system can achieve 100 per cent accuracy on simple commands, although at noise levels averaging 89 decibels, one out of every six commands had to be repeated. The vision component was tested on 72 subjects who had no prior knowledge of this work. The system accurately recognized what the test subjects were pointing at 95 per cent of the time, within five seconds of hand readjusting.

Research limitations/implications - The vision component suffers from the "typical" problems: very shiny surfaces, very poor contrast between the pointing hand and the background, and occlusions. Currently the system can only handle a limited amount of depth variation, which is compensated for by a spring-mounted gripper. A second camera (future work) needs to be incorporated in order to handle large depth variations in the work area.

Practical implications - This system could have a huge impact on how factory floor workers interact with robotic equipment.

Originality/value - The testing of the voice system on a factory floor, although simple, is very important: it proves the viability of this component of the system and debunks arguments that factories are simply too noisy for current voice technology. The unique finger-pointing algorithm developed by the authors is also an important contribution to the field, in particular the manner in which the pointing vector is constructed. Furthermore, very few papers report results of non-experts using their pointing algorithms. This paper reports concrete results showing that the system is intuitive and user friendly to "laypersons".
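For readers unfamiliar with the components named in the methodology section, a minimal sketch of the voice side follows, in the paper's implementation language (C#). The command phrases and the inline GrammarBuilder are assumptions made for the sake of a self-contained example; the authors' actual grammar was expressed as XML, which System.Speech can also load directly from an SRGS file.

```csharp
using System;
using System.Speech.Recognition; // MS Speech API (System.Speech, Windows)

class VoiceCommands
{
    static void Main()
    {
        // Assumed command vocabulary -- the paper's grammar is richer and
        // stored as XML; an SRGS file could be loaded instead with
        // new Grammar("commands.grxml").
        var verbs = new Choices("pick up", "put down", "move to", "stop");
        var targets = new Choices("that", "this part", "there");

        var phrase = new GrammarBuilder();
        phrase.Append(verbs);
        phrase.Append(targets);

        using (var engine = new SpeechRecognitionEngine())
        {
            engine.LoadGrammar(new Grammar(phrase));
            engine.SetInputToDefaultAudioDevice();

            // In the paper's pipeline, each recognized utterance triggers
            // the vision component to capture what the user is pointing at.
            engine.SpeechRecognized += (s, e) =>
                Console.WriteLine($"Heard: {e.Result.Text} " +
                                  $"(confidence {e.Result.Confidence:F2})");

            engine.RecognizeAsync(RecognizeMode.Multiple);
            Console.WriteLine("Listening... press Enter to quit.");
            Console.ReadLine();
        }
    }
}
```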
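The finger-pointing algorithm is not specified in the abstract beyond "standard morphological and image processing techniques", so the sketch below is only a plausible reconstruction using the EMGU wrapper the authors name: segment the hand, clean the mask with a morphological opening, then approximate a 2D pointing ray from the blob centroid to its farthest contour point. The Otsu thresholding and the centroid-to-extreme-point heuristic are assumptions, not the authors' published pointing-vector construction.

```csharp
using System;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

static class PointingDetector
{
    // Returns an approximate 2D pointing ray: from the hand blob's centroid
    // toward its farthest contour point (taken here as the fingertip).
    public static (PointF origin, PointF tip)? FindPointingRay(Mat frame)
    {
        var gray = new Mat();
        CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);

        // Segment the hand, assuming it contrasts with the background.
        var mask = new Mat();
        CvInvoke.Threshold(gray, mask, 0, 255,
                           ThresholdType.Binary | ThresholdType.Otsu);

        // Morphological opening removes speckle before contour analysis.
        var kernel = CvInvoke.GetStructuringElement(
            ElementShape.Ellipse, new Size(5, 5), new Point(-1, -1));
        CvInvoke.MorphologyEx(mask, mask, MorphOp.Open, kernel,
                              new Point(-1, -1), 2, BorderType.Default,
                              new MCvScalar());

        using (var contours = new VectorOfVectorOfPoint())
        {
            CvInvoke.FindContours(mask, contours, null,
                                  RetrType.External,
                                  ChainApproxMethod.ChainApproxSimple);
            if (contours.Size == 0) return null;

            // Take the largest contour as the hand/arm blob.
            int best = 0;
            double bestArea = 0;
            for (int i = 0; i < contours.Size; i++)
            {
                double area = CvInvoke.ContourArea(contours[i]);
                if (area > bestArea) { bestArea = area; best = i; }
            }

            // Centroid from image moments.
            var m = CvInvoke.Moments(contours[best]);
            var centroid = new PointF((float)(m.M10 / m.M00),
                                      (float)(m.M01 / m.M00));

            // Fingertip approximated as the farthest contour point.
            PointF tip = centroid;
            double bestDist = 0;
            foreach (Point p in contours[best].ToArray())
            {
                double dx = p.X - centroid.X, dy = p.Y - centroid.Y;
                double d = dx * dx + dy * dy;
                if (d > bestDist) { bestDist = d; tip = new PointF(p.X, p.Y); }
            }
            return (centroid, tip);
        }
    }
}
```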
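Finally, the hand-off to the robot control component. The abstract says only that the required information is sent to a robot control component which transmits commands to the robot controller; the message format, host, and port below are entirely hypothetical, sketched as a plain TCP exchange for illustration.

```csharp
using System;
using System.Drawing;
using System.Net.Sockets;
using System.Text;

// Hypothetical hand-off to the robot control component. The paper does not
// describe the wire protocol; the PICK message and endpoint are invented.
class RobotLink
{
    readonly string host;
    readonly int port;

    public RobotLink(string host, int port)
    {
        this.host = host;
        this.port = port;
    }

    // imageToWorld stands in for the camera-to-robot calibration mapping,
    // which the abstract does not detail.
    public void SendPick(PointF imageTarget, Func<PointF, PointF> imageToWorld)
    {
        PointF w = imageToWorld(imageTarget);
        byte[] msg = Encoding.ASCII.GetBytes($"PICK {w.X:F1} {w.Y:F1}\n");
        using (var client = new TcpClient(host, port))
        {
            client.GetStream().Write(msg, 0, msg.Length);
        }
    }
}
```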
ISSN: 0143-991X, 1758-5791
DOI: 10.1108/01439911211268796