Confidence-Based Hybrid Tracking to Overcome Visual Tracking Failures in Calibration-Less Vision-Guided Micromanipulation

Bibliographic Details
Published in: IEEE Transactions on Automation Science and Engineering, Vol. 17, No. 1, pp. 524-536
Main Authors: Yang, Liangjing; Paranawithana, Ishara; Youcef-Toumi, Kamal; Tan, U-Xuan
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020
Summary: This article proposes a confidence-based approach for combining two visual tracking techniques to minimize the influence of unforeseen visual tracking failures and achieve uninterrupted vision-based control. Despite research efforts in vision-guided micromanipulation, existing systems are not designed to overcome visual tracking failures such as inconsistent illumination conditions, regional occlusion, unknown structures, and nonhomogeneous background scenes. A gap remains in extending current procedures beyond the laboratory environment toward practical deployment of vision-guided micromanipulation systems. A hybrid tracking method, which combines motion-cue feature detection and score-based template matching, is incorporated into an uncalibrated vision-guided workflow capable of self-initialization and recovery during micromanipulation. A weighted average, based on the respective confidence indices of the motion-cue feature localization and template-based trackers, is inferred from the statistical accuracy of feature locations and the similarity scores of template matches. Results suggest improved tracking performance with hybrid tracking under these conditions: the mean errors of hybrid tracking are maintained at the subpixel level under adverse experimental conditions, whereas the original template matching approach has mean errors of 1.53, 1.73, and 2.08 pixels. The method is also shown to be robust in a nonhomogeneous scene containing an array of plant cells. By proposing a self-contained fusion method that overcomes unforeseen visual tracking failures using a pure vision approach, we demonstrate the robustness of our developed low-cost micromanipulation platform.
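
The confidence-weighted fusion described in the summary can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the authors' implementation: it assumes each tracker reports a 2-D position estimate together with a confidence index already normalized to [0, 1] (the paper derives these indices from the statistical accuracy of feature locations and from template similarity scores, a computation not reproduced here). The function name fuse_tracker_estimates and its parameters are hypothetical.

    import numpy as np

    def fuse_tracker_estimates(p_feature, c_feature, p_template, c_template):
        # Confidence-weighted average of two 2-D tracker estimates.
        # p_feature / p_template: (x, y) positions from the motion-cue
        # feature tracker and the template-matching tracker.
        # c_feature / c_template: confidence indices assumed in [0, 1].
        p_feature = np.asarray(p_feature, dtype=float)
        p_template = np.asarray(p_template, dtype=float)
        total = c_feature + c_template
        if total <= 0.0:
            # Neither tracker is trusted; fall back to the template
            # estimate (an arbitrary choice for this sketch).
            return p_template
        w = c_feature / total
        return w * p_feature + (1.0 - w) * p_template

    # Example: the feature tracker is degraded (e.g., by occlusion), so
    # the fused estimate leans toward the still-reliable template match.
    fused = fuse_tracker_estimates((120.4, 85.1), 0.2, (118.9, 84.7), 0.9)
    print(fused)  # approx. [119.17, 84.77]

Because the weight varies continuously with the confidence indices, the fused estimate degrades gracefully when one tracker fails instead of switching abruptly between the two, which is consistent with the uninterrupted-control goal stated in the summary.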
ISSN: 1545-5955, 1558-3783
DOI: 10.1109/TASE.2019.2932724