Real-Time Highly Accurate Dense Depth on a Power Budget Using an FPGA-CPU Hybrid SoC

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems II: Express Briefs, Vol. 66, No. 5, pp. 773-777
Main Authors: Rahnama, Oscar; Cavallari, Tommaso; Golodetz, Stuart; Tonioni, Alessio; Joy, Thomas; Di Stefano, Luigi; Walker, Simon; Torr, Philip H. S.
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.05.2019
Summary: Obtaining highly accurate depth from stereo images in real time has many applications across computer vision and robotics, but in some contexts, upper bounds on power consumption constrain the feasible hardware to embedded platforms such as FPGAs. Whilst various stereo algorithms have been deployed on these platforms, usually cut down to better match the embedded architecture, certain key parts of the more advanced algorithms, e.g., those that rely on unpredictable access to memory or are highly iterative in nature, are difficult to deploy efficiently on FPGAs, and thus the depth quality that can be achieved is limited. In this brief, we leverage an FPGA-CPU chip to propose a novel, sophisticated stereo approach that combines the best features of semi-global matching and ELAS-based methods to compute highly accurate dense depth in real time. Our approach achieves an 8.7% error rate on the challenging KITTI 2015 dataset at over 50 frames/s, with a power consumption of only 5 W.
ISSN: 1549-7747, 1558-3791
DOI: 10.1109/TCSII.2019.2909169
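
For readers unfamiliar with the class of algorithms the summary refers to, the following is a minimal sketch of conventional CPU-side semi-global matching (SGM) using OpenCV. It is not the paper's FPGA-CPU hybrid pipeline (which combines SGM with ELAS-based methods on an embedded SoC); it only illustrates the kind of dense disparity and depth output the brief targets. The input file names and calibration values are illustrative assumptions, not taken from the paper.

```python
# Illustrative SGM disparity/depth sketch (not the authors' FPGA-CPU method).
# Assumes a rectified stereo pair stored as 'left.png' and 'right.png'.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input files
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# StereoSGBM is OpenCV's semi-global block matcher; parameter values here are
# typical defaults for KITTI-sized images, not tuned to reproduce the paper.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,                # penalty for small disparity changes
    P2=32 * 5 * 5,               # penalty for large disparity changes
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# compute() returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth follows from the pinhole relation depth = f * B / disparity,
# with focal length f (pixels) and baseline B (metres) from calibration.
f, B = 721.5, 0.54               # illustrative KITTI-like calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```

The same disparity-to-depth relation applies regardless of whether the matching costs are aggregated on a CPU, GPU, or FPGA; the brief's contribution concerns how the costly, memory-irregular parts of such algorithms are partitioned across the FPGA fabric and the CPU to stay within a 5 W power budget.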