Fast and Accurate Spacecraft Pose Estimation From Single Shot Space Imagery Using Box Reliability and Keypoints Existence Judgments

Bibliographic Details
Published in: IEEE Access, Vol. 8, pp. 216283–216297
Main Authors: Huo, Yurong; Li, Zhi; Zhang, Feng
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2020
Summary: Real-time 6DOF (6 Degree of Freedom) pose estimation of an uncooperative spacecraft is an important part of proximity operations, e.g., space debris removal, spacecraft rendezvous and docking, and on-orbit servicing. In this article, a novel, efficient deep-learning-based approach is proposed to estimate the 6DOF pose of an uncooperative spacecraft from monocular-vision measurements. First, we introduce a new lightweight YOLO-like CNN to detect the spacecraft and predict, in real time, the 2D locations of the projected keypoints of a previously reconstructed 3D model. Then, we design two novel models for predicting bounding-box (bbox) reliability scores and the probability of keypoint existence. The two models not only significantly reduce false positives but also speed up convergence. Finally, the 6DOF pose is estimated using Perspective-n-Point and refined with a geometric optimizer. Results demonstrate that the proposed approach achieves 73.2% average precision and 77.6% average recall for spacecraft detection on the SPEED dataset after only 200 training epochs. For the pose estimation task, the mean rotational error is 0.6812° and the mean translation error is 0.0320 m. The proposed approach achieves competitive pose estimation performance on the SPEED dataset while being extremely lightweight (~0.89 million learnable weights in total) and efficient enough for real-time applications.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3041415
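The final stage of the pipeline described in the summary, recovering a 6DOF pose from 2D keypoint detections and the known 3D model via Perspective-n-Point, can be sketched as below. This is an illustrative minimal implementation using the Direct Linear Transform (DLT) for a calibrated camera, not the authors' actual solver or refinement step; the keypoint coordinates, intrinsics, and function name are all hypothetical.

```python
import numpy as np

def solve_pnp_dlt(points_3d, points_2d, K):
    """Recover the camera pose [R|t] from n >= 6 non-coplanar 2D-3D
    keypoint correspondences with a calibrated DLT.

    points_3d: (n, 3) model keypoints; points_2d: (n, 2) detected pixels;
    K: (3, 3) camera intrinsics. Returns (R, t) such that x ~ K [R|t] X.
    """
    n = points_3d.shape[0]
    # Normalize pixel coordinates with the inverse intrinsics.
    uv1 = np.hstack([points_2d, np.ones((n, 1))])
    xn = (np.linalg.inv(K) @ uv1.T).T              # normalized image points
    Xh = np.hstack([points_3d, np.ones((n, 1))])   # homogeneous 3D points

    # Build the 2n x 12 homogeneous system A p = 0, p = vec(P), P = [R|t].
    A = np.zeros((2 * n, 12))
    for i in range(n):
        u, v = xn[i, 0], xn[i, 1]
        A[2 * i, 0:4] = Xh[i]
        A[2 * i, 8:12] = -u * Xh[i]
        A[2 * i + 1, 4:8] = Xh[i]
        A[2 * i + 1, 8:12] = -v * Xh[i]

    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)

    # Fix the overall sign so the model sits in front of the camera.
    if (P @ Xh[0])[2] < 0:
        P = -P

    # Project the left 3x3 block onto SO(3) and recover the metric scale.
    M = P[:, :3]
    U, S, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflection
    R = U @ D @ Vt
    t = (3.0 / S.sum()) * P[:, 3]
    return R, t
```

In the paper's setting the 2D inputs would come from the CNN's keypoint heads, filtered by the bbox reliability and keypoint-existence scores, and the DLT result would then be polished by the geometric optimizer; with noisy detections a robust solver (e.g., RANSAC around PnP) is the usual choice.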