Fusion Feature Offsets Estimation Network for 6D Pose Estimation
Published in | 2023 28th International Conference on Automation and Computing (ICAC), pp. 1 - 6 |
---|---|
Format | Conference Proceeding |
Language | English |
Published | IEEE, 30.08.2023 |
Summary: | Object pose estimation involves calculating the six-degrees-of-freedom pose of a rigid object. A typical approach uses convolutional neural networks to detect keypoints and then estimates the pose from these points. How best to combine RGB images and point clouds for keypoint extraction remains an active subject of investigation. In this work, we propose an offset estimation network based on fused features to estimate pose. We first propose a more effective encoding method for fusing image features into point cloud features. Then we design an offset estimation network to predict the keypoints' locations. Next, we estimate multiple poses from the predicted keypoints at two levels. Finally, we aggregate these poses into the final pose according to the weight of each pose. Our experiments show that our method outperforms other approaches on the LineMOD benchmark dataset. |
---|---|
DOI: | 10.1109/ICAC57885.2023.10275159 |
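
The abstract describes recovering a pose from predicted keypoints and then fusing several pose hypotheses by their weights. The paper's network and exact aggregation rule are not reproduced in this record, so the following is only a minimal NumPy sketch of that general idea, assuming a Kabsch least-squares fit per keypoint set and a weighted average of hypotheses (with SVD re-projection onto SO(3)); the function names and toy data are hypothetical, not the authors' code.

```python
# Hypothetical sketch: pose from predicted 3D keypoints via the Kabsch algorithm,
# then weighted fusion of multiple pose hypotheses, loosely following the
# pipeline described in the abstract.
import numpy as np

def kabsch_pose(model_kps, pred_kps):
    """Least-squares rigid transform (R, t) mapping model_kps onto pred_kps."""
    mu_m = model_kps.mean(axis=0)
    mu_p = pred_kps.mean(axis=0)
    H = (model_kps - mu_m).T @ (pred_kps - mu_p)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_p - R @ mu_m
    return R, t

def aggregate_poses(poses, weights):
    """Fuse (R, t) hypotheses: average translations directly, average rotations
    by projecting the weighted sum of rotation matrices back onto SO(3)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    t_final = sum(wi * t for wi, (_, t) in zip(w, poses))
    R_sum = sum(wi * R for wi, (R, _) in zip(w, poses))
    U, _, Vt = np.linalg.svd(R_sum)
    d = np.sign(np.linalg.det(U @ Vt))
    R_final = U @ np.diag([1.0, 1.0, d]) @ Vt
    return R_final, t_final

# Toy usage: two noisy keypoint hypotheses voting for the same ground-truth pose.
model_kps = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
R_gt, t_gt = np.eye(3), np.array([0.1, 0.2, 0.3])
pred_kps = model_kps @ R_gt.T + t_gt
hyps = [kabsch_pose(model_kps, pred_kps + 0.01 * np.random.randn(4, 3))
        for _ in range(2)]
R, t = aggregate_poses(hyps, weights=[0.7, 0.3])
```

In this sketch the weights play the role the abstract assigns to per-pose confidence; any per-keypoint or per-level scoring the paper uses would simply replace the fixed `[0.7, 0.3]` values.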