3D Reconstruction Study of Motion Fuzzy Coded and Non-coded Targets Based on Iterative Relaxation Method


Bibliographic Details
Published in: Journal of Electrical Engineering & Technology, Vol. 20, No. 5, pp. 3525–3536
Main Author: Shi, Yun
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore, 01.07.2025
Korean Institute of Electrical Engineers (대한전기학회)
Subjects

Summary: Accurately and robustly reconstructing 3D models of moving objects remains a challenge in robotics and computer vision. This work presents a novel iterative-relaxation approach for reconstructing the motion of both fuzzy-coded and non-coded targets. To overcome motion blur and the lack of distinguishing characteristics in circular retro-reflective targets, we use deep convolutional neural networks (DCNNs) for feature extraction and representation learning. To this end, we gather a large dataset of motion-blurred images containing circular retro-reflective targets and train a DCNN architecture on it; through this training, the network learns to discriminate and extract relevant information from the targets. After training, the DCNN extracts features from motion images, and an iterative relaxation strategy refines the initial 3D reconstruction (3D-R) estimates. This strategy exploits the DCNN features and the temporal coherence of the motion to refine the estimates repeatedly; the relaxation ensures that the reconstructed 3D motion is consistent with the observed images while accounting for motion-blur uncertainty. Our experimental findings demonstrate the effectiveness of the proposed technique in reconstructing the 3D motion of both fuzzy-coded and non-coded targets, and show that it outperforms conventional techniques. The proposed technique has potential applications in computer vision, robotics, and augmented reality.
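The abstract's refinement step combines per-frame estimates with the temporal coherence of the motion. The paper's actual algorithm is not given in this record; the following is only a minimal illustrative sketch of the general iterative-relaxation idea, assuming per-frame 3D target estimates (e.g. derived from DCNN features) stored as a (T, 3) array, with `alpha` as a hypothetical weight trading data fidelity against temporal smoothness.

```python
import numpy as np

def relax_trajectory(obs, n_iters=100, alpha=0.5):
    """Iteratively relax a 3D trajectory of shape (T, 3).

    Each sweep blends every frame's estimate with the average of its
    temporal neighbours (temporal coherence) while pulling it back
    toward the original per-frame observation (data fidelity).
    `obs` and `alpha` are illustrative assumptions, not the paper's API.
    """
    x = obs.copy()
    for _ in range(n_iters):
        # Neighbour average; endpoints are clamped by replication.
        padded = np.vstack([x[:1], x, x[-1:]])
        neigh = 0.5 * (padded[:-2] + padded[2:])
        # Jacobi-style relaxation update toward neighbours and observations.
        x = (1.0 - alpha) * obs + alpha * neigh
    return x
```

A fixed point of this update balances closeness to the observations against smoothness of the trajectory, which is the qualitative behaviour the abstract describes for its relaxation step under motion-blur uncertainty.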
ISSN:1975-0102
2093-7423
DOI:10.1007/s42835-024-02133-x