Method and device for digital 3D reconstruction

Bibliographic Details
Main Author: Hervé Le Floch
Format: Patent
Language: English
Published: 27.05.2020

Summary: First (PI1) and second (DM1) 3D point clouds representing an object (110) are obtained (e.g. from a camera/sensor unit (121)), each comprising 3D points, wherein the accuracy of the second point cloud is lower than that of the first. Points (e.g. mesh vertices) in the second cloud that lie at a distance greater than a threshold from the first point cloud are identified in order to generate a third 3D point cloud, the third point cloud thus being a subset of points from the second point cloud. The third point cloud is added to the first to provide a digital representation of the object. The first cloud may be obtained from a photometric passive sensor, and the second may be obtained from a range imaging / Time of Flight (ToF) or LIDAR active sensor. Second-cloud points that lie close to the first point cloud (i.e. not farther than the threshold) may be discarded, since the higher-accuracy points of the first cloud already represent the object better in those regions.
Bibliography: Application Number: GB20170021564
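
The merging step described in the summary maps naturally onto a nearest-neighbour query. Below is a minimal sketch in Python (NumPy and SciPy's k-d tree), assuming Euclidean point-to-point distance as the point-to-cloud measure and an arbitrary threshold value; the function name, data layout and example values are illustrative and not taken from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def merge_point_clouds(first, second, threshold):
    """Merge a high-accuracy cloud with a lower-accuracy one.

    first     -- (N, 3) array of high-accuracy 3D points (e.g. photometric)
    second    -- (M, 3) array of lower-accuracy 3D points (e.g. ToF/LIDAR)
    threshold -- distance above which a second-cloud point is retained
    """
    # Distance from each second-cloud point to its nearest first-cloud point.
    tree = cKDTree(first)
    distances, _ = tree.query(second)

    # Third cloud: the subset of the second cloud farther than the threshold,
    # i.e. regions the first cloud does not already cover.
    third = second[distances > threshold]

    # Second-cloud points within the threshold are discarded; the merged
    # representation is the first cloud plus the retained subset.
    return np.vstack([first, third])

# Purely illustrative usage with synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    first = rng.uniform(0.0, 1.0, size=(1000, 3))   # dense, accurate patch
    second = rng.uniform(0.0, 2.0, size=(1000, 3))  # coarser, wider coverage
    merged = merge_point_clouds(first, second, threshold=0.05)
    print(merged.shape)
```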