Method and device for digital 3D reconstruction
Format: Patent
Language: English
Published: 27.05.2020
Summary: First (PI1) and second (DM1) 3D point clouds representing an object (110) are obtained (e.g. from a camera/sensor unit (121)), each comprising 3D points, where the second point cloud is less accurate than the first. Points (e.g. mesh vertices) in the second cloud lying at a distance greater than a threshold from the first point cloud are identified in order to generate a third 3D point cloud; the third cloud is thus a subset of the second. The third point cloud is added to the first to provide a digital representation of the object. The first cloud may be obtained from a passive photometric sensor, and the second from an active range-imaging / Time-of-Flight (ToF) or LIDAR sensor. Second-cloud points closest to the first cloud (i.e. not farther than the threshold) may be discarded, since the higher-accuracy first-cloud points already represent the object better at those locations.
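The merging step described in the summary can be sketched as follows. This is a minimal illustration, not the patented method itself: the function name `merge_point_clouds`, the brute-force nearest-neighbour distance computation, and the example coordinates are all assumptions introduced here for clarity.

```python
import numpy as np

def merge_point_clouds(cloud1, cloud2, threshold):
    """Merge a high-accuracy cloud (cloud1, e.g. photometric) with a
    lower-accuracy cloud (cloud2, e.g. ToF/LIDAR), keeping only those
    cloud2 points farther than `threshold` from cloud1.

    cloud1: (N1, 3) array, cloud2: (N2, 3) array.
    """
    # For each cloud2 point, distance to its nearest cloud1 point
    # (brute force via broadcasting; a k-d tree would scale better).
    diffs = cloud2[:, None, :] - cloud1[None, :, :]          # (N2, N1, 3)
    nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)  # (N2,)
    # Third cloud: subset of cloud2 not already covered by cloud1.
    cloud3 = cloud2[nearest > threshold]
    # Digital representation: union of the accurate cloud and the
    # retained far-away points; close cloud2 points are discarded.
    return np.vstack([cloud1, cloud3])
```

In this sketch a cloud2 point within `threshold` of cloud1 is dropped, mirroring the summary's rationale that the more accurate first cloud already represents the object well there.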
Bibliography: Application Number GB20170021564