MULTI-MODAL DATA FUSION FOR ENHANCED 3D PERCEPTION FOR PLATFORMS

Bibliographic Details
Main Authors: Matei, Bogdan C.; Samarasekera, Supun; Ramamurthy, Bhaskar; Kumar, Rakesh; Chiu, Han-Pang
Format: Patent
Language: English
Published: 11.06.2020

Summary: A method for providing a real-time, three-dimensional (3D) navigational map for platforms integrates at least two sources of multi-modal, multi-dimensional platform sensor information to produce a more accurate 3D navigational map. The method receives a 3D point cloud from a first sensor on a platform having a first modality and a 2D image from a second sensor on the platform having a second, different modality. It generates a semantic label and an associated semantic label uncertainty for a first space point in the 3D point cloud, and a semantic label and an associated semantic label uncertainty for a second space point in the 2D image. It then fuses the first space semantic label and uncertainty with the second space semantic label and uncertainty to create fused 3D spatial information that enhances the 3D navigational map.
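
The abstract does not specify how the per-point semantic labels and uncertainties from the two modalities are combined. The following is a minimal sketch of one plausible fusion rule, assuming each modality produces a categorical class distribution with a scalar uncertainty and that the 3D point and the 2D pixel have already been associated (for example, by projecting the point into the image). All type and function names here (SemanticEstimate, fuse_semantics) are hypothetical and are not taken from the patent.

    # Illustrative sketch only; not the patented method.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class SemanticEstimate:
        probs: np.ndarray      # categorical distribution over semantic classes
        uncertainty: float     # scalar uncertainty in (0, 1]; lower = more confident

    def fuse_semantics(lidar_est: SemanticEstimate,
                       image_est: SemanticEstimate) -> SemanticEstimate:
        """Fuse a 3D-point estimate and the corresponding 2D-pixel estimate
        for the same world point, weighting each modality by its inverse
        uncertainty (log-linear pooling)."""
        w_lidar = 1.0 / lidar_est.uncertainty
        w_image = 1.0 / image_est.uncertainty
        # Weighted product of the two distributions, computed in log space.
        log_fused = w_lidar * np.log(lidar_est.probs + 1e-9) \
                  + w_image * np.log(image_est.probs + 1e-9)
        fused = np.exp(log_fused - log_fused.max())
        fused /= fused.sum()
        # Fused uncertainty decreases as either modality becomes more confident.
        fused_uncertainty = 1.0 / (w_lidar + w_image)
        return SemanticEstimate(probs=fused, uncertainty=fused_uncertainty)

    # Example: a LiDAR point labeled "road" with moderate confidence and the
    # corresponding camera pixel labeled "road" with higher confidence.
    lidar = SemanticEstimate(probs=np.array([0.6, 0.3, 0.1]), uncertainty=0.5)
    image = SemanticEstimate(probs=np.array([0.8, 0.15, 0.05]), uncertainty=0.2)
    print(fuse_semantics(lidar, image).probs)

In this sketch the camera estimate, being more certain, pulls the fused distribution toward its own labeling, while disagreement between modalities flattens the fused distribution; the per-point result would then be written back into the 3D navigational map.
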
Bibliography: Application Number: US201916523313