Neural RGB-D Surface Reconstruction

Bibliographic Details
Published in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6280 - 6291
Main Authors Azinovic, Dejan, Martin-Brualla, Ricardo, Goldman, Dan B, Nießner, Matthias, Thies, Justus
Format Conference Proceeding
Language English
Published IEEE 01.06.2022
Subjects
Online Access: Get full text

Abstract Obtaining high-quality 3D reconstructions of room-scale scenes is of paramount importance for upcoming applications in AR or VR. These range from mixed reality applications for teleconferencing, virtual measuring, virtual room planning, to robotic applications. While current volume-based view synthesis methods that use neural radiance fields (NeRFs) show promising results in reproducing the appearance of an object or scene, they do not reconstruct an actual surface. The volumetric representation of the surface based on densities leads to artifacts when a surface is extracted using Marching Cubes, since during optimization, densities are accumulated along the ray and are not used at a single sample point in isolation. Instead of this volumetric representation of the surface, we propose to represent the surface using an implicit function (truncated signed distance function). We show how to incorporate this representation in the NeRF framework, and extend it to use depth measurements from a commodity RGB-D sensor, such as a Kinect. In addition, we propose a pose and camera refinement technique which improves the overall reconstruction quality. In contrast to concurrent work on integrating depth priors in NeRF, which concentrates on novel view synthesis, our approach is able to reconstruct high-quality, metrical 3D reconstructions.
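To make the core idea in the abstract concrete, the following is a minimal sketch of how truncated signed-distance values sampled along a camera ray can be converted into compositing weights for NeRF-style rendering, so the surface stays an implicit TSDF that Marching Cubes can extract cleanly. The sigmoid-product weighting, the truncation value, and all names below are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: the sigmoid-product weighting, truncation value,
# and all names here are assumptions, not the authors' exact formulation.
import numpy as np

def sdf_to_weights(sdf, trunc=0.05):
    """Map truncated signed distances along one ray to normalized render weights.

    A sigmoid "bump" centered on the zero crossing concentrates weight near the
    surface; samples far outside the truncation band contribute almost nothing.
    """
    s = 1.0 / (1.0 + np.exp(-sdf / trunc))   # sigmoid(sdf / trunc)
    w = s * (1.0 - s)                        # peaks where sdf == 0 (the surface)
    return w / np.maximum(w.sum(), 1e-8)     # normalize along the ray

def render_ray(sdf, colors, depths):
    """Composite per-sample colors and depths with SDF-derived weights."""
    w = sdf_to_weights(sdf)
    rgb = (w[:, None] * colors).sum(axis=0)  # rendered pixel color, shape (3,)
    depth = (w * depths).sum()               # expected ray termination depth
    return rgb, depth

# Toy usage: a ray hitting a planar surface at depth 1.7 m.
t = np.linspace(0.5, 2.5, 64)                # sample depths along the ray
sdf = 1.7 - t                                # signed distance to the surface
colors = np.tile([0.8, 0.4, 0.2], (t.size, 1))
rgb, depth = render_ray(sdf, colors, t)      # depth comes out close to 1.7
```

In this view, the rendered color and depth can be compared against the RGB-D input during optimization, while the zero level set of the learned signed distance function provides the mesh for extraction.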
Author Martin-Brualla, Ricardo
Azinovic, Dejan
Nießner, Matthias
Goldman, Dan B
Thies, Justus
Author_xml – sequence: 1
  givenname: Dejan
  surname: Azinovic
  fullname: Azinovic, Dejan
  organization: Technical University of Munich
– sequence: 2
  givenname: Ricardo
  surname: Martin-Brualla
  fullname: Martin-Brualla, Ricardo
  organization: Google Research
– sequence: 3
  givenname: Dan B
  surname: Goldman
  fullname: Goldman, Dan B
  organization: Google Research
– sequence: 4
  givenname: Matthias
  surname: Nießner
  fullname: Nießner, Matthias
  organization: Technical University of Munich
– sequence: 5
  givenname: Justus
  surname: Thies
  fullname: Thies, Justus
  organization: Technical University of Munich
CODEN IEEPAD
ContentType Conference Proceeding
DBID 6IE
6IH
CBEJK
RIE
RIO
DOI 10.1109/CVPR52688.2022.00619
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan (POP) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Xplore
IEEE Proceedings Order Plans (POP) 1998-present
DatabaseTitleList
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Xplore
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
EISBN 1665469463
9781665469463
EISSN 2575-7075
EndPage 6291
ExternalDocumentID 9880168
Genre orig-research
GrantInformation_xml – fundername: German Research Foundation (DFG)
  funderid: 10.13039/501100001659
GroupedDBID 6IE
6IH
6IL
6IN
ABLEC
ADZIZ
ALMA_UNASSIGNED_HOLDINGS
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
CBEJK
CHZPO
IEGSK
IJVOP
OCL
RIE
RIL
RIO
IEDL.DBID RIE
IngestDate Wed Jun 26 19:28:27 EDT 2024
IsPeerReviewed false
IsScholarly true
Language English
LinkModel DirectLink
PageCount 12
ParticipantIDs ieee_primary_9880168
PublicationCentury 2000
PublicationDate 2022-June
PublicationDateYYYYMMDD 2022-06-01
PublicationDate_xml – month: 06
  year: 2022
  text: 2022-June
PublicationDecade 2020
PublicationTitle 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
PublicationTitleAbbrev CVPR
PublicationYear 2022
Publisher IEEE
Publisher_xml – name: IEEE
SSID ssib052007398
ssib042469789
SourceID ieee
SourceType Publisher
StartPage 6280
SubjectTerms 3D from multi-view and sensors; 3D from single images; RGBD sensors and analytics; Vision + graphics
Computer vision
Current measurement
Mixed reality
Planing
Surface reconstruction
Teleconferencing
Three-dimensional displays
Title Neural RGB-D Surface Reconstruction
URI https://ieeexplore.ieee.org/document/9880168
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider IEEE