Multiview Shape and Reflectance from Natural Illumination

Bibliographic Details
Published in: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2163 - 2170
Main Authors: Oxholm, Geoffrey; Nishino, Ko
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.06.2014
ISSN: 1063-6919, 2575-7075
DOI: 10.1109/CVPR.2014.277

Summary: The world is full of objects with complex reflectances, situated in complex illumination environments. Past work on full 3D geometry recovery, however, has tried to handle this complexity by forcing it into simplistic models of reflectance (Lambertian, mirrored, or diffuse plus specular) or illumination (one or more point light sources). Though there has been some recent progress in directly utilizing such complexities for recovering single-view geometry, it is not clear how such single-view methods can be extended to reconstruct the full geometry. To this end, we derive a probabilistic geometry estimation method that fully exploits the rich signal embedded in complex appearance. Though each observation provides partial and unreliable information, we show how to estimate the reflectance responsible for the diverse appearance and unite the orientation cues embedded in each observation to reconstruct the underlying geometry. We demonstrate the effectiveness of our method on synthetic and real-world objects. The results show that our method performs accurately across a wide range of real-world environments and reflectances that lie between the extremes that have been the focus of past work.
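The full probabilistic derivation is in the paper itself; as a rough intuition for how partial, unreliable orientation cues from multiple views can be united, the minimal sketch below fuses noisy per-view surface-normal estimates by confidence-weighted averaging on the unit sphere. This is not the authors' algorithm: the function name, the isotropic Gaussian noise assumption, and the example weights are all hypothetical.

```python
import numpy as np

def fuse_orientation_cues(normals, weights):
    """Fuse noisy per-view surface-normal estimates into one direction.

    normals: (k, 3) array of unit normals, one estimate per view.
    weights: (k,) reliability of each cue, e.g. inverse noise variance.

    Under an isotropic Gaussian noise model, the confidence-weighted mean,
    renormalized to the unit sphere, is the fused orientation estimate.
    """
    normals = np.asarray(normals, dtype=float)
    weights = np.asarray(weights, dtype=float)
    fused = (weights[:, None] * normals).sum(axis=0)
    norm = np.linalg.norm(fused)
    if norm == 0.0:
        raise ValueError("cues cancel out; no consistent orientation")
    return fused / norm

# Two reliable views roughly agree on an upward-facing normal;
# a third, noisier view contributes with a much smaller weight.
cues = [[0.0, 0.1, 0.99], [0.05, -0.05, 0.99], [0.3, 0.3, 0.9]]
conf = [1.0, 1.0, 0.2]
print(fuse_orientation_cues(cues, conf))
```

Down-weighting unreliable cues rather than discarding them mirrors the abstract's point that each observation is individually partial and unreliable but collectively informative.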