HarmonyView: Harmonizing Consistency and Diversity in One-Image-to-3D

Bibliographic Details
Published in: arXiv.org
Main Authors: Woo, Sangmin; Park, Byeongjun; Go, Hyojun; Kim, Jin-Young; Kim, Changick
Format: Paper; Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 26.12.2023

More Information
Summary: Recent progress in single-image 3D generation highlights the importance of multi-view coherency, leveraging 3D priors from large-scale diffusion models pretrained on Internet-scale images. However, the aspect of novel-view diversity remains underexplored within the research landscape due to the ambiguity in converting a 2D image into 3D content, where numerous potential shapes can emerge. Here, we aim to address this research gap by simultaneously addressing both consistency and diversity. Yet, striking a balance between these two aspects poses a considerable challenge due to their inherent trade-offs. This work introduces HarmonyView, a simple yet effective diffusion sampling technique adept at decomposing two intricate aspects in single-image 3D generation: consistency and diversity. This approach paves the way for a more nuanced exploration of the two critical dimensions within the sampling process. Moreover, we propose a new evaluation metric based on CLIP image and text encoders to comprehensively assess the diversity of the generated views, which closely aligns with human evaluators' judgments. In experiments, HarmonyView achieves a harmonious balance, demonstrating a win-win scenario in both consistency and diversity.
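
The summary names two technical contributions: a diffusion sampling rule that separates a consistency-oriented guidance term from a diversity-oriented one, and a CLIP-based metric for scoring novel-view diversity. The exact formulations appear only in the full paper, so the two Python sketches below are illustrative assumptions rather than the authors' method. The first shows a generic classifier-free-guidance-style step in which two guidance directions receive independent weights; all names (eps_uncond, eps_cond_partial, eps_cond_full, w_consistency, w_diversity) are hypothetical.

```python
import torch

def decomposed_guidance_step(eps_uncond: torch.Tensor,
                             eps_cond_partial: torch.Tensor,
                             eps_cond_full: torch.Tensor,
                             w_consistency: float,
                             w_diversity: float) -> torch.Tensor:
    """Combine denoiser predictions with two separately weighted guidance terms.

    Generic sketch, not HarmonyView's exact decomposition: the unconditional
    prediction is pushed toward a partially conditioned one (standing in for
    cross-view consistency cues) and then toward the fully conditioned one
    (standing in for appearance/diversity cues), each with its own scale.
    """
    consistency_dir = eps_cond_partial - eps_uncond
    diversity_dir = eps_cond_full - eps_cond_partial
    return eps_uncond + w_consistency * consistency_dir + w_diversity * diversity_dir
```

The second sketch is an assumed stand-in for a CLIP-based diversity score: it rewards pairwise dissimilarity among generated views while checking that the views remain aligned with a reference caption, using the Hugging Face transformers CLIP API. The way the two terms are combined here (a simple product) is an illustrative choice, not the paper's metric.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

def clip_diversity_score(images, caption: str,
                         model_name: str = "openai/clip-vit-base-patch32") -> float:
    """Rough proxy for novel-view diversity (assumed formulation).

    `images` is a list of at least two PIL images of generated views;
    `caption` is a short text description of the object.
    """
    model = CLIPModel.from_pretrained(model_name)
    processor = CLIPProcessor.from_pretrained(model_name)
    inputs = processor(text=[caption], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Normalize projected embeddings so dot products are cosine similarities.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)  # (N, d)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)    # (1, d)
    sim = img @ img.T
    n = sim.shape[0]
    off_diag = sim[~torch.eye(n, dtype=torch.bool)]
    diversity = (1.0 - off_diag).mean()      # dissimilarity across views
    faithfulness = (img @ txt.T).mean()      # alignment with the caption
    return (diversity * faithfulness).item()
```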
ISSN: 2331-8422
DOI: 10.48550/arxiv.2312.15980