Soft 3D reconstruction for view synthesis
Published in | ACM Transactions on Graphics, Vol. 36, No. 6, pp. 1-11 |
---|---|
Main Authors | Penner, Eric; Zhang, Li |
Format | Journal Article |
Language | English |
Published | 20.11.2017 |
Abstract | We present a novel algorithm for view synthesis that utilizes a soft 3D reconstruction to improve quality, continuity and robustness. Our main contribution is the formulation of a soft 3D representation that preserves depth uncertainty through each stage of 3D reconstruction and rendering. We show that this representation is beneficial throughout the view synthesis pipeline. During view synthesis, it provides a soft model of scene geometry that provides continuity across synthesized views and robustness to depth uncertainty. During 3D reconstruction, the same robust estimates of scene visibility can be applied iteratively to improve depth estimation around object edges. Our algorithm is based entirely on O(1) filters, making it conducive to acceleration and it works with structured or unstructured sets of input views. We compare with recent classical and learning-based algorithms on plenoptic lightfields, wide baseline captures, and lightfield videos produced from camera arrays. |
---|---|
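Note: the abstract's central idea is a per-pixel depth probability distribution ("soft 3D") that is carried through visibility estimation and rendering. As an illustration only, the minimal sketch below shows one plausible way such a distribution over discrete depth planes could be turned into soft visibility weights and composited into a synthesized view. It is not the authors' implementation; the function and array names (`soft_composite`, `depth_prob`, `plane_colors`) are assumptions for this sketch, and the paper's O(1) filtering and iterative depth refinement are omitted.

```python
# Illustrative sketch only (not the paper's implementation): given per-pixel
# depth probability distributions over D discrete depth planes, compute a
# soft visibility per plane and composite a color for each output pixel.
import numpy as np

def soft_composite(depth_prob, plane_colors):
    """Composite colors front-to-back using soft per-plane occupancy.

    depth_prob:   (H, W, D) nonnegative, summing to <= 1 over D
                  (per-pixel "vote" distribution, nearest plane first).
    plane_colors: (H, W, D, 3) color contribution on each depth plane.
    Returns an (H, W, 3) image and the (H, W, D) per-plane weights.
    """
    # Soft visibility of plane d = 1 - probability that a closer plane occludes it.
    occ_before = np.cumsum(depth_prob, axis=2) - depth_prob   # exclusive cumulative sum
    visibility = np.clip(1.0 - occ_before, 0.0, 1.0)
    weights = visibility * depth_prob                          # per-plane blend weight
    norm = np.maximum(weights.sum(axis=2, keepdims=True), 1e-8)
    image = (weights[..., None] * plane_colors).sum(axis=2) / norm
    return image, weights

# Toy usage: 2x2 image, 4 depth planes, uncertainty spread over two planes.
H, W, D = 2, 2, 4
depth_prob = np.zeros((H, W, D))
depth_prob[..., 1] = 0.6   # most likely depth plane
depth_prob[..., 2] = 0.4   # residual uncertainty at the next plane
plane_colors = np.random.rand(H, W, D, 3)
img, w = soft_composite(depth_prob, plane_colors)
print(img.shape, w.shape)  # (2, 2, 3) (2, 2, 4)
```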
Author | Penner, Eric; Zhang, Li |
ContentType | Journal Article |
DOI | 10.1145/3130800.3130855 |
Discipline | Engineering |
EISSN | 1557-7368 |
EndPage | 11 |
ISSN | 0730-0301 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 6 |
Language | English |
OpenAccessLink | https://dl.acm.org/doi/pdf/10.1145/3130800.3130855?download=true |
PageCount | 11 |
PublicationDate | 2017-11-20 |
PublicationTitle | ACM transactions on graphics |
PublicationYear | 2017 |
StartPage | 1 |
Title | Soft 3D reconstruction for view synthesis |
Volume | 36 |