NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields
Published in | IEEE Transactions on Visualization and Computer Graphics, Vol. 29, No. 5, pp. 2732–2742 |
---|---|
Main Authors | Song, Liangchen; Chen, Anpei; Li, Zhong; Chen, Zhang; Chen, Lele; Yuan, Junsong; Xu, Yi; Geiger, Andreas |
Format | Journal Article |
Language | English |
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.05.2023 |
Subjects | Cameras; Decomposition; Dynamics; free-viewpoint video; Image reconstruction; immersive video; Modelling; NeRF; Neural rendering; Reconstruction; Rendering; Rendering (computer graphics); Representations; Spatiotemporal phenomena; Streaming media; Three-dimensional displays; Websites |
Abstract | Freely exploring a real-world 4D spatiotemporal space in VR has been a long-term quest. The task is especially appealing when only a few or even a single RGB camera is used for capturing the dynamic scene. To this end, we present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering. First, we propose to decompose the 4D spatiotemporal space according to temporal characteristics. Points in the 4D space are associated with probabilities of belonging to three categories: static, deforming, and new areas. Each area is represented and regularized by a separate neural field. Second, we propose a feature streaming scheme based on hybrid representations for efficiently modeling the neural fields. Our approach, coined NeRFPlayer, is evaluated on dynamic scenes captured by single hand-held cameras and multi-camera arrays, achieving rendering quality and speed comparable or superior to recent state-of-the-art methods, with reconstruction in 10 seconds per frame and interactive rendering. Project website: https://bit.ly/nerfplayer. |
---|---|
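The abstract's core idea — associating every 4D point with probabilities of belonging to the static, deforming, or new category, and blending the outputs of three separate neural fields by those probabilities — can be sketched as follows. This is a minimal toy illustration under stated assumptions, not the authors' implementation: `toy_field` and `decomposition_probs` are hypothetical stand-ins for the paper's neural fields and decomposition head.

```python
# Toy sketch of probability-weighted blending of three per-category fields.
# Hypothetical stand-ins: toy_field and decomposition_probs replace the
# paper's learned neural fields and decomposition head.
import numpy as np

rng = np.random.default_rng(0)

def toy_field(points_4d, seed):
    """Stand-in for one neural field: maps (x, y, z, t) -> (r, g, b, sigma)."""
    r = np.random.default_rng(seed)
    w = r.normal(size=(4, 4))
    return 1.0 / (1.0 + np.exp(-points_4d @ w))  # sigmoid keeps outputs in (0, 1)

def decomposition_probs(points_4d):
    """Stand-in for the decomposition head: per-point probabilities over
    {static, deforming, new}, normalized with a softmax."""
    r = np.random.default_rng(42)
    logits = points_4d @ r.normal(size=(4, 3))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

points = rng.normal(size=(8, 4))                     # 8 sample points (x, y, z, t)
probs = decomposition_probs(points)                  # shape (8, 3), rows sum to 1
fields = [toy_field(points, s) for s in (1, 2, 3)]   # static, deforming, new

# Final per-point output is the probability-weighted mix of the three fields.
blended = sum(probs[:, k:k + 1] * fields[k] for k in range(3))
```

Since the weights form a convex combination, the blended output stays within the range of the individual field outputs, which is one way such a decomposition can be regularized per category while still rendering a single consistent result.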
Author | Chen, Lele Chen, Anpei Geiger, Andreas Li, Zhong Song, Liangchen Chen, Zhang Yuan, Junsong Xu, Yi |
Author_xml | – sequence: 1 givenname: Liangchen orcidid: 0000-0002-8366-5088 surname: Song fullname: Song, Liangchen email: lsong8@buffalo.edu organization: SUNY Buffalo, United States – sequence: 2 givenname: Anpei orcidid: 0000-0003-2150-2176 surname: Chen fullname: Chen, Anpei organization: ETH Zürich and University of Tübingen, Germany – sequence: 3 givenname: Zhong orcidid: 0000-0002-7416-1216 surname: Li fullname: Li, Zhong email: zhonglee323@gmail.com organization: OPPO US Research Center, InnoPeak Technology, United States – sequence: 4 givenname: Zhang orcidid: 0000-0001-8582-1024 surname: Chen fullname: Chen, Zhang organization: OPPO US Research Center, InnoPeak Technology, United States – sequence: 5 givenname: Lele orcidid: 0000-0002-7073-0450 surname: Chen fullname: Chen, Lele organization: OPPO US Research Center, InnoPeak Technology, United States – sequence: 6 givenname: Junsong orcidid: 0000-0002-7901-8793 surname: Yuan fullname: Yuan, Junsong organization: SUNY Buffalo, United States – sequence: 7 givenname: Yi orcidid: 0000-0003-2126-6054 surname: Xu fullname: Xu, Yi organization: OPPO US Research Center, InnoPeak Technology, United States – sequence: 8 givenname: Andreas orcidid: 0000-0002-8151-3726 surname: Geiger fullname: Geiger, Andreas organization: University of Tübingen, Germany |
CODEN | ITVGEA |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
Copyright_xml | – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
DOI | 10.1109/TVCG.2023.3247082 |
Discipline | Engineering |
EISSN | 1941-0506 |
EndPage | 2742 |
Genre | orig-research Journal Article |
ISSN | 1077-2626
IsPeerReviewed | true |
IsScholarly | true |
Issue | 5 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-8366-5088 0000-0002-7901-8793 0000-0003-2126-6054 0000-0002-7073-0450 0000-0002-7416-1216 0000-0001-8582-1024 0000-0002-8151-3726 0000-0003-2150-2176 |
PMID | 37027699 |
PQID | 2792118456 |
PQPubID | 75741 |
PageCount | 11 |
PublicationDate | 2023-05-01 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States – name: New York |
PublicationTitle | IEEE transactions on visualization and computer graphics |
PublicationTitleAbbrev | TVCG |
PublicationTitleAlternate | IEEE Trans Vis Comput Graph |
PublicationYear | 2023 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 2732 |
SubjectTerms | Cameras; Decomposition; Dynamics; free-viewpoint video; Image reconstruction; immersive video; Modelling; NeRF; Neural rendering; Reconstruction; Rendering; Rendering (computer graphics); Representations; Spatiotemporal phenomena; Streaming media; Three-dimensional displays; Websites
Title | NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields |
URI | https://ieeexplore.ieee.org/document/10049689 https://www.ncbi.nlm.nih.gov/pubmed/37027699 https://www.proquest.com/docview/2792118456 https://www.proquest.com/docview/2798710717 |
Volume | 29 |