De‐NeRF: Ultra‐high‐definition NeRF with deformable net alignment
Published in | Computer animation and virtual worlds, Vol. 35, No. 3 |
---|---|
Main Authors | Hou, Jianing; Zhang, Runjie; Wu, Zhongqi; Meng, Weiliang; Zhang, Xiaopeng; Guo, Jianwei |
Format | Journal Article |
Language | English |
Published | Chichester: Wiley Subscription Services, Inc., 01.05.2024 |
Subjects | Data smoothing; deformable convolution net; Deformation effects; Formability; High resolution; Misalignment; neural radiance fields; Rendering; Visual effects; voxel‐based embedding |
Online Access | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcav.2240 ; https://www.proquest.com/docview/3071608142 |
Abstract | Neural Radiance Field (NeRF) can render complex 3D scenes with viewpoint‐dependent effects. However, little work has explored its limitations in high‐resolution settings, especially at ultra‐high resolutions (e.g., 4K). Specifically, existing NeRF‐based methods face severe limitations in reconstructing high‐resolution real scenes, for example, a large number of parameters, misalignment of the input data, and over‐smoothing of details. In this paper, we present a novel and effective framework, called De‐NeRF, based on NeRF and a deformable convolutional network, to achieve high‐fidelity view synthesis in ultra‐high‐resolution scenes: (1) incorporating a deformable convolution unit that resolves the misalignment of high‐resolution input data; (2) presenting a density sparse voxel‐based approach that greatly reduces training time while rendering results with higher accuracy. Compared to existing high‐resolution NeRF methods, our approach improves the rendering quality of high‐frequency details and achieves better visual effects in 4K high‐resolution scenes.
We present a novel framework, De‐NeRF, for achieving high‐fidelity view synthesis in ultra‐high‐resolution scenes. The key technical components of De‐NeRF include a hybrid volumetric representation that significantly speeds up training, and a deformable alignment unit that resolves the misalignment of high‐resolution input data. |
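The deformable alignment idea described in the abstract can be illustrated with a short sketch: sampling offsets are predicted from a pair of feature maps, and a deformable convolution resamples the misaligned features toward the reference. This is only an illustrative PyTorch sketch built on torchvision's DeformConv2d; the module structure and parameter names are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of a feature-alignment unit built on deformable convolution,
# in the spirit of a deformable alignment module. Names are illustrative only.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableAlignUnit(nn.Module):
    """Aligns a (possibly misaligned) feature map to a reference feature map."""

    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Predict 2D sampling offsets (x, y per kernel tap) from both inputs.
        self.offset_conv = nn.Conv2d(
            2 * channels, 2 * kernel_size * kernel_size, kernel_size, padding=pad
        )
        # The deformable convolution samples the misaligned features at the
        # predicted offsets, producing features aligned to the reference.
        self.deform_conv = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, feat_misaligned: torch.Tensor, feat_ref: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(torch.cat([feat_misaligned, feat_ref], dim=1))
        return self.deform_conv(feat_misaligned, offsets)


# Usage: align one 64-channel feature map to another of the same size.
align = DeformableAlignUnit(channels=64)
aligned = align(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
print(aligned.shape)  # torch.Size([1, 64, 128, 128])
```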
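Likewise, the density sparse voxel idea, storing densities in a voxel grid, interpolating them trilinearly, and skipping empty voxels, can be sketched as below. The grid resolution, occupancy test, and softplus activation are illustrative assumptions and not details taken from the paper.

```python
# Minimal sketch of a density sparse-voxel lookup: query densities by trilinear
# interpolation into a voxel grid and skip samples that fall into empty voxels.
import torch
import torch.nn.functional as F


def query_density(density_grid: torch.Tensor,
                  occupancy: torch.Tensor,
                  points: torch.Tensor) -> torch.Tensor:
    """density_grid: (1, 1, D, H, W) raw densities; occupancy: (D, H, W) bool;
    points: (N, 3) coordinates already normalized to [-1, 1], ordered (x, y, z)."""
    # Coarse occupancy test: map each point to its voxel index and drop empties.
    D, H, W = occupancy.shape
    idx = ((points + 1) / 2 * torch.tensor([W - 1, H - 1, D - 1])).long()
    occupied = occupancy[idx[:, 2], idx[:, 1], idx[:, 0]]

    sigma = torch.zeros(points.shape[0])
    if occupied.any():
        grid_coords = points[occupied].view(1, -1, 1, 1, 3)      # (1, N', 1, 1, 3)
        sampled = F.grid_sample(density_grid, grid_coords,
                                mode="bilinear", align_corners=True)  # trilinear on 5D input
        sigma[occupied] = F.softplus(sampled.reshape(-1))         # non-negative density
    return sigma


# Usage: 64^3 grid, roughly half of it active, 1024 random query points.
grid = torch.randn(1, 1, 64, 64, 64)
occ = grid[0, 0] > 0.0
print(query_density(grid, occ, torch.rand(1024, 3) * 2 - 1).shape)  # torch.Size([1024])
```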
Authors | Hou, Jianing (Institute of Automation, Chinese Academy of Sciences); Zhang, Runjie (University of California San Diego); Wu, Zhongqi (Chinese Academy of Sciences); Meng, Weiliang (Institute of Automation, Chinese Academy of Sciences; ORCID 0000-0002-3221-4981); Zhang, Xiaopeng (Institute of Automation, Chinese Academy of Sciences; ORCID 0000-0002-0092-6474); Guo, Jianwei (Institute of Automation, Chinese Academy of Sciences; jianwei.guo@nlpr.ia.ac.cn; ORCID 0000-0002-3376-1725) |
Copyright | 2024 John Wiley & Sons, Ltd. |
DOI | 10.1002/cav.2240 |
EISSN | 1546-427X |
Grant Information | National Natural Science Foundation of China (62172416, 62262043, 62376271, U21A20515, U22B2034); Guangdong Basic and Applied Basic Research Foundation (2023B1515120026); Beijing Natural Science Foundation (L231013); CAS Youth Innovation Promotion Association (2022131) |
ISSN | 1546-4261 |
PageCount | 14 |