C3-NeRF: modeling multiple scenes via conditional-cum-continual neural radiance fields


dc.contributor.author Singh, Prajwal
dc.contributor.author Tiwari, Ashish
dc.contributor.author Vashishtha, Gautam
dc.contributor.author Raman, Shanmuganathan
dc.coverage.spatial United States of America
dc.date.accessioned 2024-12-12T05:11:33Z
dc.date.available 2024-12-12T05:11:33Z
dc.date.issued 2024-11
dc.identifier.citation Singh, Prajwal; Tiwari, Ashish; Vashishtha, Gautam and Raman, Shanmuganathan, "C3-NeRF: modeling multiple scenes via conditional-cum-continual neural radiance fields", arXiv, Cornell University Library, arXiv:2411.19903, Nov. 2024.
dc.identifier.uri https://arxiv.org/pdf/2411.19903
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/10841
dc.description.abstract Neural radiance fields (NeRF) have exhibited highly photorealistic rendering of novel views through per-scene optimization over a single 3D scene. With the growing popularity of NeRF and its variants, they have become ubiquitous and have been identified as efficient 3D resources. However, they are still far from being scalable, since a separate model needs to be stored for each scene and the training time increases linearly with every newly added scene. Surprisingly, the idea of encoding multiple 3D scenes into a single NeRF model is heavily under-explored. In this work, we propose a novel conditional-cum-continual framework, called C3-NeRF, to accommodate multiple scenes into the parameters of a single neural radiance field. Unlike conventional approaches that leverage feature extractors and pre-trained priors for scene conditioning, we use simple pseudo-scene labels to model multiple scenes in NeRF. Interestingly, we observe that the framework is also inherently continual (via generative replay), with minimal, if any, forgetting of the previously learned scenes. Consequently, the proposed framework adapts to multiple new scenes without necessarily accessing the old data. Through extensive qualitative and quantitative evaluation using synthetic and real datasets, we demonstrate the inherent capacity of the NeRF model to accommodate multiple scenes with high-quality novel-view renderings without additional parameters. We provide implementation details and dynamic visualizations of our results in the supplementary file.
dc.description.statementofresponsibility by Prajwal Singh, Ashish Tiwari, Gautam Vashishtha and Shanmuganathan Raman
dc.language.iso en_US
dc.publisher Cornell University Library
dc.title C3-NeRF: modeling multiple scenes via conditional-cum-continual neural radiance fields
dc.type Article
dc.relation.journal arXiv
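The abstract describes conditioning a single NeRF on simple pseudo-scene labels so that one set of weights can represent several scenes. The paper's actual architecture is not given in this record, so the following is only a minimal NumPy sketch of the general idea: a shared MLP whose input concatenates a point's positional encoding with a learned embedding selected by an (assumed) integer pseudo-scene label. All names and dimensions here are illustrative, not the authors' implementation.

```python
import numpy as np

def positional_encoding(x, n_freqs=4):
    """Map 3D coordinates to sin/cos features, as in the original NeRF paper."""
    feats = [x]
    for i in range(n_freqs):
        feats.append(np.sin((2.0 ** i) * np.pi * x))
        feats.append(np.cos((2.0 ** i) * np.pi * x))
    return np.concatenate(feats, axis=-1)

class ConditionalNeRFSketch:
    """One MLP shared by all scenes; a pseudo-scene label selects an embedding.

    Hypothetical toy model: a single hidden layer standing in for the usual
    deep NeRF MLP, purely to illustrate label-based conditioning.
    """
    def __init__(self, n_scenes, embed_dim=8, hidden=64, n_freqs=4, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 * (1 + 2 * n_freqs) + embed_dim
        self.scene_embed = rng.normal(0.0, 0.1, (n_scenes, embed_dim))
        self.n_freqs = n_freqs
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 4))  # RGB + volume density
        self.b2 = np.zeros(4)

    def forward(self, xyz, scene_id):
        """xyz: (N, 3) sample points; scene_id: integer pseudo-scene label."""
        n = xyz.shape[0]
        emb = np.broadcast_to(self.scene_embed[scene_id],
                              (n, self.scene_embed.shape[1]))
        h = np.concatenate([positional_encoding(xyz, self.n_freqs), emb], axis=-1)
        h = np.maximum(h @ self.w1 + self.b1, 0.0)       # ReLU
        out = h @ self.w2 + self.b2
        rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))          # sigmoid -> [0, 1]
        sigma = np.maximum(out[:, 3], 0.0)               # nonnegative density
        return rgb, sigma

model = ConditionalNeRFSketch(n_scenes=3)
pts = np.zeros((5, 3))
rgb0, sigma0 = model.forward(pts, scene_id=0)
rgb1, sigma1 = model.forward(pts, scene_id=1)
```

Because only the scene embedding changes between the two calls, any difference in the outputs comes from the label alone, which is the conditioning mechanism the abstract refers to; the continual (generative-replay) part of the framework is not sketched here.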




