ReLight My NeRF: A Dataset for Novel View Synthesis and Relighting of Real World Objects / Toschi, Marco; De Matteo, Riccardo; Spezialetti, Riccardo; De Gregorio, Daniele; Di Stefano, Luigi; Salti, Samuele. - ELECTRONIC. - (2023), pp. 20762-20772. (Paper presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), held in Vancouver, BC, Canada, 17-24 June 2023) [10.1109/cvpr52729.2023.01989].
ReLight My NeRF: A Dataset for Novel View Synthesis and Relighting of Real World Objects
Di Stefano, Luigi; Salti, Samuele
2023
Abstract
In this paper, we focus on the problem of rendering novel views from a Neural Radiance Field (NeRF) under unobserved light conditions. To this end, we introduce a novel dataset, dubbed ReNe (Relighting NeRF), framing real-world objects under one-light-at-a-time (OLAT) conditions and annotated with accurate ground-truth camera and light poses. Our acquisition pipeline leverages two robotic arms holding, respectively, a camera and an omni-directional point-wise light source. We release a total of 20 scenes depicting a variety of objects with complex geometry and challenging materials. Each scene includes 2000 images, acquired from 50 different points of view under 40 different OLAT conditions. By leveraging the dataset, we perform an ablation study on the relighting capability of variants of the vanilla NeRF architecture and identify a lightweight architecture that can render novel views of an object under novel light conditions, which we use to establish a non-trivial baseline for the dataset. Dataset and benchmark are available at https://eyecan-ai.github.io/rene.
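The abstract states that each scene comprises 2000 images spanning 50 camera poses under 40 OLAT conditions. As a minimal sketch of that 50 × 40 grid, the snippet below maps a (camera, light) pair to a linear frame index and back; the linear indexing convention is an assumption for illustration only, not the official ReNe file layout.

```python
# Hypothetical indexing sketch for a ReNe-style scene:
# 50 camera poses x 40 OLAT light conditions = 2000 images per scene.
# The row-major index convention below is an assumption, not the dataset's API.
NUM_CAMERAS = 50
NUM_LIGHTS = 40

def frame_index(camera_id: int, light_id: int) -> int:
    """Map a (camera, light) pair to a linear frame index, cameras-major."""
    assert 0 <= camera_id < NUM_CAMERAS and 0 <= light_id < NUM_LIGHTS
    return camera_id * NUM_LIGHTS + light_id

def frame_pair(index: int) -> tuple[int, int]:
    """Inverse mapping: linear frame index back to a (camera, light) pair."""
    assert 0 <= index < NUM_CAMERAS * NUM_LIGHTS
    return divmod(index, NUM_LIGHTS)

print(frame_index(49, 39))  # last frame of a scene -> 1999
print(frame_pair(1999))     # -> (49, 39)
```

Under this convention, all 40 OLAT images of a given viewpoint are contiguous, which matches the one-light-at-a-time acquisition described in the abstract.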