RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering

Deheng Zhang*1, 2, Jingyu Wang*1, Shaofei Wang1, Marko Mihajlovic1, Sergey Prokudin1, Hendrik P.A. Lensch2, Siyu Tang1

(* denotes equal contribution)

1ETH Zürich   2University of Tübingen  

TL;DR


We present RISE-SDF, a method for reconstructing the geometry and material of glossy objects while achieving high-quality relighting.


Abstract

In this paper, we propose a novel end-to-end relightable neural inverse rendering system that achieves high-quality reconstruction of geometry and material properties, thus enabling high-quality relighting. The cornerstone of our method is a two-stage approach for learning a better factorization of scene parameters. In the first stage, we develop a reflection-aware radiance field using a neural signed distance field (SDF) as the geometry representation, and deploy a multilayer perceptron (MLP) to estimate indirect illumination. In the second stage, we introduce a novel information-sharing network structure to jointly learn the radiance field and the physically based factorization of the scene. For the physically based factorization, to reduce the noise caused by Monte Carlo sampling, we apply a split-sum approximation with a simplified Disney BRDF and a mip-mapped cube map as the environment light representation. In the relighting phase, to enhance the quality of indirect illumination, we propose a second split-sum algorithm that traces secondary rays within the split-sum rendering framework. Furthermore, no dataset or protocol exists for quantitatively evaluating inverse rendering on glossy objects, so we have created a new dataset with ground-truth BRDF parameters and relighting results to assess the quality of material reconstruction and relighting. Our experiments demonstrate that our algorithm achieves state-of-the-art performance in inverse rendering and relighting, with particularly strong results on highly reflective objects.
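To make the split-sum idea concrete, the sketch below shows the standard formulation (Karis, 2013) that the physically based branch builds on: the specular integral factors into a roughness-blurred environment lookup and a precomputed BRDF integral. This is a minimal Python illustration, not our implementation; prefiltered_env and brdf_lut are hypothetical callables standing in for the mip-mapped environment cube map and a 2D BRDF-integration table.

import numpy as np

def split_sum_specular(normal, view_dir, roughness, f0,
                       prefiltered_env, brdf_lut):
    """Specular shading under the split-sum approximation.
    prefiltered_env(direction, roughness) and brdf_lut(n_dot_v, roughness)
    are hypothetical lookups standing in for a mip-mapped environment cube
    map and a precomputed 2D BRDF-integration table."""
    # Reflect the (surface-to-camera) view direction about the normal.
    refl = 2.0 * np.dot(normal, view_dir) * normal - view_dir
    n_dot_v = max(float(np.dot(normal, view_dir)), 1e-4)
    # First sum: prefiltered incident radiance; higher roughness selects a
    # blurrier mip level of the cube map.
    prefiltered_color = prefiltered_env(refl, roughness)
    # Second sum: scale and bias applied to the Fresnel reflectance at
    # normal incidence (f0), read from the precomputed table.
    scale, bias = brdf_lut(n_dot_v, roughness)
    return prefiltered_color * (f0 * scale + bias)

With a simplified Disney BRDF, f0 interpolates between a dielectric constant and the albedo: f0 = 0.04 * (1 - metallic) + albedo * metallic.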


Method Overview

Our pipeline. The colors of the features in the figure indicate different feature concatenation combinations. (1) Given a location, the progressive hash grid, together with the geometry MLP, predicts the geometry feature and the corresponding volume rendering weight. (2) For the color representation, separate networks predict per-sample albedo, metallic, and roughness; they share information between the direct volume rendering pipeline (black arrow) and the physically based rendering pipeline (red arrow). (3) The per-sample values (all except the blue value) are aggregated via volume rendering. We then compute the expected surface intersection and trace a secondary ray to estimate the occlusion probability. Finally, the direct and indirect colors are blended.
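Purely as a reading aid, the sketch below mirrors steps (1)–(3) as a single per-ray function. Every callable argument (sample_field, material_heads, shade_direct, shade_indirect, trace_occlusion) is a hypothetical stand-in for a component in the figure, not our released code.

import torch
import torch.nn.functional as F

def render_ray(ray_o, ray_d, sample_field, material_heads,
               shade_direct, shade_indirect, trace_occlusion):
    """Per-ray data flow of the pipeline (hypothetical interfaces):
      sample_field(o, d)    -> per-sample depths (S,), features (S, C),
                               volume rendering weights (S,), normals (S, 3)
      material_heads(feat)  -> per-sample albedo, metallic, roughness
      shade_direct(...)     -> split-sum shading under the environment light
      shade_indirect(...)   -> shading of the traced secondary ray
      trace_occlusion(x, d) -> probability that the reflected ray is blocked"""
    depth, feat, w, n = sample_field(ray_o, ray_d)      # step (1)
    albedo, metallic, rough = material_heads(feat)      # step (2)

    # Step (3): aggregate per-sample values into per-ray values.
    def vr(x):
        return (w[:, None] * x).sum(dim=0)
    albedo, metallic, rough = vr(albedo), vr(metallic), vr(rough)
    normal = F.normalize(vr(n), dim=-1)

    # Expected surface intersection, then a secondary ray along the mirror
    # direction to estimate the occlusion probability.
    x_surf = ray_o + (w * depth).sum() * ray_d
    refl = ray_d - 2.0 * (ray_d @ normal) * normal
    occ = trace_occlusion(x_surf, refl)

    # Blend the direct (environment) and indirect (traced) colors.
    c_direct = shade_direct(normal, -ray_d, albedo, metallic, rough)
    c_indirect = shade_indirect(x_surf, refl, albedo, metallic, rough)
    return (1.0 - occ) * c_direct + occ * c_indirect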


Comparison

Compared with the state-of-the-art method NeRO [1], our results show superior albedo and roughness estimation with significantly less training time. As an end-to-end relightable model, our algorithm produces high-quality relighting results without noise or aliasing.

Material

We compare our material estimates with those of the baseline models NeRO [1], NMF [2], and NDRMC [3].

Relighting

We compare our relighting results with those of the baseline models NeRO [1], NMF [2], ENVIDR [4], and GShader [5].

More Relighting

Relighting with different environment maps

[1] NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images. SIGGRAPH 2023.
[2] Neural Microfacet Fields for Inverse Rendering. ICCV 2023.
[3] Shape, Light, and Material Decomposition from Images using Monte Carlo Rendering and Denoising. NeurIPS 2022.
[4] ENVIDR: Implicit Differentiable Render with Neural Environment Lighting. ICCV 2023.
[5] GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces. CVPR 2024.


Supplementary Video



Citation


@inproceedings{zhang2025rise,
  title={RISE-SDF: A Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering},
  author={Zhang, Deheng and Wang, Jingyu and Wang, Shaofei and Mihajlovic, Marko and Prokudin, Sergey and Lensch, Hendrik and Tang, Siyu},
  booktitle={International Conference on 3D Vision (3DV)},
  year={2025}
}