Decomposing an object's appearance into representations of its materials and the surrounding illumination is difficult, even when the object's 3D shape is known beforehand. This problem is ill-conditioned because diffuse materials severely blur incoming light, and is ill-posed because diffuse materials under high-frequency lighting can be indistinguishable from shiny materials under low-frequency lighting. We show that it is possible to recover precise materials and illumination—even from diffuse objects—by exploiting unintended shadows, like the ones cast onto an object by the photographer who moves around it. These shadows are a nuisance in most previous inverse rendering pipelines, but here we exploit them as signals that improve conditioning and help resolve material-lighting ambiguities. We present a method based on differentiable Monte Carlo ray tracing that uses images of an object to jointly recover its spatially-varying materials, the surrounding illumination environment, and the shapes of the unseen light occluders that inadvertently cast shadows upon it.



We use the shadows cast by a camera operator when capturing a scene to disambiguate between the objects' materials and the lights illuminating them, and to recover high-frequency illumination content that is usually irrecoverable from diffuse objects. We model the illumination as an environment map masked by a temporally-varying occluder, and we jointly recover the spatially-varying BRDF parameters on the surface, the illumination, and the occluder shape corresponding to each image.
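To build intuition for why an occluder sharpens the problem, the masked-environment model can be sketched as follows. This is our own simplified illustration, not the paper's implementation: a single diffuse surface point is shaded by Monte Carlo integration over an environment map, with a hypothetical binary occluder masking part of the hemisphere. Comparing the occluded and unoccluded observations reveals directional lighting information that the diffuse (low-pass) integral alone would wash out.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hemisphere(n, rng):
    """Uniformly sample n directions on the upper hemisphere (z >= 0)."""
    z = rng.uniform(0.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)

def shade_diffuse(albedo, normal, env_fn, occluder_fn, n_samples=4096, rng=rng):
    """Monte Carlo estimate of Lambertian shading under a masked environment.

    env_fn(dirs) -> per-direction incoming radiance;
    occluder_fn(dirs) -> 1 where light passes, 0 where the occluder blocks it.
    """
    dirs = sample_hemisphere(n_samples, rng)
    cos = np.clip(dirs @ normal, 0.0, None)
    radiance = env_fn(dirs) * occluder_fn(dirs)
    # Uniform-hemisphere pdf = 1/(2*pi); Lambertian BRDF = albedo/pi.
    return (albedo / np.pi) * np.mean(radiance * cos) * 2.0 * np.pi

# Toy environment: dim ambient light plus one bright, narrow source.
normal = np.array([0.0, 0.0, 1.0])
light_dir = np.array([0.5, 0.0, np.sqrt(0.75)])
env = lambda d: 0.1 + 5.0 * np.clip(d @ light_dir, 0.0, None) ** 50
unoccluded = lambda d: np.ones(len(d))
# Hypothetical occluder blocking the half-space x > 0 (e.g. the photographer).
blocked = lambda d: (d[:, 0] <= 0).astype(float)

lit = shade_diffuse(0.8, normal, env, unoccluded)
shadowed = shade_diffuse(0.8, normal, env, blocked)
# The drop from `lit` to `shadowed` localizes the bright source to the
# blocked half of the hemisphere, information a single diffuse
# observation cannot provide.
```

The names `shade_diffuse`, `env`, and `blocked` are illustrative assumptions; in the actual method the occluder shape and the environment map are unknowns optimized jointly through a differentiable renderer rather than fixed as here.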


We show that the occlusion cue is sufficient for recovering accurate illumination, materials, and occluder shapes, and that our method works on a variety of shapes with different textures, even when the objects are highly diffuse.


The website template was borrowed from Michaël Gharbi and Ref-NeRF.