Dor Verbin

I'm a research scientist at Google DeepMind in San Francisco. Before that I got my Ph.D. in computer science from the Harvard School of Engineering and Applied Sciences, where I was advised by Todd Zickler.

Previously, I received a double B.Sc. in physics and in electrical engineering from Tel Aviv University, after which I spent a couple of years as a researcher at Camerai (acquired by Apple), working on real-time computer vision algorithms for mobile devices.

Email  /  Google Scholar  /  Twitter  /  Github

Research

I am interested in computer vision and graphics. More specifically, I enjoy working on methods that use domain-specific knowledge of geometry, graphics, and physics to improve computer vision systems.

NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections
Dor Verbin, Pratul Srinivasan, Peter Hedman, Benjamin Attal, Ben Mildenhall, Richard Szeliski, Jonathan T. Barron
SIGGRAPH Asia, 2024
project page / arXiv

Casting reflection cones inside NeRF lets us synthesize photorealistic specularities in real-world scenes.

Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering
Benjamin Attal, Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Matthew O'Toole, Pratul P. Srinivasan
ECCV, 2024   (Oral Presentation)
project page / arXiv

Using radiance caches, importance sampling, and control variates helps reduce bias in inverse rendering, resulting in better estimates of geometry, materials, and lighting.

IllumiNeRF: 3D Relighting without Inverse Rendering
Xiaoming Zhao, Pratul Srinivasan, Dor Verbin, Keunhong Park, Ricardo Martin-Brualla, Philipp Henzler
arXiv, 2024
project page / arXiv

We perform 3D relighting by sampling from a single-image relighting diffusion model and distilling the samples into a latent-variable NeRF.

Eclipse: Disambiguating Illumination and Materials using Unintended Shadows
Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Todd Zickler, Pratul Srinivasan
CVPR, 2024   (Oral Presentation)
project page / video / arXiv

Shadows cast by unobserved occluders provide a cue for recovering materials and high-frequency illumination from images of diffuse objects.

ReconFusion: 3D Reconstruction with Diffusion Priors
Rundi Wu*, Ben Mildenhall*, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, Aleksander Holynski*
CVPR, 2024
project page / arXiv

We finetune an image diffusion model to accept multiview inputs, then use it to regularize radiance field reconstruction.

Generative Powers of Ten
Xiaojuan Wang, Janne Kontkanen, Brian Curless, Steve Seitz, Ira Kemelmacher-Shlizerman, Ben Mildenhall, Pratul P. Srinivasan, Dor Verbin, Aleksander Holynski
CVPR, 2024   (Highlight)
project page / arXiv

We use a generative text-to-image model to create videos with extreme zoom-ins.

Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis
Christian Reiser, Stephan J. Garbin, Pratul Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman*, Andreas Geiger*
SIGGRAPH, 2024
project page / video / arXiv

Applying anti-aliasing to a discrete opacity grid lets you render a hard representation into a soft image, and this enables highly detailed mesh recovery.

Boundary Attention: Learning to Find Faint Boundaries at Any Resolution
Mia G. Polansky, Charles Herrmann, Junhwa Hur, Deqing Sun, Dor Verbin, Todd Zickler
arXiv, 2023
project page / arXiv

We design a new boundary-aware attention mechanism to quickly find boundaries in images with low SNR. The output is similar to the Field of Junctions, but this method runs ~100x faster!

Nuvo: Neural UV Mapping for Unruly 3D Representations
Pratul P. Srinivasan, Stephan J. Garbin, Dor Verbin, Jonathan T. Barron, Ben Mildenhall
ECCV, 2024
project page / video / arXiv

We use neural fields to recover editable UV mappings for challenging geometry (e.g. volumetric representations like NeRF or DreamFusion, or meshes extracted from them).

Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul Srinivasan, Peter Hedman
ICCV, 2023   (Oral Presentation, Best Paper Finalist)
project page / video / arXiv

We combine mip-NeRF 360 and Instant NGP to reconstruct very large scenes.

Neural Microfacet Fields for Inverse Rendering
Alexander Mai, Dor Verbin, Falko Kuester, Sara Fridovich-Keil
ICCV, 2023
project page / video / code / arXiv

We treat each point in space as an infinitesimal volumetric surface element. Using Monte Carlo rendering to fit this representation to a collection of images yields accurate geometry, materials, and illumination.

BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis
Lior Yariv*, Peter Hedman*, Christian Reiser, Dor Verbin, Pratul Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall
SIGGRAPH, 2023
project page / video / arXiv

We achieve real-time view synthesis by baking a high quality mesh and fine-tuning a lightweight appearance model on top.

MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes
Christian Reiser, Richard Szeliski, Dor Verbin, Pratul Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman
SIGGRAPH, 2023
project page / video / arXiv

We achieve real-time view synthesis using a volumetric rendering model with a compact representation that combines a low-resolution 3D feature grid with high-resolution 2D feature planes.

Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul Srinivasan
CVPR, 2022   (Oral Presentation, Best Student Paper Honorable Mention)
project page / arXiv / code / video

We modify NeRF's parameterization of view-dependent appearance to better represent specular surfaces and recover accurate surface normals. Our method also enables view-consistent scene editing.

Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul Srinivasan, Peter Hedman
CVPR, 2022   (Oral Presentation)
project page / arXiv / code / video

We extend mip-NeRF to produce realistic results on unbounded scenes.

Field of Junctions: Extracting Boundary Structure at Low SNR
Dor Verbin, Todd Zickler
ICCV, 2021
project page / arXiv / code / video

By modeling each patch in an image as a generalized junction, our model exploits the consistency between boundary elements such as junctions, corners, and edges, allowing it to extract boundary structure from extremely noisy images where previous methods fail.

Unique Geometry and Texture from Corresponding Image Patches
Dor Verbin, Steven J. Gortler, Todd Zickler
TPAMI, 2021
arXiv / paper / video (combined with SfT)

We present a simple condition for the uniqueness of a solution to the shape from texture problem. We show that in the general case four views of a cyclostationary texture satisfy this condition and are therefore sufficient to uniquely determine shape.

Toward a Universal Model for Shape from Texture
Dor Verbin, Todd Zickler
CVPR, 2020
project page / paper / supplement / code and data / video

We formulate the shape from texture problem as a 3-player game. This game simultaneously estimates the underlying flat texture and object shape, and it succeeds for a large variety of texture types.


I'm also using Jon's website template.