I'm a research scientist at Google Research in San Francisco. Before that I got my Ph.D. in computer science from the Harvard School of Engineering and Applied Sciences, where I was advised by Todd Zickler.
Previously, I received a double B.Sc. in physics and in electrical engineering from Tel Aviv University, after which I spent a couple of years as a researcher at Camerai (acquired by Apple), working on real-time computer vision algorithms for mobile devices.
I am interested in computer vision and graphics. More specifically, I enjoy working on methods that use domain-specific knowledge (e.g. from geometry and physics) to improve the efficiency, generalizability, and interpretability of artificial vision systems.
We achieve real-time view synthesis using a volumetric rendering model with a compact representation that combines a low-resolution 3D feature grid with high-resolution 2D feature planes.
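A minimal sketch of how such a combined representation might be queried (all names are illustrative, and nearest-neighbor lookups stand in for the interpolation a real renderer would use):

```python
import numpy as np

def query_features(p, grid3d, plane_xy, plane_xz, plane_yz):
    """Look up a feature vector for a 3D point by combining a
    low-resolution 3D grid with three high-resolution 2D planes.
    p has coordinates in [0, 1). Hypothetical sketch, not the
    actual implementation."""
    x, y, z = p
    L = grid3d.shape[0]    # low grid resolution
    H = plane_xy.shape[0]  # high plane resolution
    # Nearest-neighbor lookups for brevity; in practice the grid is
    # sampled trilinearly and each plane bilinearly.
    g = grid3d[int(x * L), int(y * L), int(z * L)]
    f_xy = plane_xy[int(x * H), int(y * H)]
    f_xz = plane_xz[int(x * H), int(z * H)]
    f_yz = plane_yz[int(y * H), int(z * H)]
    # Combine coarse grid features with fine plane features.
    return g + f_xy + f_xz + f_yz

# Tiny example: 4^3 grid and 16^2 planes, 2 feature channels.
C = 2
grid = np.ones((4, 4, 4, C))
pxy = np.ones((16, 16, C))
pxz = np.ones((16, 16, C))
pyz = np.ones((16, 16, C))
feat = query_features((0.5, 0.25, 0.75), grid, pxy, pxz, pyz)
```

The point of the split is memory: the 2D planes carry high-frequency detail cheaply, while the 3D grid resolves ambiguities the planes cannot.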
We treat each point in space as an infinitesimal volumetric surface element. Using Monte Carlo rendering to fit this representation to a collection of images yields accurate geometry, materials, and illumination.
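The rendering step in methods like these rests on the standard emission-absorption quadrature along each ray; a generic sketch (not our exact formulation) looks like:

```python
import numpy as np

def composite(densities, colors, deltas):
    """Standard emission-absorption volume rendering along one ray.
    densities: (N,) volume densities at the samples
    colors:    (N, 3) radiance at the samples
    deltas:    (N,) distances between consecutive samples"""
    densities = np.asarray(densities, float)
    colors = np.asarray(colors, float)
    deltas = np.asarray(deltas, float)
    # Opacity of each segment.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    # Expected color is the transmittance-weighted sum of sample colors.
    return (weights[:, None] * colors).sum(axis=0)

# A ray that hits an opaque red sample before a green one returns red.
rgb = composite([1e9, 1e9], [[1, 0, 0], [0, 1, 0]], [1.0, 1.0])
```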
We modify NeRF's parameterization of view-dependent appearance to better represent specular reflections, and recover accurate surface normals. Our method also enables view-consistent scene editing.
By modeling each patch in an image as a generalized junction, our model exploits the co-occurrence of boundary elements such as junctions, corners, and edges, and extracts boundary structure from extremely noisy images where previous methods fail.
We present a simple condition for the uniqueness of a solution to the shape from texture problem. We show that in the general case four views of a cyclostationary texture satisfy this condition and are therefore sufficient to uniquely determine shape.
We formulate the shape from texture problem as a 3-player game that simultaneously estimates the underlying flat texture and the object's shape, and succeeds on a wide variety of texture types.