Wenzheng Chen

I'm a research scientist at NVIDIA Toronto AI Lab, where I mainly work on Computational Photography and 3D Vision.

I completed my Ph.D. at the University of Toronto under the supervision of Prof. Sanja Fidler and Prof. Kyros Kutulakos. Prior to that, I earned my master's degree at IRC, Shandong University, collaborating with Prof. Yangyan Li, Prof. Changhe Tu, and Prof. Baoquan Chen. I obtained my bachelor's degree at Taishan College, Shandong University.

I interned at Algolux in summer 2018, NVIDIA in fall 2018, and Snapchat in summer 2019.

Email  /  Google Scholar  /  LinkedIn

profile photo
Research

Generally, my research focuses on computational photography and 3D vision. More specifically, I explore how to utilize various imaging systems, including digital cameras, LiDAR, structured light, and SPADs, to capture a scene and predict its 3D attributes, such as geometry, texture, surface material, environment lighting, and more.

I am particularly interested in making imaging systems differentiable and embedding them within deep learning frameworks, which allows us to combine the optical and physical priors built into imaging systems with the power of deep learning to boost 3D perception performance.

My Ph.D. work mainly explores differentiable rendering, structured light, and non-line-of-sight imaging. In the future, I would like to explore a broader range of imaging technologies, such as time-of-flight, photometric stereo, polarization, and spectral imaging, or even medical, astronomical, and scientific imaging. My long-term goal is to design 3D imaging software and hardware that can be used by everyone in the world.

News

  • I will join Peking University as a tenure-track Assistant Professor starting in April 2024.
  • One paper accepted by SIGGRAPH 2023.
  • One paper accepted by ICCV 2023.
  • Three papers accepted by CVPR 2023.
Hiring

  • We are actively looking for interns, Master's students, and Ph.D. students. Feel free to drop me a line if you are interested in my research or in potential collaborations.
  • For graduate school applicants, we have two openings for Ph.D. students applying in 2024 and enrolling in September 2025; please contact me in advance if you are interested.
Selected Publications

    Representative papers are highlighted; the full publication list is available on Google Scholar.

    Flexible Isosurface Extraction for Gradient-Based Mesh Optimization
    Tianchang Shen, Jacob Munkberg, Jon Hasselgren, Kangxue Yin, Zian Wang, Wenzheng Chen, Zan Gojcic, Sanja Fidler, Nicholas Sharp*, Jun Gao*
    ACM Transactions on Graphics (SIGGRAPH), 2023
    project page / pdf / supp / acm / bibtex

    Boosting 3D Reconstruction with Differentiable Imaging Systems
    Wenzheng Chen
    Ph.D. Thesis, 2023

    Neural Fields meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes
    Zian Wang, Tianchang Shen, Jun Gao, Shengyu Huang, Jacob Munkberg, Jon Hasselgren, Zan Gojcic, Wenzheng Chen, Sanja Fidler
    CVPR, 2023
    project page / arXiv / codes / video / bibtex

    Combined with other NVIDIA technologies, FEGR is one component of the Neural Reconstruction Engine announced in the GTC September 2022 keynote.

    GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images
    Jun Gao, Tianchang Shen, Zian Wang, Wenzheng Chen, Kangxue Yin, Daiqing Li, Or Litany, Zan Gojcic, Sanja Fidler
    NeurIPS, 2022   (Spotlight Presentation)
    project page / arXiv / codes / video / bibtex

    We develop a 3D generative model that generates textured meshes, bridging successes in differentiable surface modeling, differentiable rendering, and 2D GANs.

    Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion
    Zian Wang, Wenzheng Chen, David Acuna, Jan Kautz, Sanja Fidler
    ECCV, 2022
    project page / arXiv / codes / video / bibtex

    We propose a hybrid lighting representation to model spatially-varying lighting in complex outdoor street scenes.

    Extracting Triangular 3D Models, Materials, and Lighting From Images
    Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, Sanja Fidler
    CVPR, 2022   (Oral Presentation)
    project page / arXiv / codes / video / bibtex

    Nvdiffrec reconstructs 3D meshes with materials from multi-view images by combining differentiable surface modeling with a differentiable renderer. The method supports NVIDIA Neural DriveSim.

    DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer
    Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clement Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler
    NeurIPS, 2021
    project page / arXiv / codes / video / bibtex

    DIB-R++ is a high-performance differentiable renderer that combines rasterization and ray tracing and supports advanced lighting and material effects. We further embed it in deep learning and jointly predict geometry, texture, lighting, and material from a single image.

    Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering
    Yuxuan Zhang*, Wenzheng Chen*, Jun Gao, Huan Ling, Yinan Zhang,
    Antonio Torralba, Sanja Fidler (* Equal contribution)
    ICLR, 2021   (Oral Presentation)
    project page / arXiv / codes / video / bibtex

    We explore StyleGAN as a multi-view image generator and train an inverse graphics model on StyleGAN images. Once trained, the inverse graphics model further helps disentangle and manipulate the StyleGAN latent code using graphics knowledge.

    Learned Feature Embeddings for Non-Line-of-Sight Imaging and Recognition
    Wenzheng Chen*, Fangyin Wei*, Kyros Kutulakos,
    Szymon Rusinkiewicz, Felix Heide (* Equal contribution)
    SIGGRAPH Asia, 2020  
    project page / paper / codes / bibtex

    We propose to learn feature embeddings for non-line-of-sight imaging and recognition by propagating features through physical modules.

    Learning Deformable Tetrahedral Meshes for 3D Reconstruction
    Jun Gao, Wenzheng Chen, Tommy Xiang, Alec Jacobson, Morgan Mcguire, Sanja Fidler
    NeurIPS, 2020  
    project page / arXiv / codes / video / bibtex

    We predict deformable tetrahedral meshes from images or point clouds, which support arbitrary topologies. We also design a differentiable renderer for tetrahedra, allowing 3D reconstruction from 2D supervision only.

    Auto-Tuning Structured Light by Optical Stochastic Gradient Descent
    Wenzheng Chen*, Parsa Mirdehghan*, Sanja Fidler, Kyros Kutulakos (* Equal contribution)
    CVPR, 2020  
    project page / paper / codes / video / bibtex

    We present optical SGD, a computational imaging technique that allows an active depth imaging system to automatically discover optimal illumination patterns and decoding algorithms.

    Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer
    Wenzheng Chen, Jun Gao*, Huan Ling*, Edward J. Smith*,
    Jaakko Lehtinen, Alec Jacobson, Sanja Fidler (* Equal contribution)
    NeurIPS, 2019  
    project page / arXiv / codes / bibtex

    An interpolation-based differentiable renderer for 3D meshes that supports vertex positions, vertex colors, multiple lighting models, and texture mapping, and can be easily embedded in neural networks.

    Steady-state Non-Line-of-Sight Imaging
    Wenzheng Chen, Simon Daneau, Fahim Mannan, Felix Heide
    CVPR, 2019   (Oral Presentation)
    project page / arXiv / codes / bibtex

    We show that hidden objects can be recovered from conventional images instead of transient images.

    Fast Interactive Object Annotation with Curve-GCN
    Huan Ling*, Jun Gao*, Amlan Kar, Wenzheng Chen, Sanja Fidler (* Equal contribution)
    CVPR, 2019  
    project page / arXiv / codes / bibtex

    We predict object polygon contours with graph neural networks and introduce a novel 2D differentiable rendering loss, which renders a polygon contour into a segmentation mask and back-propagates the loss to help optimize the polygon vertices.

    Optimal Structured Light a la Carte
    Parsa Mirdehghan, Wenzheng Chen, Kyros Kutulakos
    CVPR, 2018   (Spotlight Presentation)
    project page / paper / codes / bibtex

    Alacarte designs structured light patterns from a machine learning perspective, where patterns are automatically optimized by minimizing the disparity error under any given imaging condition.

    Synthesizing Training Images for Boosting Human 3D Pose Estimation
    Wenzheng Chen, Huan Wang, Yangyan Li, Hao Su, Zhenhua Wang,
    Changhe Tu, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen
    3DV, 2016   (Oral Presentation)
    project page / arXiv / codes / bibtex

    3D pose estimation with a model trained on synthetic data and domain adaptation.


    I stole the website template from Jon Barron.