From the advanced depth cameras in today's smartphones to the 3D sensors of next-generation autonomous vehicles, 3D sensing has already begun to revolutionize the way we perceive the world. This, however, is just the beginning: cameras that can see in pitch-black darkness and around corners, depth sensors with sub-micron accuracy or ranges of several kilometers, and cameras that can see under the skin are already becoming a reality. Key to these developments is the rapid convergence of optics, photonics, sensing, and computation.
This seminar course will provide an in-depth look at 3D sensing and related technologies, and the computational techniques used to model and exploit them. Specific topics will include "extreme imaging" with single-photon detectors; principles of time-of-flight imaging and LiDAR; structured-light imaging; 3D imaging through scattering media; and imaging around corners. The class format will be discussion-based, with approximately one paper covered per week and a term project.
Recommended preparation: Graduate-level exposure to one or more of computer vision, computer graphics, sensing, or numerical optimization is desirable but not required. Prior enrollment in CSC2529 (Computational Imaging) is not necessary.
Lectures: Wednesdays 9:00am-11:00am in UC244.
Instructor in-person office hour: Wednesdays noon-1:00pm (BA7270).
Instructor Zoom office hour: Tuesdays noon-1:00pm (Zoom link posted on Quercus).
Contact: Course announcements and general information will be posted on Quercus. Q&A related to papers discussed in lecture will take place on Piazza.
Course dropbox: The one-stop shop for all course materials. See Quercus for the link.
Role assignments & schedule: See Quercus for the link to the Google Sheet with role assignments and the full schedule.
| Date | Topic | Paper(s) | Event |
| --- | --- | --- | --- |
| Wed 18/1 | Introduction | Course overview, role-playing class format, course components, grading, etc. | |
| **Part I: Active Sensing Using Classical 3D Vision Cues** | | | |
| Wed 25/1 | Laser Triangulation | Main paper: Levoy et al., The digital Michelangelo project: 3D scanning of large statues (Proc. SIGGRAPH 2000). Essential background. To probe further. | |
| Wed 1/2 | Structured-Light Triangulation | Main paper: Gupta & Nayar, Micro Phase Shifting (Proc. CVPR 2012). To probe further. | |
| Wed 8/2 | Photometric Stereo | Main paper: Johnson et al., Microgeometry capture using an elastomeric sensor (Proc. SIGGRAPH 2011). To probe further. | |
| Wed 15/2 | Polarization Imaging | Main paper: Kadambi et al., Depth Sensing Using Geometrically Constrained Polarization Normals (IJCV 2017). To probe further. | |
| Wed 22/2 | Winter break (no lecture) | | |
| **Part II: Time-Resolved 3D Sensing** | | | |
| Wed 1/3 | Single-Photon LiDAR | Main paper: Kirmani et al., First-Photon Imaging (Science 2014). To probe further. | |
| Wed 8/3 | Indirect ToF | Main paper: Gupta et al., What Are Optimal Coding Functions for Time-of-Flight Imaging? (ACM TOG 2018). To probe further. | Project proposals due at 11:59pm |
| Wed 15/3 | Coherent ToF | Main paper: Kadambi et al., Rethinking Machine Vision Time of Flight With GHz Heterodyning (IEEE Access 2017). To probe further. | |
| Wed 22/3 | Non-Line-of-Sight 3D Imaging | Main paper: Velten et al., Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging (Nature Communications 2012). To probe further. | |
| Wed 29/3 | 3D Imaging Through Scatter | Main paper: Zhao et al., High Resolution, Deep Imaging Using Confocal Time-of-Flight Diffuse Optical Tomography (IEEE TPAMI 2021). To probe further. | |
| Wed 5/4 | Interferometric ToF | Main paper: Kotwal et al., Passive Micron-scale Time-of-Flight with Sunlight Interferometry (arXiv 2022). | Final project reports due at 11:59pm |