CSC2530 (Spring 2021)

Computational Imaging and 3D Sensing

General Information

Professor: Kyros Kutulakos

Office hour: T 1-2pm (online)

TA: Parsa Mirdehghan

Lectures: W 2-5pm (online synchronous)

Speaker Planning Session: F noon-2pm (online)

Quercus entry | Course dropbox | Discord | MarkUs link for coursework submissions

About the Course

From the advanced cameras in today's smartphones to the 3D sensors of next-generation autonomous vehicles, computational imaging has already begun to revolutionize the way we sense the world in 2D, 3D and beyond. This, however, is just the beginning: microscopes the size of a penny, cameras that can see around corners, and the world's first image of a black hole are already a reality, resulting from the ever-closer convergence of optics, sensing and computation.

This seminar course will cover the foundations of the field of computational imaging as well as survey some of the latest advances in the field. Specific topics will include high-speed imaging, lensless imaging, optical coding, advanced CMOS sensors, time-of-flight imaging, single-photon imaging, non-line-of-sight imaging, and computational microscopy.

Textbooks & other reading materials: There is no textbook for the course. All papers on the list below can be found on the course dropbox.

Prerequisites: Prior exposure to computer vision, computer graphics, sensing, optics, or numerical optimization is desirable but not required.

Marking Scheme

- Paper presentations (30%)
- Weekly reaction reports (20%)
- Class project (50%)

Weekly reaction reports (due 2pm each Wednesday): written individually & submitted every week that a student's team is not presenting. Sample reaction reports can be found on the course dropbox.

Presentations: each paper in the readings list is presented by a team of 3 students.

Class project: can be individual or group-based; project proposals & project due date TBA

Lecture Calendar (tentative, see course dropbox for paper PDFs)

Part I: Steady-State Computational Imaging (Long Timescales Relative to Speed of Light)

Date | Topic | Readings | Supplementary Readings (optional)

Jan 13

Course Roadmap, Modern Camera Pipelines

M. S. Brown, “Understanding color & the in-camera image processing pipeline for computer vision," IEEE ICCV Tutorial Notes, 2019, pp. 1–246.

Jan 20

Conventional Cameras & their Limits

[R01] A. Abdelhamed, M. A. Brubaker, and M. S. Brown, “Noise Flow: Noise Modeling With Conditional Normalizing Flows,” Proc. IEEE ICCV, 2019, pp. 3165–3173.

[R02] A. Lohmann, “Scaling laws for lens systems,” Appl Optics, 1989.

[R03] O. S. Cossairt, D. Miau, and S. K. Nayar, “Scaling law for computational imaging using spherical optics,” J Opt Soc Am A, vol. 28, no. 12, pp. 2540–2553, 2011. Read Sections 1-5 only.

M. S. Levoy, “Lectures on Digital Photography: Noise and ISO,” 2016, pp. 1–50.

M. S. Levoy, “Lectures on Digital Photography: Optics I,” 2016, pp. 1–81.

M. S. Levoy, “Lectures on Digital Photography: Optics II,” 2016, pp. 1–48.

Jan 27

Multi-Perspective & Light-Field Imaging

[R04] R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” CSTR 2005-02, Stanford, 2005.

[R05] K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, “PiCam: an ultra-thin high performance monolithic camera array,” Proc. SIGGRAPH Asia 2013.

[R06] J.-X. Chai, X. Tong, S.-C. Chan, and H.-Y. Shum, “Plenoptic sampling,” Proc. ACM SIGGRAPH, 2000, pp. 307–318.

E. H. Adelson and J. Wang, “Single lens stereo with a plenoptic camera,” IEEE T-PAMI, 1992.

M. S. Levoy and P. Hanrahan, “Light field rendering,” Proc. ACM SIGGRAPH, 1996.

B. Wilburn, et al., “High performance imaging using large camera arrays,” Proc. ACM SIGGRAPH, 2005.

S. Peleg, et al., “Mosaicing on adaptive manifolds,” IEEE T-PAMI, vol. 22, no. 10, pp. 1144–1154, 2000.

R. C. Bolles, H. H. Baker, and D. H. Marimont, “Epipolar-plane image analysis: An approach to determining structure from motion,” Int. J. Computer Vision, vol. 1, no. 1, pp. 7–55, 1987.

T. E. Bishop and P. Favaro, “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution,” IEEE T-PAMI, 2012.

N. Wadhwa, et al., “Synthetic depth-of-field with a single-camera mobile phone,” ACM TOG, vol. 37, no. 4, Jul. 2018.

A. Punnappurath, A. Abuolaim, M. Afifi, and M. S. Brown, “Modeling Defocus-Disparity in Dual-Pixel Sensors,” IEEE Int. Conf. on Computational Photography, 2020, pp. 1–12.

Feb 3

Optical Coding I: Masks & Diffusers for High-Speed Imaging & Lensless Imaging

M. Wei, N. Sarhangnejad, Z. Xia, N. Gusev, N. Katic, R. Genov, and K. N. Kutulakos, “Coded Two-Bucket Cameras for Computer Vision,” Proc. ECCV, 2018, pp. 54–71.

O. Cossairt, M. Gupta, and S. K. Nayar, “When Does Computational Imaging Improve Performance?,” IEEE-TIP, 2012.

N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from stills: Lensless imaging with rolling shutter,” Proc. IEEE Int. Conf. on Computational Photography, 2019.

S. S. Khan, V. Sundar, V. Boominathan, A. Veeraraghavan, and K. Mitra, “FlatNet: Towards Photorealistic Scene Reconstruction from Lensless Measurements,” IEEE T-PAMI, 2020.

V. Boominathan, J. K. Adams, J. T. Robinson, and A. Veeraraghavan, “PhlatCam: Designed Phase-Mask Based Thin Lensless Camera,” IEEE T-PAMI, vol. 42, no. 7, pp. 1618–1629, 2020.

[R07] D. Liu, J. Gu, Y. Hitomi, M. Gupta, and S. K. Nayar, “Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging,” IEEE T-PAMI, vol. 36, no. 2, pp. 248–260, 2014.

[R08] M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: Thin, Lensless Cameras Using Coded Aperture and Computation,” IEEE TCI, vol. 3, no. 3, pp. 384–397, Jul. 2017.

[R09] N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica, vol. 5, no. 1, 2018.

Feb 10

Optical Coding II: Computational Lenses for High-Res Imaging & Optical Multiplexing

[R10] O. S. Cossairt, D. Miau, and S. K. Nayar, “Scaling law for computational imaging using spherical optics,” J Opt Soc Am A, vol. 28, no. 12, pp. 2540–2553, 2011. Read rest of paper.

[R11] Y. Peng, Q. Fu, F. Heide, and W. Heidrich, “The Diffractive Achromat: Full Spectrum Computational Imaging with Diffractive Optics,” ACM TOG, vol. 35, no. 4, Jul. 2016.

[R12] Q. Guo, Z. Shi, Y.-W. Huang, E. Alexander, C.-W. Qiu, F. Capasso, and T. Zickler, “Compact single-shot metalens depth sensors inspired by eyes of jumping spiders,” Proceedings of the National Academy of Sciences, vol. 116, no. 46, pp. 22959–22965, Nov. 2019.

R. J. Lin, et al., “Achromatic metalens array for full-colour light-field imaging,” Nat. Nanotechnol., vol. 14, no. 3, pp. 227–231, Mar. 2019.

Feb 24

Exploiting Coherence for High-Performance Imaging

[R13] J. Holloway, Y. Wu, M. K. Sharma, O. Cossairt, and A. Veeraraghavan, “SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography,” Science Advances, vol. 3, no. 4, Apr. 2017.

[R14] Y. Wu, M. K. Sharma, and A. Veeraraghavan, “WISH: wavefront imaging sensor with high resolution,” Light: Science & Applications, vol. 8, no. 1, p. 44, May 2019.

[R15] K. L. Bouman, “Extreme imaging via physical model inversion: seeing around corners and imaging black holes,” PhD thesis, Massachusetts Institute of Technology, 2017. Chapters 2–3 only.

Part II: Computational Single-Photon Imaging & Event-Based Imaging

Mar 3

Imaging with SPADs and DVS cameras

[R17] A. Ingle, A. Velten, and M. Gupta, “High Flux Passive Imaging with Single-Photon Sensors,” Proc. IEEE CVPR, 2019.

[R18] S. Ma, S. Gupta, A. C. Ulku, C. Bruschini, E. Charbon, and M. Gupta, “Quanta burst photography,” ACM TOG, vol. 39, no. 4, Jul. 2020.

[R19] H. Kim, S. Leutenegger, and A. J. Davison, “Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera,” Proc. ECCV, 2016, pp. 349–364.

N. Matsuda, O. Cossairt, and M. Gupta, “MC3D: Motion Contrast 3D Scanning,” in Proc. IEEE ICCP, 2015, pp. 1–10.

Part III: Computational Time-Resolved Imaging (Short Timescales Relative to Speed of Light)

Mar 10

Impulse & Continuous-Wave Time-of-Flight 3D Sensing

R. Lange and P. Seitz, “Solid-state time-of-flight range camera,” IEEE J. Quantum Electron., vol. 37, no. 3, pp. 390–397, 2001.

A. Kadambi and R. Raskar, “Rethinking Machine Vision Time of Flight With GHz Heterodyning,” IEEE Access, vol. 5, pp. 26211–26223, 2017.

A. K. Pediredla, A. C. Sankaranarayanan, M. Buttafava, A. Tosi, and A. Veeraraghavan, “Signal processing based pile-up compensation for gated single-photon avalanche diodes,” arXiv preprint, 2018.

[R20] A. Kirmani, D. Venkatraman, D. Shin, A. Colaco, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “First-Photon Imaging,” Science, vol. 343, no. 6166, pp. 58–61, Jan. 2014.

[R21] A. Gupta, A. Ingle, and M. Gupta, “Asynchronous Single-Photon 3D Imaging,” Proc. IEEE ICCV, 2019, pp. 7909–7918.

[R22] M. Gupta, A. Velten, S. K. Nayar, and E. Breitbach, “What are optimal coding functions for time-of-flight imaging?,” ACM TOG, vol. 37, no. 2, 2018.

Part IV: Computational Light Transport

Mar 17

Steady-State Computational Light Transport

M. O'Toole, R. Raskar, and K. N. Kutulakos, “Primal-dual Coding to Probe Light Transport,” Proc. ACM SIGGRAPH, 2012.

S. Achar, J. R. Bartels, W. L. Whittaker, K. N. Kutulakos, and S. G. Narasimhan, “Epipolar time-of-flight imaging,” ACM TOG (SIGGRAPH), vol. 36, no. 4, 2017.

M. Sharma, C. A. Metzler, S. Nagesh, O. Cossairt, R. G. Baraniuk, and A. Veeraraghavan, “Inverse Scattering via Transmission Matrices: Broadband Illumination and Fast Phase Retrieval Algorithms,” IEEE TCI, May 2019.

H. Kubo, S. Jayasuriya, T. Iwaguchi, T. Funatomi, Y. Mukaigawa, and S. G. Narasimhan, “Programmable Non-Epipolar Indirect Light Transport: Capture and Analysis,” IEEE T-VCG, Oct. 2019.

[R22] M. O'Toole, S. Achar, S. G. Narasimhan, and K. N. Kutulakos, “Homogeneous codes for energy-efficient illumination and imaging,” ACM TOG (SIGGRAPH), vol. 34, no. 4, 2015.

[R23] S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat Commun, vol. 1, p. 81, 2010.

[R24] M. Sheinin, Y. Y. Schechner, and K. N. Kutulakos, “Computational Imaging on the Electric Grid,” IEEE CVPR, 2017.

Mar 24

Time-Resolved Computational Light Transport

[R25] A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, “Femto-photography: capturing and visualizing the propagation of light,” ACM TOG, vol. 32, no. 4, 2013.

[R26] A. Kotwal, A. Levin, and I. Gkioulekas, “Interferometric transmission probing with coded mutual intensity,” ACM TOG, vol. 39, no. 4, Jul. 2020.

[R27] D. B. Lindell and G. Wetzstein, “Three-dimensional imaging through scattering media based on confocal diffuse tomography,” Nat Commun, vol. 11, no. 1, pp. 1–8, Sep. 2020.

Mar 31

Non-Line-of-Sight Imaging

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat Commun, vol. 3, p. 745, Mar. 2012.

M. O'Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature, vol. 555, no. 7696, pp. 338–341, Mar. 2018.

K. L. Bouman, “Extreme imaging via physical model inversion: seeing around corners and imaging black holes,” PhD thesis, Massachusetts Institute of Technology, 2017. Chapter 6 only.

[R28] S. Xin, S. Nousias, K. N. Kutulakos, A. C. Sankaranarayanan, S. G. Narasimhan, and I. Gkioulekas, “A Theory of Fermat Paths for Non-Line-of-Sight Shape Reconstruction,” Proc. IEEE CVPR, 2019, pp. 1–10.

[R29] D. B. Lindell, G. Wetzstein, and M. O'Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM TOG (SIGGRAPH), vol. 38, no. 4, Jul. 2019.

[R30] X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature, vol. 572, no. 7771, pp. 620–623, Aug. 2019.

Part V: Computational Imaging Meets Deep Learning

Apr 7

Computational Imaging Meets Deep Learning

A. Chakrabarti, “Learning Sensor Multiplexing Design through Back-propagation,” Proc. Neural Information Processing Systems, 2016, pp. 3081–3089.

E. Tseng, F. Yu, Y. Yang, F. Mannan, K. Arnaud, D. Nowrouzezahrai, J.-F. Lalonde, F. Heide, “Hyperparameter optimization in black-box image processing using differentiable proxies,” ACM TOG, vol. 38, no. 4, pp. 1–14, Jul. 2019.

G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljačić, C. Denz, D. A. B. Miller, and D. Psaltis, “Inference in artificial intelligence with deep optics and photonics,” Nature, vol. 588, no. 7836, pp. 39–47, Dec. 2020.

A. Ozcan, G. Barbastathis, and G. Situ, “On the use of deep learning for computational imaging,” Optica, vol. 6, no. 8, pp. 921–943, Aug. 2019.

[R31] Y. Peng, Q. Sun, X. Dun, G. Wetzstein, W. Heidrich, and F. Heide, “Learned large field-of-view imaging with thin-plate optics,” ACM TOG (SIGGRAPH Asia), vol. 38, no. 6, Nov. 2019.

[R32] A. Muthumbi, A. Chaware, A. Erdmann, B. Kappes, B. Judkewitz, K. Kim, K. C. Zhou, P. C. Konda, R. Chen, and R. Horstmeyer, “Learned sensing: jointly optimized microscope hardware for accurate image classification,” Biomed. Opt. Express, BOE, vol. 10, no. 12, pp. 6351–6369, Dec. 2019.

[R33] W. Chen, P. Mirdehghan, S. Fidler, and K. N. Kutulakos, “Auto-Tuning Structured Light by Optical Stochastic Gradient Descent,” Proc. IEEE CVPR, pp. 5970–5980, 2020.