Schedule

Part I: Foundations
Date    Topic    Papers
Thu, Sep 13    Introduction
Light fields & plenoptic representations: Idealized (pinhole) imaging
  • [Kyros]:   E. Adelson and J. Bergen,  The plenoptic function and the elements of early vision,  Computational Models of Visual Processing, M. Landy and J. A. Movshon (eds), 1991, pp. 3-20
      Read this paper first. The important chapters are those entitled "The Plenoptic Function" and "Plenoptic Structures". Be sure to study Figs 1.4-1.6 carefully and understand them well: they are very important, and will come up several times during the course, including next week. The later chapters are less relevant for this course and you may skip them on first reading.
  • [Kyros]:   M. Levoy and P. Hanrahan,  Light field rendering,  Proc. SIGGRAPH 1996
      This is the paper that introduced light fields to the graphics (and vision) community. Be sure to study Sections 3.1 and 3.2 as well as Fig. 7. If you have questions, team up with a classmate or send me email. For a historical perspective, look at the slides below from Todor Georgiev's talk about the long history behind this concept. (A quick summary of the plenoptic function and the two-plane light-field parameterization appears just after this reading list.)
  • [Kyros]:   S. Peleg, B. Rousso, A. Rav-Acha, and A. Zomet,  Mosaicing on Adaptive Manifolds,  IEEE Trans. PAMI, v. 22, n. 10 (2000), 1144-1154
      This is the paper that introduced the idea of sweep panoramas I demonstrated in class with a Sony pocket camera. You should focus on the simple case of horizontal motion in Section 3.1 and skip the discussion of the more complex (and much harder to realize in practice) cases of forward and/or curved motion (Figs. 8-11 and Secs. 3.2-3.4). (A minimal code sketch of the horizontal-motion case also appears after this reading list.)
  • [Kyros]:   S. Peleg, M. Ben-Ezra, Y. Pritch,  Omnistereo: Panoramic Stereo Imaging,  IEEE Trans. PAMI, v. 23, n. 3 (2001), 279-290
      And this is the paper that extended the concept of strip panoramas to the case of stereoscopic viewing. Recent Sony pocket cameras implement this functionality as well, calling it a 3D sweep panorama. Figs. 3,4,7 illustrate the key idea; you can skip Sections 4-7 unless you are really interested in this topic and want to delve a lot deeper into the subject.
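
As a quick reference for the first two papers above: Adelson and Bergen define the full 7D plenoptic function, and Levoy and Hanrahan's light field is its 4D reduction (in free space radiance is constant along a ray, so a static scene at a fixed wavelength needs only four ray coordinates). In LaTeX notation:

    % 7D plenoptic function (Adelson & Bergen): radiance seen in direction
    % (theta, phi), at wavelength lambda and time t, from viewpoint (V_x, V_y, V_z):
    P(\theta, \phi, \lambda, t, V_x, V_y, V_z)

    % 4D light field / "light slab" (Levoy & Hanrahan): a ray is indexed by its
    % intersections (u,v) and (s,t) with two parallel reference planes:
    L(u, v, s, t)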
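
To make the horizontal-motion case in the Peleg et al. mosaicing paper concrete, here is a minimal sweep-panorama sketch: paste together the central vertical strip of each frame of a horizontally panning video. The file names are hypothetical, and the sketch assumes imageio with video support, a roughly constant panning rate, and no alignment between frames:

    import numpy as np
    import imageio.v2 as imageio

    def sweep_panorama(frames, strip_width=8):
        """Strip panorama from a horizontally panning video: take the
        central vertical strip of each frame and paste them side by side."""
        strips = []
        for frame in frames:
            cx = frame.shape[1] // 2                       # center column
            strips.append(frame[:, cx - strip_width // 2 : cx + strip_width // 2])
        return np.concatenate(strips, axis=1)              # panorama grows horizontally

    # Hypothetical usage: pan.mp4 was captured while panning the camera.
    frames = imageio.mimread("pan.mp4", memtest=False)
    pano = sweep_panorama(frames)
    imageio.imwrite("panorama.png", pano)

Taking the strip a fixed distance to the left or right of the center column, instead of at the center, yields the two views of an Omnistereo-style stereoscopic panorama (the key idea behind Figs. 3, 4 and 7 of the Peleg, Ben-Ezra and Pritch paper).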


Additional background:
  • C. Zhou and S. K. Nayar,  Computational Cameras: Convergence of Optics and Processing,  IEEE Trans. Image Processing, v.20, n.12, 2011
      This is one of two excellent recent surveys of the current state of the art in designing "computational cameras". Read them to get a very broad overview of the field and what has been accomplished in the last 10 or so years. This particular paper organizes prior work in terms of the coding strategies used. We will study these strategies in more detail in the final third of the course.
  • G. Wetzstein, I. Ihrke, D. Lanman, W. Heidrich,  Computational Plenoptic Imaging,  Computer Graphics Forum, 2011
      This is the second of the two surveys. It organizes prior work in terms of specific tasks (i.e., specific camera limitations) and the acquisition schemes used to overcome them.
  • M. Levoy,  CS 448A - Computational Photography,  Stanford University
      A graduate course giving a problem-based overview of computational photography. Be sure to look at the lecture slides from the first 6 lectures, which offer an in-depth look at digital cameras, lenses, sensors and their limitations.
  • S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen,  The Lumigraph,  Proc. SIGGRAPH 1996
      This paper appeared concurrently with the Light Field paper and described essentially the same idea. Its emphasis is primarily on issues related to sampling and acquisition of light fields with a conventional camera. I highly recommend you read Section 2.1 and look at Figures 1-4 in conjunction with reading the Light Field paper. Figure 7 also shows a very nice visualization of light field slices, much like Adelson and Bergen's Figs. 1.4-1.6. The rest of the paper requires a decent background in wavelets, so read it only if you are familiar with them.
Thu, Sep 20    Light fields & plenoptic representations: Imaging with finite apertures   
  • [Kyros]:   E. Adelson and J. Wang,  Single lens stereo with a plenoptic camera,  IEEE Trans. PAMI, v.14, n.2, 1992
      The modern precursor of the Lytro camera, appearing almost 15 years prior to Ng's work. The camera was designed for stereoscopic viewing rather than refocusing. Read this paper first, esp. Sections 1-4, and make sure to thoroughly internalize Figs. 4-6 and the thin-lens equations (1)-(4); the thin-lens relation is restated just after this reading list for reference. You can then jump ahead to Ng's thesis below, and read it in conjunction with the remaining sections of this paper.
  • [Kyros]:   R. Ng,  Digital light field photography,  PhD Dissertation, Stanford University
      This dissertation describes the theory and design behind the Lytro light field camera (Ren Ng founded Lytro after graduating). You should read Chapters 1-4, which describe the principles behind focusing, refocusing and the camera's design in a detailed but very accessible way; a minimal refocusing sketch also appears after this reading list. This thesis received the ACM Doctoral Dissertation Award and is a must-read for anyone interested in computational photography!
  • [Kyros]:   C. Perwass and L. Wietzke,  Single Lens 3D-Camera with Extended Depth-of-Field,  Proc SPIE: Human Vision and Electronic Imaging, v. 8291, pp. 1-15 (Feb. 2012)
      This paper describes the principle behind another commercially-available light field camera designed by Raytrix. Although the camera also uses a microlens array, it relies on a slightly different operating principle both for the design of the microlenses and for its digital refocusing algorithms.
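
For reference while reading Adelson and Wang's equations (1)-(4): the underlying relation is the standard thin-lens equation (the symbols below are generic, not necessarily theirs):

    % Thin-lens equation: a point at distance s_o in front of a lens of
    % focal length f comes into focus at distance s_i behind the lens:
    \frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}

For example, with f = 50 mm and s_o = 1000 mm, 1/s_i = 1/50 - 1/1000, so s_i is about 52.6 mm; placing the sensor anywhere else produces a blur circle whose diameter grows with the aperture.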
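
To see why a captured light field permits refocusing after the fact (the core of Ng's Chapters 1-4), here is a minimal shift-and-add sketch: each sub-aperture view is shifted in proportion to its offset from the central view and the views are averaged, so scene points at one depth line up (sharp) while points at other depths do not (blurred). The array layout, file name and alpha parameter are illustrative assumptions, not Ng's actual formulation:

    import numpy as np

    def refocus(lightfield, alpha):
        """Shift-and-add refocusing.

        lightfield: array of shape (U, V, H, W), one grayscale sub-aperture
        image per (u, v) lens position.  alpha selects the synthetic focal
        plane by scaling the per-view shift.
        """
        U, V, H, W = lightfield.shape
        out = np.zeros((H, W), dtype=np.float64)
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its offset from the center.
                # (np.roll wraps at the borders -- a simplification for a sketch.)
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

    # Hypothetical usage: lf.npy holds a (9, 9, H, W) grid of sub-aperture views.
    lf = np.load("lf.npy").astype(np.float64)
    near = refocus(lf, alpha=1.5)    # focus nearer than the capture plane
    far  = refocus(lf, alpha=-1.5)   # focus farther away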


Additional background:
  • A. Torralba and W. T. Freeman,  Accidental pinhole and pinspeck cameras: Revealing the scene outside the picture,  Proc. CVPR 2012
      A very interesting generalization of the camera obscura room photography idea I showed in class. In this paper, a regular window acts as a wide-aperture (but lensless) camera. The paper describes several neat tricks that one can use to "see the unseen": recovering what lies outside a window that illuminates a room from just images of the room itself. You can skip Sections 3.2-3.4 if you don't have the necessary vision background.
  • R. Ng, M. Levoy, M. Bredif, G. Duval, and M. Horowitz,  Light field photography with a hand-held plenoptic camera,  Stanford Tech Report CTSR 2005-02
      This is the original tech report detailing Ren Ng's light field camera. I am listing it here for reference; all of this material is covered better in his thesis.
  • B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy,  High Performance Imaging Using Large Camera Arrays,  Proc. ACM Siggraph 2005
      Before Ng's design, the only way of capturing a light field was either by moving a single camera or by using an array of them. Here the array version is described, along with several applications. I will briefly talk about this paper in class next week; you should skim it in advance (esp. Sections 1,2,6) but careful reading is not required.
Thu, Sep 27    Sensor limitations: noise, color and dynamic range   

Additional background:
  • M. Levoy,  Sensors & Noise,  Lecture slides, CS448A, Stanford University, 2010
Thu, Oct 4    Edge-aware processing   

Additional background:
Thu, Oct 11    Volumetric space-time processing   

Additional background:
Thu, Oct 18    Image priors I: "internal" priors   

Additional background:
Thu, Oct 25    Image priors II: "external" priors   

Additional background:
Thu, Nov 1    Discrete & Continuous Optimization   
  • A. Levin, A. Rav-Acha, and D. Lischinski,  Spectral Matting,  IEEE Trans. PAMI, v.30, n.10, 2008 (WRR paper)


Additional background:
Thu, Nov 8    Light Field Cameras   

Additional background:
Thu, Nov 15    Coding and Multiplexing I: Mask-based codes   

Additional background:
Thu, Nov 22    Coding and Multiplexing II: Non-masking codes   

Additional background:
  • D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller,  Multiscale Gigapixel Photography,  Nature, vol. 486, pp. 386-389, 2012
Thu, Nov 29    High-performance imaging with conventional cameras   

Additional background:
  • J. Telleen, A. Sullivan, J. Yee, O. Wang, P. Gunawardane, I. Collins, and J. Davis,  Synthetic Shutter Speed Imaging,  Computer Graphics Forum, vol. 26, no. 3, pp. 591-598, 2007
Thu, Dec 6    Computational Illumination and Light Transport   
  • P. Sen, B. Chen, G. Garg, S. Marschner, M. Horowitz, M. Levoy, and H. P. A. Lensch,  Dual photography,  Proc. ACM Siggraph 2005 (WRR paper)


Additional background:
Thu, Dec 13    Image Alignment and Structure from Motion   
  • I. Kemelmacher-Shlizerman, E. Shechtman, R. Garg, S. M. Seitz,  Exploring Photobios,  Proc. ACM Siggraph 2011 (WRR paper)


Additional background:

Site last modified on Tuesday, November 20, 2012
Send questions or comments about this page to kyros@cs.toronto.edu