Confocal Stereo

We present confocal stereo, a new method for computing 3D shape by controlling the focus and aperture of a lens. The method is specifically designed for reconstructing scenes with high geometric complexity or fine-scale texture. To achieve this, we introduce the confocal constancy property, which states that as the lens aperture varies, the pixel intensity of a visible in-focus scene point will vary in a scene-independent way that can be predicted by prior radiometric lens calibration. The only requirement is that incoming radiance within the cone subtended by the largest aperture is nearly constant. First, we develop a detailed lens model that factors out the distortions in high-resolution SLR cameras (12MP or more) with large-aperture lenses (e.g., f/1.2). This allows us to assemble an A×F aperture-focus image (AFI) for each pixel, which collects the undistorted measurements over all A apertures and F focus settings. In the AFI representation, confocal constancy reduces to color comparisons within regions of the AFI, and leads to focus metrics that can be evaluated separately for each pixel. We propose two such metrics and present initial reconstruction results for complex scenes, as well as for a scene with known ground-truth shape.
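To make the per-pixel evaluation concrete, the following is a minimal NumPy sketch of a confocal-constancy focus criterion on one pixel's AFI. It assumes the AFI has already been geometrically aligned, and that radiometric calibration provides a per-aperture relative-exitance factor predicting how an in-focus point's intensity scales with aperture. The function names and the `exitance` array are illustrative, not the paper's actual interface; the paper's metrics involve color comparisons within AFI regions, of which this column-wise variance is only the simplest instance.

```python
import numpy as np

def confocal_focus_metric(afi, exitance):
    """Simple confocal-constancy focus metric for one pixel.

    afi      : (A, F, 3) array of color measurements over A apertures
               and F focus settings (assumed aligned/undistorted).
    exitance : (A,) calibrated relative-exitance factors predicting the
               scene-independent intensity scaling with aperture.

    Returns one value per focus setting: the variance across apertures
    of the normalized intensities. For the in-focus setting, confocal
    constancy predicts this variance is (near) zero.
    """
    norm = afi / exitance[:, None, None]   # undo predicted aperture scaling
    return norm.var(axis=0).sum(axis=-1)   # variance across apertures, per focus

def estimate_focus(afi, exitance):
    # the focus setting minimizing the metric is taken as in-focus
    return int(np.argmin(confocal_focus_metric(afi, exitance)))
```

A synthetic AFI whose in-focus column scales exactly with the calibrated exitance (and whose other columns do not) is correctly resolved by `estimate_focus`.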
Kiriakos N. Kutulakos (University of Toronto)
Samuel W. Hasinoff (University of Toronto)
Related Publications
Samuel W. Hasinoff and Kiriakos N. Kutulakos, Confocal Stereo. Proc. 9th European Conf. on Computer Vision, Graz, Austria, pp. 620-634, 2006. Winner, Best Paper Honorable Mention.   PDF (3591KB)    SpringerLink entry
Samuel W. Hasinoff and Kiriakos N. Kutulakos, Confocal Stereo. Int. J. Computer Vision, vol. 81, no. 1, pp. 82-104, 2009. Special Issue on ECCV 2006 Best Papers.   PDF (4104KB)    SpringerLink entry
Supplementary Material

videos presented at ECCV 2006

  • Illustration of confocal constancy [DivX, 0.5MB]
    Note how the appearance of the in-focus points on the light-colored foreground strand remains constant across aperture setting, while it varies for the out-of-focus points.
  • Alignment across focus setting [DivX, 1.5MB]
  • Alignment across aperture setting [DivX, 0.5MB]
  • Reconstruction results [DivX, 115MB]
    See the accompanying transcript from the talk.

"hair" dataset

Our first test scene was a wig with a messy hairstyle, approximately 25cm tall, surrounded by several artificial plants. Reconstruction results for this scene show that our confocal constancy criteria lead to very detailed depth maps, at the resolution of individual strands of hair, despite the scene's complex geometry and despite the fact that depths can vary greatly within small image neighborhoods (e.g., toward the silhouette of the hair). By comparison, the 3x3 variance operator produces uniformly lower-resolution results, and generates smooth "halos" around narrow geometric structures like individual strands of hair. In many cases, these "halos" are larger than the width of the spatial operator, as blurring causes distant points to influence the results.
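For reference, the 3x3 variance operator used as the baseline above is the classic window-based focus measure from depth-from-focus: for each focus setting, compute the local 3x3 intensity variance, then pick the setting that maximizes it at each pixel. A minimal sketch (grayscale focus stack, edge-replicated borders; function names are illustrative):

```python
import numpy as np

def variance_focus_measure(focus_stack):
    """3x3 spatial variance at every pixel of a grayscale focus stack.

    focus_stack : (F, H, W) array, one image per focus setting.
    Returns an (F, H, W) array of local variances.
    """
    F, H, W = focus_stack.shape
    pad = np.pad(focus_stack, ((0, 0), (1, 1), (1, 1)), mode="edge")
    # stack the 9 shifted copies of each image; their variance along the
    # new axis is the local 3x3 variance at every pixel
    windows = np.stack([pad[:, dy:dy + H, dx:dx + W]
                        for dy in range(3) for dx in range(3)])
    return windows.var(axis=0)

def variance_depth_map(focus_stack):
    # per pixel, the focus setting maximizing local variance is in-focus
    return variance_focus_measure(focus_stack).argmax(axis=0)
```

The 3x3 spatial support is exactly what causes the "halos" described above: a sharp strand raises the variance of all windows that overlap it, so its depth bleeds into neighboring pixels.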

In low-texture regions, such as the cloth flower petals and leaves, fitting a model to the entire AFI allows us to exploit defocused texture from nearby scene points. Window-based methods like variance, however, generally yield even better results in such regions, because they propagate focus information from nearby texture more directly, by implicitly assuming a smooth scene geometry. Like all focus measures, those based on confocal constancy are uninformative in extremely untextured regions, i.e., when the AFI is constant. However, by using the proposed confidence measure, we can detect many of these low-texture pixels. To better visualize the result of filtering out these pixels, we replace them using a simple variant of PDE-based inpainting (Bertalmio et al., 2000).

"box" dataset

To quantify reconstruction accuracy, we used a tilted planar scene consisting of a box wrapped in newsprint. The plane of the box was measured with a FaroArm Gold 3D touch probe whose single-point accuracy was ±0.05mm in the camera's workspace. To relate probe coordinates to coordinates in the camera's reference frame we used the Camera Calibration Toolbox for Matlab along with further correspondences between image features and 3D coordinates measured by the probe. A similar procedure was used to estimate the mapping between focus settings and the depth of in-focus points, i.e., the dist(.) function in Eq. (10).
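One way such a dist(.) mapping can be estimated is by least-squares fitting a smooth function of the focus setting to the probe-measured depths of in-focus calibration points. The sketch below uses a low-order polynomial purely as an illustrative model; the parameterization actually used in the paper may differ, and `fit_dist` is a hypothetical name.

```python
import numpy as np

def fit_dist(focus_settings, depths, deg=2):
    """Fit a smooth dist() mapping from lens focus setting to the depth
    of in-focus points, from calibration pairs (e.g., measured with a
    touch probe). A low-order polynomial is an assumed model here.

    focus_settings : (N,) focus-setting values.
    depths         : (N,) corresponding in-focus depths.
    Returns a callable dist(f) -> depth.
    """
    coeffs = np.polyfit(focus_settings, depths, deg)
    return lambda f: np.polyval(coeffs, f)
```

Given calibration pairs that follow a smooth trend, the fitted dist(.) interpolates depth at any intermediate focus setting used during capture.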

"plastic" dataset

Our third test scene was a rigid, near-planar piece of transparent plastic, formerly used as packaging material, which was covered with dirt, scratches, and fingerprints. This plastic object was placed in front of a dark background and lit obliquely to enhance the contrast of its limited surface texture. Reconstruction results for this scene illustrate that at high resolution, even transparent objects may have enough fine-scale surface texture to be reconstructed using focus- or defocus-based techniques. In general, wider baseline methods like standard stereo cannot exploit such surface texture easily because textured objects behind the transparent surface may interfere with matching.

"teddy" dataset

Our final test scene was captured with lower-quality equipment: one of the earliest digital SLR cameras (the Canon EOS 10D), paired with a low-quality zoom lens. The scene consists of a teddy bear with coarse fur, seated in front of a hat and several cushions, with a variety of ropes in the foreground. Since little of this scene is composed of the fine pixel-level texture found in previous scenes, this final dataset provides an additional test for low-texture areas.

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under the RGPIN program, and by an Ontario Premier's Research Excellence Award.
Site last modified on Wednesday, June 6, 2018
Send questions or comments about this page to Kyros Kutulakos