Program


Yan Karklin
A factor model for learning higher-order features in natural images.

The visual system is a hierarchy of processing stages. In addition to encoding increasingly complex features of the input, each stage in this pathway performs highly non-linear computations. What is the functional role of these non-linear behaviors, and how do we incorporate them into generative models of natural images?

A number of non-linear properties of visual neurons can be predicted from the statistical dependencies observed in natural images. For example, the magnitudes of linear filter outputs are correlated; normalizing filter responses removes this correlation (making the responses more independent and marginally Gaussian) and reproduces neural gain control. Moreover, the pattern of these correlations is itself highly informative and can be used to infer the context of patches sampled from a large scene. Here I will focus on these statistical patterns and describe a generative model that captures them with a set of factors in the log-covariance space of a multivariate Gaussian distribution. Trained on natural images, the model learns a compact code for the correlations observed in pixel (or linear-feature) distributions, one that represents more abstract properties of the image. I will also connect this work to recent generative models that incorporate multiplicative interactions between observed and latent variables.
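The abstract does not spell out the parameterization, but one natural reading of "factors in log-covariance space" is a zero-mean Gaussian whose covariance is the matrix exponential of a linear combination of learned symmetric factors. The sketch below illustrates that construction; the factor matrices `A`, the dimensions, and the helper names are illustrative assumptions, not the author's code.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(0)

D = 8  # data dimensionality (e.g., pixels of a small patch); illustrative
K = 3  # number of log-covariance factors; illustrative

# Hypothetical factors A_j: random symmetric matrices standing in for
# learned covariance basis functions.
A = []
for _ in range(K):
    M = 0.3 * rng.normal(size=(D, D))
    A.append((M + M.T) / 2)

def covariance(y):
    """C(y) = expm(sum_j y_j A_j); positive definite for any real y."""
    return expm(sum(yj * Aj for yj, Aj in zip(y, A)))

def sample(y):
    """Draw x ~ N(0, C(y)) for a given latent vector y."""
    L = np.linalg.cholesky(covariance(y))
    return L @ rng.normal(size=D)

def log_likelihood(x, y):
    """log N(x; 0, C(y))."""
    C = covariance(y)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (x @ np.linalg.solve(C, x) + logdet + D * np.log(2 * np.pi))

y = rng.normal(size=K)  # latents; a full model would place a prior (e.g., sparse) here
x = sample(y)
print(log_likelihood(x, y))
```

A virtue of working in log-covariance space is that the matrix exponential yields a valid (positive-definite) covariance for any real-valued latents y, so the latents are free to encode correlation patterns, i.e., image context, rather than pixel values directly.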


Brief Bio.

Yan Karklin received his Ph.D. in Computer Science from Carnegie Mellon University under the supervision of Mike Lewicki. At CMU he was also affiliated with the Center for the Neural Basis of Cognition. Since 2008 he has been a postdoctoral fellow at New York University and the Howard Hughes Medical Institute, working with Eero Simoncelli. His interests lie in computational models of processing in visual cortex, natural image statistics, and hierarchical statistical modeling.