My goal is to understand and improve the algorithms that agents can use to learn from data and reason about their experience. This goal is typically framed in the language of statistics, and solved using algorithms for probabilistic inference or optimization.
Representation learning How we represent data affects how effectively we can process and understand it. I am interested in learning useful, robust representations, and especially in understanding when a representation is optimal. In recent work, we showed how to learn compressed representations of data with performance guarantees on a large, possibly infinite, set of downstream tasks.
Learning with discrete structure I am also motivated by applications to reasoning about data with discrete structure, such as integer programming or discrete reasoning tasks. Together with colleagues, we built the first artificial agent that plays the board game Go at a superhuman level, developed structured models of human-written source code, and designed relaxed gradient estimators for models with structured latent variables.
Inference and optimization Algorithms for Bayesian inference and optimization are the engines that drive machine learning. Although these two problems seem distinct, they share a great deal of structure. I am interested in this interplay: we showed how to simulate from a probability distribution by optimizing a random function, and how the choice of kinetic energy can condition an optimization method.
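One classic instance of sampling-by-optimization is the Gumbel-max trick: perturbing log-probabilities with independent Gumbel noise and taking an argmax yields an exact categorical sample. The sketch below is illustrative (the distribution `probs` and the helper name are my own, not from the text above), assuming only NumPy:

```python
import numpy as np

def gumbel_max_sample(probs, rng):
    # Perturb each log-probability with i.i.d. Gumbel(0, 1) noise,
    # then optimize (argmax): the maximizing index is distributed
    # exactly as Categorical(probs).
    gumbels = rng.gumbel(size=probs.shape)
    return int(np.argmax(np.log(probs) + gumbels))

rng = np.random.default_rng(0)
probs = np.array([0.2, 0.5, 0.3])  # example distribution

# Empirical check: sample frequencies should approach probs.
draws = [gumbel_max_sample(probs, rng) for _ in range(100_000)]
freqs = np.bincount(draws, minlength=3) / len(draws)
```

Here every sample is produced by solving a (trivial) optimization problem over a randomly perturbed objective, a pattern that generalizes to richer settings.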
If you would like to study for a graduate degree with me, you should apply through the CS department or the Statistics department. If you would like to work with me as a postdoctoral researcher, I encourage you to apply through the Vector Institute.
Here are some of my recorded talks, which cover the spectrum from academic talks to wistful reflections.