My goal is to understand and improve the algorithms that agents can use to learn from data and reason about their experience. Learning can be formalized in the language of statistics, but the statistical formulation usually requires solving difficult computational problems, such as probabilistic inference or optimization. As a result, most learning systems rely on algorithms for these core problems.
Of these, I am particularly interested in algorithms for (approximate) Bayesian inference, Monte Carlo estimation, and continuous and discrete optimization. Although these problems seem distinct, they share a great deal of structure. My work often touches on this theme, as when we showed how to simulate from a probability distribution by optimizing a random function.
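A minimal illustration of this idea is the Gumbel-max trick, which draws an exact sample from a categorical distribution by maximizing a randomly perturbed objective. This sketch is offered only as a familiar instance of the general theme; it is not necessarily the specific method referenced above.

```python
import math
import random

def gumbel_max_sample(logits):
    """Sample index i with probability proportional to exp(logits[i])
    by taking the argmax of Gumbel-perturbed logits."""
    # -log(-log(U)) with U ~ Uniform(0, 1) is a standard Gumbel draw.
    perturbed = [l - math.log(-math.log(random.random())) for l in logits]
    return max(range(len(logits)), key=lambda i: perturbed[i])

# Empirically compare sampled frequencies to the softmax probabilities.
logits = [0.0, 1.0, 2.0]
n = 100_000
counts = [0, 0, 0]
for _ in range(n):
    counts[gumbel_max_sample(logits)] += 1
z = sum(math.exp(l) for l in logits)
probs = [math.exp(l) / z for l in logits]
freqs = [c / n for c in counts]
```

The optimization view is what makes the trick useful: replacing the argmax with a softmax, or perturbing structured objectives, leads to relaxed gradient estimators for discrete latent variables.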
I am also interested in learning with structured data. With colleagues, I helped build the first artificial agent to play the board game Go at a superhuman level, developed structured models of human-written source code, improved the training of latent variable models of time series, and designed relaxed gradient estimators for models with structured latent variables.
If you would like to study for a graduate degree with me, you should apply through the CS department or the Statistics department. If you would like to work with me as a postdoctoral researcher, I encourage you to apply through the Vector Institute.
Here are some of my recorded talks, which range from academic lectures to wistful reflections.