
Geoffrey Roeder



I am a graduate researcher at the Vector Institute for Artificial Intelligence, affiliated with the Machine Learning group at the University of Toronto. I work with Prof. David Duvenaud on algorithms that improve the speed and flexibility of both deep learning and Bayesian machine learning. More broadly, I am interested in machine learning algorithms that automatically discover underlying patterns in data and use them to generate new structured content.

I completed my BSc (2016) at the University of British Columbia, majoring in both Statistics and Computer Science. I spent summer 2016 working in Prof. Mark Schmidt's Machine Learning Lab, where I developed unsupervised learning algorithms for a Matlab machine learning toolbox. I spent fall 2017 working with Ferenc Huszár on improving black-box optimization methods for general non-differentiable functions. This term, I am part of the teaching staff for CSC412: Probabilistic Learning and Reasoning. This summer, I will be working with Microsoft Research Cambridge on improving machine learning algorithms for problems in synthetic biology.

Research

Curriculum Vitae

Email: roeder@cs.toronto.edu



Research






Design Motifs for Probabilistic Generative Design

Generative models can be used to produce designs that obey hard-to-specify constraints while remaining plausible. Recent examples include drug design, text with a desired sentiment, and images with desired captions. However, most previous applications of generative models to design are based on bespoke, ad hoc procedures. We give a unifying treatment of generative design based on probabilistic generative models. Some of these models can be trained end-to-end, can take advantage of both labelled and unlabelled examples, and automatically trade off between different design goals.
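As a rough illustration of one such motif (a sketch under my own assumptions, not the paper's code): given a pre-trained generative model, a design goal can be traded off against plausibility by gradient ascent in the model's latent space. The functions decode and score, and the standard-normal prior, are assumptions of the sketch.

```python
# Hypothetical sketch: search the latent space of a trained generative
# model for a design that trades off a design goal against plausibility.
import jax
import jax.numpy as jnp

def design_objective(z, decode, score):
    # decode: latent code -> design, from a pre-trained generative model
    # score:  design -> scalar measuring the design goal
    x = decode(z)
    log_prior = -0.5 * jnp.sum(z ** 2)  # standard-normal prior keeps z plausible
    return score(x) + log_prior

def optimize_design(z0, decode, score, steps=200, lr=1e-2):
    grad_fn = jax.grad(design_objective)        # gradient w.r.t. z only
    z = z0
    for _ in range(steps):
        z = z + lr * grad_fn(z, decode, score)  # gradient ascent on the goal
    return decode(z)
```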

Under review at ICLR 2018.





Backpropagation through the Void: Optimizing Control Variates for Black-Box Gradient Estimation

Gradient-based optimization is the foundation of deep learning and reinforcement learning. Even when the mechanism being optimized is unknown or not differentiable, optimization using high-variance or biased gradient estimates is still often the best strategy. We introduce a general framework for learning low-variance, unbiased gradient estimators for black-box functions of random variables. Our method uses gradients of a neural network trained jointly with model parameters or policies, and is applicable in both discrete and continuous settings. We demonstrate this framework for training discrete latent-variable models. We also give an unbiased, action-conditional extension of the advantage actor-critic reinforcement learning algorithm.
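For intuition, the sketch below shows the continuous special case of such an estimator in JAX: a learned surrogate serves as a control variate for the score-function term, and its reparameterized gradient is added back so the estimator remains unbiased. The names f, c, sample, and log_prob are placeholders for this sketch, not the paper's API.

```python
# Hedged sketch of a learned-control-variate gradient estimator for a
# possibly black-box f, with a reparameterizable sampler and a surrogate
# network c with parameters phi.
import jax
import jax.numpy as jnp

def surrogate_cv_gradient(theta, phi, key, f, c, sample, log_prob):
    eps = jax.random.normal(key)
    z = sample(theta, eps)                          # reparameterized sample
    # Score-function term, with the surrogate c as a control variate.
    score = jax.grad(lambda t: log_prob(t, z))(theta)
    g = (f(z) - c(phi, z)) * score
    # Add back the reparameterized gradient of the surrogate: the two
    # surrogate terms cancel in expectation, so the estimator is unbiased.
    g = g + jax.grad(lambda t: c(phi, sample(t, eps)))(theta)
    return g

# The surrogate parameters phi are trained jointly to minimize the
# variance of g, e.g. by descending the gradient of g**2 w.r.t. phi.
```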

Accepted as a contributed talk at the Deep Reinforcement Learning Symposium, NIPS 2017.

I gave a talk on the paper at the University of Cambridge in November 2017.

Accepted for publication at ICLR 2018.





Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference

We propose a simple and general variant of the standard reparameterized gradient estimator for the variational evidence lower bound. Specifically, we remove a part of the total derivative with respect to the variational parameters that corresponds to the score function. Removing this term produces an unbiased gradient estimator whose variance approaches zero as the approximate posterior approaches the exact posterior. We analyze the behavior of this gradient estimator theoretically and empirically, and generalize it to more complex variational distributions such as mixtures and importance-weighted posteriors.
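For intuition, here is a minimal sketch of the trick, assuming a diagonal-Gaussian approximate posterior and written in JAX; the function log_joint stands in for the model's joint density and is an assumption of the sketch, not code from the paper.

```python
# Minimal sketch for a diagonal-Gaussian approximate posterior.
import jax
import jax.numpy as jnp

def elbo_stl(params, key, log_joint):
    mu, log_sigma = params
    eps = jax.random.normal(key, mu.shape)
    z = mu + jnp.exp(log_sigma) * eps              # reparameterized sample
    # Evaluate log q(z | params) with the variational parameters detached:
    # this drops the score-function part of the total derivative, leaving
    # only the path derivative, whose variance vanishes at the optimum.
    mu_d = jax.lax.stop_gradient(mu)
    ls_d = jax.lax.stop_gradient(log_sigma)
    log_q = jnp.sum(-0.5 * ((z - mu_d) / jnp.exp(ls_d)) ** 2
                    - ls_d - 0.5 * jnp.log(2.0 * jnp.pi))
    return log_joint(z) - log_q

# jax.grad(elbo_stl) then gives the unbiased, lower-variance estimator.
```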

A short version of the paper was published at the NIPS 2016 workshop on Advances in Approximate Bayesian Inference.

The full-length version of the paper was published at NIPS 2017.

Andrew Miller wrote a great blog post exploring the key ideas of the paper.





MatLearn: Machine Learning Algorithm Implementations in Matlab

Link to website

I merged multiple code bases from many graduate student contributors into a finished software package, and added a variety of new algorithms: unsupervised methods including sparse autoencoders, Hidden Markov Models, Linear-Gaussian State Space Models, and t-Distributed Stochastic Neighbour Embedding, as well as Convolutional Neural Networks for image classification.

Download package