I am a graduate researcher at the Vector Institute for Artificial Intelligence, affiliated with the Machine Learning group at the University of Toronto.
I work with Prof. David Duvenaud on algorithms that
improve the speed and flexibility of models in both deep learning and Bayesian machine learning.
More broadly, I am interested in machine learning algorithms that automatically discover underlying patterns in data and use them to generate new structured content.

I completed my BSc (2016) at the University of British Columbia, majoring in both Statistics and Computer Science. I spent summer 2016 working in Prof. Mark Schmidt's Machine Learning Lab, where I developed unsupervised learning algorithms for a Matlab machine learning toolbox. I spent fall 2017 working with Ferenc Huszár on improving black-box optimization methods for general non-differentiable functions. This term, I'm part of the teaching staff for CSC412: Probabilistic Learning and Reasoning. This summer, I will be working with Microsoft Research Cambridge on improving machine learning algorithms to solve problems in synthetic biology.

Research

Curriculum Vitae

Email: roeder@cs.toronto.edu


Under review at ICLR 2018.

Accepted as a contributed talk at the Deep Reinforcement Learning Symposium, NIPS 2017.

I gave a talk on the paper at the University of Cambridge in November 2017.

Accepted for publication at ICLR 2018.

A short version of the paper was published at the NIPS 2016 Advances in Approximate Bayesian Inference workshop.

The full-length version of the paper was published at NIPS 2017.

Andrew Miller wrote a great blog post exploring the key ideas of the paper.

I merged multiple code bases from many graduate-student contributors into a finished software package, and added a variety of new unsupervised learning algorithms, including sparse autoencoders, Hidden Markov Models, linear-Gaussian state-space models, t-Distributed Stochastic Neighbour Embedding, and Convolutional Neural Networks for image classification.

Download package