I am a PhD student at Princeton working on statistical machine learning, advised by Ryan Adams in the Laboratory for Intelligent Probabilistic Systems.
In 2018, I completed my MSc with David Duvenaud at the Vector Institute for Artificial Intelligence, while a student in the Machine Learning group at the University of Toronto.
I completed my BSc (2016) in Statistics and Computer Science at the University of British Columbia, advised by Mark Schmidt.

I spent fall 2017 working with Ferenc Huszár on improving black-box optimization methods for general non-differentiable functions. During summer 2018, while an intern at Microsoft Research Cambridge, I collaborated on a novel class of deep generative models for understanding and programming information processing in biological systems. During summer 2019, while an intern at Google Brain, I collaborated with Durk Kingma on identifiable representation learning by deep discriminative models. In fall 2019, I joined X, the Moonshot Factory (formerly Google X) as a Resident in core ML.
Since February 2020, I have been a 20%-time Resident with the Quantum group at X, working on Bayesian inference for noisy intermediate-scale
quantum algorithms.

Broadly, I aim to help push forward a theoretical understanding of deep learning in support of robustness, reliability, and efficient inference, with an overarching goal of improving scientific discovery and engineering design through leveraging new affordances in deep generative models.

Research

Email: roeder@princeton.edu

I presented an early version of this work at the Conference on the Mathematical Theory of Deep Neural Networks (DeepMath) 2019.

I gave a talk at Yale on this work in March 2020.

arXiv link: https://arxiv.org/abs/2007.00810

In submission.

In submission.

arXiv link: https://arxiv.org/abs/2005.06549

Accepted for publication and short oral at ICML 2019: arXiv link; poster link

Blog post: Efficient Inference for Dynamical Models using Variational Autoencoders

Submitted to ICLR 2018 workshop track.

Accepted as a contributed talk at the Deep Reinforcement Learning Symposium, NIPS 2017.

I gave a talk on the paper at the University of Cambridge in November 2017.

Accepted for publication at ICLR 2018.

A short version of the paper was published at the NIPS 2016 Advances in Approximate Bayesian Inference workshop.

The full-length version of the paper was published at NIPS 2017.

Andrew Miller wrote a great blog post exploring the key ideas of the paper.

I merged multiple code bases from many graduate-student contributors into a finished software package, and added a variety of new unsupervised learning algorithms, including sparse autoencoders, Hidden Markov Models, linear-Gaussian state-space models, t-distributed stochastic neighbour embedding (t-SNE), and convolutional neural networks for image classification.
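As a flavour of the kinds of algorithms listed above, here is a minimal sketch of the Hidden Markov Model forward algorithm in NumPy. This is an illustration only, not code from the package itself; the function name and array layout are my own choices for the example.

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm: log-likelihood of a discrete observation sequence.

    pi  : (K,)   initial state distribution
    A   : (K, K) transition matrix, A[i, j] = P(z_t = j | z_{t-1} = i)
    B   : (K, M) emission matrix,   B[k, m] = P(x_t = m | z_t = k)
    obs : list of observation indices x_1, ..., x_T
    """
    alpha = pi * B[:, obs[0]]      # alpha_1(k) = pi(k) * P(x_1 | z_1 = k)
    log_lik = 0.0
    for x in obs[1:]:
        # Rescale at each step to avoid numerical underflow on long
        # sequences, accumulating the log of each normalizing constant.
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = (alpha / scale) @ A * B[:, x]
    return log_lik + np.log(alpha.sum())
```

Summing the resulting probabilities over all possible observation sequences of a fixed length recovers 1, which is a convenient sanity check on an implementation like this.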

Download package