Will Grathwohl

Email: wgrathwohl[ at ]cs[ dot ]toronto[ dot ]edu
GitHub: wgrathwohl
LinkedIn: will-grathwohl
CV
Twitter: wgrathwohl. Follow me for consistent, quality raccoon content and occasional stuff about machine learning.

Drawing inspiration from many of my great math professors from undergrad, this website is poorly made and has lots of improperly formatted HTML.

I completed my undergraduate degree in Mathematics at MIT in 2014. I am now a PhD student in the Machine Learning Group here at the University of Toronto.

I am co-supervised by Richard Zemel and David Duvenaud.

The main goal of my research is to get machine learning methods to work well with considerably less labeled data.
To accomplish this goal, I am interested in semi-supervised learning methods as well as in incorporating (mainly discrete) structure into inference models.

As of February 2019, I also work part-time at Google Brain in Toronto.

My latest paper, FFJORD, was just selected for an oral presentation at ICLR 2019 (top 1.5% of submissions)!

Papers

Conference

Invertible Residual Networks: Jens Behrmann*, Will Grathwohl*, Ricky T. Q. Chen, David Duvenaud, Jörn-Henrik Jacobsen* (*equal contribution)
ICML 2019. Long Oral Presentation.
We make ResNets invertible without dimension-splitting heuristics. We demonstrate that these models can be used to build state-of-the-art generative and discriminative models.
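If you're curious about the core idea, here is a quick toy sketch of mine (not the paper's code): when the residual branch g is constrained to have Lipschitz constant below 1, the block y = x + g(x) can be inverted by simple fixed-point iteration.

```python
import numpy as np

# Toy residual branch g with Lipschitz constant < 1 (a scaled tanh layer).
# In the paper this constraint is enforced with spectral normalization;
# here I just rescale the weight matrix by hand for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W *= 0.9 / np.linalg.norm(W, ord=2)  # keep the largest singular value below 1

def g(x):
    return np.tanh(x @ W)

def forward(x):
    # Invertible residual block: y = x + g(x)
    return x + g(x)

def inverse(y, n_iters=50):
    # Banach fixed-point iteration x <- y - g(x); converges because Lip(g) < 1
    x = y.copy()
    for _ in range(n_iters):
        x = y - g(x)
    return x

x = rng.normal(size=(1, 4))
y = forward(x)
x_rec = inverse(y)
print(np.max(np.abs(x - x_rec)))  # ~1e-8: the block is numerically invertible
```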

FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models: Will Grathwohl*, Ricky T. Q. Chen*, Jesse Bettencourt, Ilya Sutskever, David Duvenaud. (*equal contribution)
ICLR 2019. Oral Presentation.
We use the recently proposed Neural ODEs to construct a state-of-the-art flow-based generative model!
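Roughly speaking, the log-density of such a continuous-time flow evolves as d log p(z(t))/dt = -Tr(df/dz), and we estimate that trace stochastically with Hutchinson's estimator. Here is a tiny toy demo of mine of that estimator on a fixed matrix (standing in for the Jacobian; not the paper's code):

```python
import numpy as np

# Hutchinson's estimator: Tr(A) = E[v^T A v] for noise v with E[v v^T] = I.
# In FFJORD, A v is replaced by a vector-Jacobian product with df/dz, giving an
# unbiased estimate of the trace that drives the log-density ODE.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 10))  # stand-in for the Jacobian df/dz

def hutchinson_trace(A, n_samples=10_000):
    d = A.shape[0]
    v = rng.choice([-1.0, 1.0], size=(n_samples, d))  # Rademacher noise
    return np.mean(np.einsum('ni,ij,nj->n', v, A, v))

print(np.trace(A), hutchinson_trace(A))  # the estimate matches the exact trace
```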

Backpropagation through the Void: Optimizing control variates for black-box gradient estimation: Will Grathwohl, Dami Choi, Yuhuai Wu, Geoff Roeder, David Duvenaud.
ICLR 2018.
We present a general method for estimating gradients of expectations of functions of random variables.
Our method can be applied to distributions over discrete random variables, and even when the function being optimized is unknown (a black box)!
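The starting point is the score-function (REINFORCE) estimator, which only needs samples of f; our paper then reduces its variance with a learned control variate. Here is a toy sketch of mine of the basic estimator on a Bernoulli variable (the control-variate part, and anything resembling the paper's actual code, is omitted):

```python
import numpy as np

# Score-function estimator for d/d_theta E_{b ~ Bernoulli(sigmoid(theta))}[f(b)].
# It only needs samples of f(b), so f can be a black box and b can be discrete.
rng = np.random.default_rng(0)

def f(b):                      # black-box objective; here f(b) = (b - 0.45)**2
    return (b - 0.45) ** 2

theta = 0.3                    # logit of the Bernoulli parameter
p = 1.0 / (1.0 + np.exp(-theta))

b = rng.binomial(1, p, size=100_000).astype(float)
dlogp_dtheta = b - p           # d/d_theta log Bernoulli(b | sigmoid(theta))
grad_est = np.mean(f(b) * dlogp_dtheta)

# Exact gradient for comparison: E[f] = p*f(1) + (1-p)*f(0),
# so dE/d_theta = p*(1-p)*(f(1) - f(0)).
grad_exact = p * (1 - p) * (f(1.0) - f(0.0))
print(grad_exact, grad_est)
```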

Workshops

FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models: Will Grathwohl*, Ricky T. Q. Chen*, Jesse Bettencourt, Ilya Sutskever, David Duvenaud. (*equal contribution)
Symposium on Advances in Approximate Bayesian Inference 2018. Oral Presentation, Best Paper Award.

Modeling Global Class Structure Leads to Rapid Integration of New Classes: Will Grathwohl, Eleni Triantafillou, Xuechen Li, David Duvenaud and Richard Zemel.
NIPS 2018 Workshop on Meta-Learning
NIPS 2018 Workshop on Continual Learning

Training Glow with Constant Memory Cost: Xuechen Li, Will Grathwohl.
NIPS 2018 Workshop on Bayesian Deep Learning

Gradient-Based Optimization of Neural Network Architecture: Will Grathwohl, Elliot Creager, Kamyar Ghasemipour, Richard Zemel.
ICLR 2018 Workshop.

Backpropagation through the Void: Optimizing control variates for black-box gradient estimation: Will Grathwohl, Dami Choi, Yuhuai Wu, Geoff Roeder, David Duvenaud.
NIPS 2017 Deep Reinforcement Learning Symposium. Oral Presentation. A video of my talk can be found here.

Awards and Fellowships

Borealis AI Graduate Fellowship: A $50,000, two-year fellowship funding research in AI. Funded by the Royal Bank of Canada.
Huawei Prize: A financial award based on academic and research performance.
ICLR 2018 Travel Award
Best Paper Award: Symposium on Advances in Approximate Bayesian Inference 2018