The main goal of my research is to enable machine learning methods to work well with considerably less labeled data.
To accomplish this goal, I am interested in semi-supervised learning methods as well as in incorporating (mainly discrete) structure into inference models.
As of February 2019, I also work part-time at Google Brain in Toronto.
My latest paper, FFJORD, was just selected for an oral presentation at ICLR 2019 (top 1.5% of submissions)!
Invertible Residual Networks: Jens Behrmann*, Will Grathwohl*, Ricky T. Q. Chen, David Duvenaud, Jörn-Henrik Jacobsen* (*equal contribution)
ICML 2019. Long Oral Presentation.
We make ResNets invertible without dimension-splitting heuristics and demonstrate that these models can be used to build state-of-the-art generative and discriminative models.
Modeling Global Class Structure Leads to Rapid Integration of New Classes: Will Grathwohl, Eleni Triantafillou, Xuechen Li, David Duvenaud and Richard Zemel.
NIPS 2018 Workshop on Meta-Learning
NIPS 2018 Workshop on Continual Learning
Training Glow with Constant Memory Cost: Xuechen Li, Will Grathwohl.
NIPS 2018 Workshop on Bayesian Deep Learning
Gradient-Based Optimization of Neural Network Architecture: Will Grathwohl, Elliot Creager, Kamyar Ghasemipour, Richard Zemel.
ICLR 2018 Workshop.
Backpropagation through the Void: Optimizing control variates for black-box gradient estimation: Will Grathwohl, Dami Choi, Yuhuai Wu, Geoff Roeder, David Duvenaud.
NIPS 2017 Deep Reinforcement Learning Symposium. Oral Presentation. A video of my talk can be found here.
Awards and Fellowships
Borealis AI Graduate Fellowship: A $50,000, two-year fellowship funding research in AI. Funded by the Royal Bank of Canada.
Huawei Prize: A financial award based on academic and research performance.
ICLR 2018 Travel Award
Best Paper Award: Symposium on Advances in Approximate Bayesian Inference 2018