I am a Senior Research Scientist at NVIDIA within the Toronto AI Lab.

I completed my PhD under the supervision of Rich Zemel and Roger Grosse.

I have a broad set of interests within deep learning and beyond. In the past, I have worked on improving optimization for deep neural networks and better understanding their loss landscape geometry. I have also worked on imposing functional constraints on neural networks, providing theoretical guarantees for learning with limited data, and investigating representation learning. These days, my work focuses on 3D generative modeling with applications to content creation for video games.

jlucas [at] cs [dot] toronto [dot] edu (take that, bots)

I also enjoy:

  • Being a parent to two wonderful, tiny, noisy humans
  • Big fluffy dogs
  • Developing video games
  • Baking (especially bread)

Note: see Google Scholar for a complete, up-to-date list of my publications.

Highlighted publications


(ICLR 2024) - Graph metanetworks for processing diverse neural architectures
Derek Lim, Haggai Maron, Marc Law, Jonathan Lorraine, James Lucas
Neural networks efficiently encode learned information within their parameters. What if we could treat neural networks themselves as input data? We design simple, provably expressive neural networks that can operate on other neural nets. These Graph Metanetworks are able to modify implicit neural representations and predict generalization over a wide range of neural network architectures.
(ICCV 2023) - ATT3D: Amortized Text-to-3D Object Synthesis
Jonathan Lorraine, Kevin Xie, Xiaohui Zeng, Chen-Hsuan Lin, Towaki Takikawa, Nicholas Sharp, Tsung-Yi Lin, Ming-Yu Liu, Sanja Fidler, James Lucas
Text-to-3D generative models have improved significantly in quality, but generating each 3D object often takes hours. In this work, we use amortized optimization to reduce generation time to tens of milliseconds per object. See our follow-up work too!
(ICML 2021) - Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes
James Lucas, Juhan Bae, Michael R. Zhang, Stanislav Fort, Richard Zemel, Roger Grosse
We analyze the Monotonic Linear Interpolation (MLI) property, wherein linearly interpolating from initialization to the optimum leads to a monotonic decrease in the loss. Using tools from differential geometry, we give sufficient conditions for MLI to hold and present a thorough empirical investigation of the phenomenon.
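As a rough illustration (not code from the paper), the MLI property can be checked by sweeping a single scalar alpha and evaluating the loss at the linearly interpolated parameters; the model, data, and training loop below are toy placeholders:

    # Toy sketch: check whether the loss decreases monotonically along the
    # straight line theta(alpha) = (1 - alpha) * theta_init + alpha * theta_final.
    import copy
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X, y = torch.randn(256, 10), torch.randint(0, 2, (256,))  # toy data
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()

    theta_init = copy.deepcopy(model.state_dict())     # parameters at initialization
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(200):                                # train to a (local) optimum
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    theta_final = copy.deepcopy(model.state_dict())

    # Evaluate the loss at each interpolated parameter setting.
    for alpha in torch.linspace(0.0, 1.0, 11).tolist():
        interp = {k: (1 - alpha) * theta_init[k] + alpha * theta_final[k]
                  for k in theta_init}
        model.load_state_dict(interp)
        with torch.no_grad():
            print(f"alpha={alpha:.1f}  loss={loss_fn(model(X), y).item():.4f}")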
(ICML 2019) - Sorting out Lipschitz function approximation
Cem Anil*, James Lucas*, Roger Grosse
Common activation functions are insufficient for norm-constrained (1-Lipschitz) network architectures. By using a gradient-norm-preserving activation, GroupSort, we prove universal approximation in this setting and achieve provable adversarial robustness with a hinge loss.
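The GroupSort activation itself is simple to write down; a minimal sketch (assuming contiguous groups along the feature dimension, with group size 2 recovering the MaxMin variant) might look like:

    import torch

    def group_sort(x: torch.Tensor, group_size: int = 2) -> torch.Tensor:
        """Sort pre-activations within contiguous groups along the last dimension."""
        *batch, d = x.shape
        assert d % group_size == 0, "feature dimension must be divisible by group_size"
        grouped = x.reshape(*batch, d // group_size, group_size)
        return grouped.sort(dim=-1).values.reshape(*batch, d)

    x = torch.tensor([[3.0, -1.0, 0.5, 2.0]])
    print(group_sort(x))  # tensor([[-1.0000, 3.0000, 0.5000, 2.0000]])

Because sorting only permutes its inputs, the activation's Jacobian is a permutation matrix, which is what makes it gradient-norm preserving and hence suitable for norm-constrained architectures.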

Teaching

In the Fall of 2017, I taught CSC411/2515 - Introduction to Machine Learning.