2023
  1. arXiv
    Tools for Verifying Neural Models’ Training Data
    Choi, Dami, Shavit, Yonadav, and Duvenaud, David
    arXiv preprint arXiv:2307.00682, 2023


2020
  1. NeurIPS
    Gradient Estimation with Stochastic Softmax Tricks
    Paulus, Max, Choi, Dami, Tarlow, Daniel, Krause, Andreas, and Maddison, Chris J.
    In Advances in Neural Information Processing Systems, 2020
  2. NeurIPS
    Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering
    Chen, Ricky T. Q., Choi, Dami, Balles, Lukas, Duvenaud, David, and Hennig, Philipp
    In Workshop on "I Can’t Believe It’s Not Better!", NeurIPS 2020


2019
  1. arXiv
    On Empirical Comparisons of Optimizers for Deep Learning
    Choi, Dami, Shallue, Christopher J., Nado, Zachary, Lee, Jaehoon, Maddison, Chris J., and Dahl, George E.
    arXiv preprint arXiv:1910.05446, 2019
  2. arXiv
    Faster Neural Network Training with Data Echoing
    Choi, Dami, Passos, Alexandre, Shallue, Christopher J., and Dahl, George E.
    arXiv preprint arXiv:1907.05550, 2019
  3. ICML
    Guided Evolutionary Strategies: Augmenting Random Search with Surrogate Gradients
    Maheswaranathan, Niru, Metz, Luke, Tucker, George, Choi, Dami, and Sohl-Dickstein, Jascha
    In International Conference on Machine Learning, 2019


2018
  1. ICLR
    Backpropagation through the Void: Optimizing control variates for black-box gradient estimation
    Grathwohl, Will, Choi, Dami, Wu, Yuhuai, Roeder, Geoff, and Duvenaud, David
    In International Conference on Learning Representations, 2018