publications

2024

  1. arXiv
    Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data
    Treutlein, Johannes, Choi, Dami, Betley, Jan, Anil, Cem, Marks, Samuel, Grosse, Roger Baker, and Evans, Owain
    arXiv preprint arXiv:2406.14546, 2024
  2. arXiv
    LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language
    Requeima, James, Bronskill, John, Choi, Dami, Turner, Richard E, and Duvenaud, David
    arXiv preprint arXiv:2405.12856, 2024

2023

  1. arXiv
    Tools for Verifying Neural Models’ Training Data
    Choi, Dami, Shavit, Yonadav, and Duvenaud, David
    arXiv preprint arXiv:2307.00682, 2023

2020

  1. NeurIPS
    Gradient Estimation with Stochastic Softmax Tricks
    Paulus, Max, Choi, Dami, Tarlow, Daniel, Krause, Andreas, and Maddison, Chris J
    In Advances in Neural Information Processing Systems, 2020
  2. ICBINB
    Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering
    Chen, Ricky T. Q., Choi, Dami, Balles, Lukas, Duvenaud, David, and Hennig, Philipp
    Workshop on "I Can’t Believe It’s Not Better!", NeurIPS, 2020

2019

  1. arXiv
    On empirical comparisons of optimizers for deep learning
    Choi, Dami, Shallue, Christopher J, Nado, Zachary, Lee, Jaehoon, Maddison, Chris J, and Dahl, George E
    arXiv preprint arXiv:1910.05446, 2019
  2. arXiv
    Faster neural network training with data echoing
    Choi, Dami, Passos, Alexandre, Shallue, Christopher J, and Dahl, George E
    arXiv preprint arXiv:1907.05550, 2019
  3. ICML
    Guided evolutionary strategies: Augmenting random search with surrogate gradients
    Maheswaranathan, Niru, Metz, Luke, Tucker, George, Choi, Dami, and Sohl-Dickstein, Jascha
    In International Conference on Machine Learning, 2019

2018

  1. ICLR
    Backpropagation through the Void: Optimizing control variates for black-box gradient estimation
    Grathwohl, Will, Choi, Dami, Wu, Yuhuai, Roeder, Geoff, and Duvenaud, David
    In International Conference on Learning Representations, 2018