I am a Senior Research Scientist at Google on the N2Formal team led by Christian Szegedy.
Previously, I was a Postdoctoral Scholar at Stanford, mentored by Percy Liang and Jay McClelland.
During my PhD at the University of Toronto, I was advised by Roger Grosse and Jimmy Ba.
You can find my CV here (Last updated July 4, 2022).
My primary research interest is building machines that can reason.
I have chosen mathematics as a starting point to study reasoning, with the aim of creating an automated mathematician.
I am interested in improving neural architectures for reasoning, as well as building human-like reasoning mechanisms into models.
Albert Jiang (PhD student at Cambridge)
Cem Anil (PhD student at UofT)
Eric Zelikman (PhD student at Stanford)
Felix Li (Undergraduate student at UC Berkeley)
Jin Zhou (PhD student at Cornell)
Qian Huang (PhD student at Stanford)
Szymon Tworkowski (Master's student at Univ. of Warsaw)
Maciej Mikuła (Master's student at Univ. of Warsaw)
Ethan Chi (Master's student at Stanford)
Honghua Dong (PhD student at UofT)
Imanol Schlag (PhD student at IDSIA)
Qiyang (Colin) Li (PhD student at UC Berkeley)
Releasing Draft, Sketch, and Prove: autoformalizing entire natural-language proofs [arXiv]!
I gave a talk on autoformalization at the FLAIM conference.
I gave a guest lecture on autoformalization in UIUC's proof automation class.
8 papers accepted to NeurIPS 2022.
Our length generalization paper was accepted as an oral presentation at NeurIPS 2022.
We are organizing the second MATHAI workshop at NeurIPS 2022.
I gave a talk at AITP 2022.
Sharing a systematic study on synthetic pre-training [arXiv]. Understanding pre-training via synthetic tasks!
I gave a talk at the University of Cambridge [Link].
I gave a talk at the UC Berkeley Center for Human-Compatible AI (CHAI).
I gave a talk at Covariant.ai.
We released Thor [arXiv]. Integrating symbolic tools into neural theorem provers for premise selection!
We released STaR [arXiv]. Bootstrapping Reasoning with Reasoning!
We released Block-Recurrent Transformer [arXiv]. Recurrence is coming back!
Gave a talk at the University of Oxford.
Gave a talk at Harvard University.
Memorizing Transformers accepted as a spotlight presentation at ICLR 2022.
Three papers accepted to ICLR 2022.
Subgoal search algorithm accepted to NeurIPS 2021.
Co-organized the MATHAI4ED workshop at NeurIPS 2021 (Math AI for Education: Bridging the Gap between Research and Smart Education).
Led the Reasoning section in the Foundation Model white paper.
Two posters at ICML 2021.
Two posters at ICLR 2021.
Minerva: Solving Quantitative Reasoning Problems with Language Models
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski,
Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo,
Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra
The 36th Conference on Neural Information Processing Systems, 2022. PDF | Google AI Blog
Exploring Length Generalization in Large Language Models
Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor Lewkowycz, Vedant Misra,
Vinay Ramasesh, Ambrose Slone, Guy Gur-Ari, Ethan Dyer, Behnam Neyshabur
The 36th Conference on Neural Information Processing Systems, 2022. PDF
STaR: Bootstrapping Reasoning With Reasoning
Eric Zelikman*, Yuhuai Wu*, Noah D. Goodman
Subgoal Search For Complex Reasoning Tasks.
Konrad Czechowski, Tomasz Odrzygozdz, Marek Zbysinski, Michal Zawalski,
Krzysztof Olejnik, Yuhuai Wu, Lukasz Kucinski, Piotr Milos
The 35th Conference on Neural Information Processing Systems, 2021. PDF
Modelling High-Level Mathematical Reasoning in Mechanised Declarative Proofs.
Wenda Li, Lei Yu, Yuhuai Wu, Lawrence C. Paulson
The 9th International Conference on Learning Representations, 2021. PDF
On the Opportunities and Risks of Foundation Models.
Rishi Bommasani, Drew A. Hudson, Percy Liang, et al.
Options as REsponses: Grounding Behavioural Hierarchies in Multi-agent Reinforcement Learning.
Yuhuai Wu*, Alexander Sasha Vezhnevets*, Maria Eckstein, Remi Leblond, Joel Z. Leibo.
The 37th International Conference on Machine Learning, 2020. PDF
Grandmaster Level in StarCraft II using Multi-agent Reinforcement Learning.
Vinyals, O., Babuschkin, I., Czarnecki, W. M., et al.
Nature, 2019.