“All men by nature desire to know.”

— Aristotle

My goal is to understand intelligence and human nature to help shape a more flourishing future. I do this by building interpretable AI systems capable of understanding and forecasting human behavior. This research contributes to AI Safety and AI for Science:

  1. AI Safety: By building systems that operate on a set of clear, interpretable principles, we can create AI that is more aligned with human values and safer to deploy in high-stakes settings.
  2. AI for Science: These same systems can be used to automate and scale the scientific study of human behavior itself, generating reliable forecasts that equip decision-makers to make better judgments.

I am pursuing this work as a PhD student in the Machine Learning Group at the University of Toronto, where I am fortunate to be advised by Roger Grosse and Jimmy Ba. I also work closely with the Forecasting Research Institute, collaborating with Philip Tetlock, Ezra Karger, and Chris Karvetski on AI judgmental forecasting and hypothesis generation.

My research rests on two pillars: training in the behavioral sciences, which helps me frame the core questions, and a solid technical foundation in deep learning, optimization, statistics, and information theory, which provides the tools to answer them. You can find an overview of my past research here.

I believe this mission is best pursued in community. To that end, I co-founded UTMIST and for.ai (acquired by Cohere to become Cohere Labs) to create opportunities for the next generation of AI talent. For current undergraduates, I've compiled a list of U of T mentorship programs I have mentored in and highly recommend.

I always appreciate the opportunity to learn and grow, and I welcome your input.

  • You can leave anonymous feedback here.
  • If you’re interested in collaborating or discussing my work, please reach me at: huang [at] cs [dot] toronto [dot] edu.