“All men by nature desire to know.”
— Aristotle
I build interpretable AI systems capable of understanding and forecasting human behavior. This work bridges the gap between Machine Learning and the Behavioral Sciences, contributing to two core areas:
- AI Safety: Systems that operate on clear, interpretable principles are easier to align with human values and to trust in high-stakes settings.
- AI for Science: I develop methods to automate and scale behavioral science — from hypothesis generation to evaluation — to provide decision-makers with reliable evidence and foresight.
I am a Visiting Researcher at Stanford CS hosted by Sanmi Koyejo, and a PhD candidate in the Machine Learning Group at the University of Toronto, advised by Roger Grosse and Jimmy Ba. I also work with scientists at the Forecasting Research Institute on hypothesis generation and automated scientific discovery.
My approach draws on the behavioral sciences to frame the right questions, and on deep learning, optimization, statistics, and information theory to answer them. You can find an overview of my past research here.
Community & Industry
I believe this mission is best pursued in community. To that end, I co-founded:
- for.ai: An independent AI research lab, acquired by Cohere to become Cohere Labs.
- UTMIST: Now the largest AI student organization at the University of Toronto, creating opportunities for the next generation of AI talent.
For current undergraduates, I have compiled a list of U of T mentorship programs that I strongly recommend: Read the guide here.
I always appreciate the opportunity to learn and grow, and I welcome your input.
- Anonymous Feedback: Leave a note here.
- Collaboration: Reach me at sheldonh [at] cs [dot] stanford [dot] edu.