I am a Ph.D. student in Computer Science at the University of Toronto, supervised by Chris Maddison. Previously, I obtained my Bachelor's degree in Information Engineering from Zhejiang University in July 2020. In summer 2019, I was a visiting student at UCLA, where I worked with Cho-Jui Hsieh.
I’m currently working at the intersection of machine learning and information theory, with a focus on representation learning, domain generalization, and neural compression. In particular, I want to understand how to learn useful and trustworthy representations from data, and to use that understanding to develop more efficient and principled representation learning methods. I’m also interested in implicit deep learning and generative modeling.
Selected Publications [Full List]
(* denotes equal contribution)
- Augment with Care: Contrastive Learning for the Boolean Satisfiability Problem. In International Conference on Machine Learning (ICML), 2022.
- Optimal Representations for Covariate Shift. In International Conference on Learning Representations (ICLR), 2022.
- Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding. In International Conference on Machine Learning (ICML), 2021. [Long talk]
Academic Service
- Conference reviewer: NeurIPS (2020-), ICLR (2021-), ICML (2021-)
- Workshop reviewer: NeurIPS Workshop on DGMs Applications (2021), ICML Workshop on Pretraining (2022)
Selected Awards & Honors
- DiDi Graduate Student Award, 2021.
- CHU Kochen Scholarship (highest honor at Zhejiang University), 2019.
- Cross-disciplinary Scholars in Science and Technology (CSST), UCLA, 2019.
- National Scholarship (top 1.5%), 2017, 2018, 2019.
- Meritorious Winner, Interdisciplinary Contest in Modeling (ICM), 2018.