Chun-Hao Chang (Kingsley)

Ph.D. in Computer Science

chkchang21 _at_ gmail.com


Student Researcher
Google Cloud
Host: Jinsung Yoon
Aug. 2021 - Jan. 2022
Research Intern
Microsoft Research (MSR) Seattle
Host: Rich Caruana
Jun. 2019 - Aug 2019
Machine Learning Intern
Ads Ranking, Facebook UK
Host: Damien Lefortier
Jun. 2018 - Aug 2018
CS PhD
University of Toronto
Sep. 2016 - Present



I completed my Ph.D. in Computer Science at the University of Toronto, advised by Professor Anna Goldenberg. My main research interests are Interpretability, Robustness, and Applied RL in Healthcare.


Research

[Interpretability, Healthcare] Extracting Clinician's Goals by What-if Interpretable Modeling
Although reinforcement learning (RL) has had tremendous success in many fields, applying RL to real-world settings such as healthcare is challenging when the reward is hard to specify and no exploration is allowed. In this work, we focus on recovering clinicians' rewards in treating patients. We incorporate what-if reasoning to explain clinicians' actions based on future outcomes, and we use generalized additive models (GAMs), a class of accurate, interpretable models, to recover the reward (the GAM form is sketched below). In both simulation and a real-world hospital dataset, we show our model outperforms baselines. Finally, our model's explanations match several clinical guidelines for treating patients, whereas we found that the previously used linear model often contradicts them.
TLDR: We extract clinicians' treatment goals via interpretable GAM modeling and what-if reasoning.
Chun-Hao Chang, George Alexandru Adam, Rich Caruana, Anna Goldenberg
Submitted to ICML 2022
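As a rough sketch of the model class (notation mine, not copied from the paper), a GAM-form reward over patient-state features x_1, ..., x_d is

    R(x) = \beta_0 + \sum_{i=1}^{d} f_i(x_i)

where each one-dimensional shape function f_i can be plotted directly, so one can read off how, say, blood pressure alone shifts the recovered reward. The previously used linear model is the special case f_i(x_i) = w_i x_i.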
[Interpretability] NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning
Although Generalized Additive Models (GAMs) are accurate and interpretable, current GAMs are neither differentiable nor scalable. In this work we propose deep-learning versions of GAM and GA2M that often outperform regular GAMs while remaining interpretable (the GA2M form is sketched below).
TLDR: We develop deep-learning versions of the Generalized Additive Model (GAM) and GA2M that are both accurate and interpretable.
Chun-Hao Chang, Rich Caruana, Anna Goldenberg
Accepted at ICLR 2022 (Spotlight, 4.9% acceptance rate)
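For reference (these are the standard definitions, not anything introduced here): a GAM has the additive form sketched above, while a GA2M additionally allows pairwise interactions,

    g(E[y]) = \beta_0 + \sum_i f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j)

NODE-GAM, roughly, parameterizes the f_i and f_{ij} with ensembles of differentiable oblivious decision trees, so the whole model trains end to end by gradient descent while each term remains a plottable one- or two-dimensional function.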
[Interpretability] How Interpretable and Trustworthy are GAMs?
Generalized additive models (GAMs) are useful for data-bias discovery and model auditing. But do they always tell the true story of your data, or just their own hallucinated patterns? And which GAM algorithms are more accurate and more faithful to the data? In this paper we benchmark a total of 7 different GAM variants and conclude that tree-based GAMs are more trustworthy. We also design several metrics to decide which GAM is better (a toy check in this spirit is sketched below).
TLDR: We compare a total of 7 different GAMs and show which ones are more trustworthy.
Chun-Hao Chang, Sarah Tan, Ben Lengerich, Anna Goldenberg, Rich Caruana
Accepted at KDD 2021
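One sanity check in this spirit (my own toy illustration, not necessarily one of the paper's metrics): simulate data from known shape functions, fit a GAM, and compare the recovered shapes against the ground truth. A minimal sketch using the interpret package's tree-based EBM:

    import numpy as np
    from interpret.glassbox import ExplainableBoostingRegressor  # tree-based GAM (EBM)

    # Simulate data whose ground-truth shape functions are known.
    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(5000, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, size=5000)

    ebm = ExplainableBoostingRegressor().fit(X, y)
    shape0 = ebm.explain_global().data(0)  # bin edges and per-bin scores for feature 0

    # A trustworthy GAM's learned shape for feature 0 should track sin(x),
    # not hallucinate extra structure.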
[Robustness] Towards Robust Classification Model by Counterfactual and Invariant Data Generation
What makes an image get labeled as a cat? What makes a doctor think there is a tumor in a CT scan? These questions are inherently causal, but typical machine learning (ML) models rely on associations rather than causation. In this paper, we incorporate human causal knowledge into ML models to make them robust, and we show that our models retain high accuracy when the environment changes (a sketch of the augmentation idea follows this entry). This is crucial for models to transfer across environments, e.g., different hospital sites in medical applications.
TLDR: We make our models robust to environment shifts by making them depend more on causal features and less on spurious ones.
Chun-Hao Chang, George Alexandru Adam, Anna Goldenberg
Accepted at CVPR 2021
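A minimal sketch of the augmentation idea, assuming a human-provided mask of causal pixels (the function, its names, and the noise-based infilling are my own illustration, not the paper's code):

    import torch
    import torch.nn.functional as F

    def augmented_loss(model, x, y, causal_mask):
        """Hypothetical sketch: encourage reliance on causal features.

        x:           batch of images, shape (B, C, H, W)
        y:           integer class labels, shape (B,)
        causal_mask: 1 where a human marked causal evidence (e.g. the lesion),
                     0 on spurious context (e.g. scanner background); (B, 1, H, W)
        """
        noise = torch.randn_like(x)

        # Original images: standard cross-entropy.
        loss = F.cross_entropy(model(x), y)

        # "Invariant" augmentation: perturb only the spurious regions;
        # the label should NOT change, so keep the same target.
        x_inv = x * causal_mask + noise * (1 - causal_mask)
        loss = loss + F.cross_entropy(model(x_inv), y)

        # "Counterfactual" augmentation: destroy the causal evidence;
        # the model should lose confidence, so pull its prediction toward
        # the uniform distribution (cross-entropy against a uniform target,
        # up to a constant).
        x_cf = x * (1 - causal_mask) + noise * causal_mask
        log_p = F.log_softmax(model(x_cf), dim=1)
        loss = loss - log_p.mean()

        return loss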
[Robustness, Healthcare] Hidden Risks of Machine Learning Applied to Healthcare: Unintended Feedback Loops Between Models and Future Data Causing Model Degradation
TLDR: We characterize a feedback-loop problem in which clinicians change their decisions based on an imperfect ML system, which in turn changes the future data distribution.
George Alexandru Adam, Chun-Hao Chang, Anna Goldenberg
Accepted at Machine Learning for Healthcare (MLHC) 2020
[Interpretability] Explaining Image Classifiers by Counterfactual Generation
TLDR: We propose using generative models to ask counterfactual questions in order to interpret a black-box model, e.g. a DNN (the objective is sketched below).
Chun-Hao Chang, Elliot Creager, Anna Goldenberg, David Duvenaud
Accepted at ICLR 2019
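Roughly, in notation of my own (a paraphrase of the setup, not the paper's exact objective): mask out a region r of image x, let a generative model G infill it, and search for the smallest region whose infilling destroys the classifier's confidence in class c,

    \min_r |r| \quad \text{s.t.} \quad \mathbb{E}_{\tilde{x} \sim G(\cdot \mid x_{\setminus r})} \left[ p(c \mid \tilde{x}) \right] \le \tau

for some confidence threshold \tau (my placeholder). Because the infill is sampled from a generative model rather than set to gray or blur, \tilde{x} stays close to the data manifold, so a drop in p(c | \tilde{x}) reflects removed evidence rather than an out-of-distribution artifact.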
[Healthcare] Dynamic Measurement Scheduling for Adverse Event Forecasting using Deep RL
TLDR: We propose a reinforcement learning approach to help better allocate healthcare resources for measurement scheduling.
Chun-Hao Chang*, Mingjie Mai*, Anna Goldenberg
Accepted at ICML 2019