Younwoo (Ethan) Choi

ywchoi [at] cs [dot] toronto [dot] edu

I am a Master's student in Computer Science at the University of Toronto and the Vector Institute, where I have the privilege of working under the supervision of Professor Rahul G. Krishnan. I received my Honours Bachelor of Science (HBSc) in Computer Science from the same university.

My research focuses on Large Language Models (LLMs). I am particularly interested in developing and improving methodologies for fine-tuning, Reinforcement Learning from Human Feedback (RLHF), and inference. I am also interested in mechanistic interpretability, aiming to understand and analyze the inner workings of transformers.

Prior to my current studies, I was a Research Intern at Huawei Canada's Noah's Ark Lab, where I worked on diffusion models.

Papers

Teaching LLMs How to Learn with Contextual Fine-Tuning

We ask, "Can prompting help us teach LLMs how to learn?" In this work, we study contextual fine-tuning, a novel generalization of instruction tuning for fine-tuning LLMs. Our method guides the learning process during training with instructional prompts designed to mimic human cognitive strategies for learning and problem-solving, aiming to improve the model's interpretation and understanding of domain-specific knowledge.

Younwoo Choi*, Muhammad Adil Asif*, Ziwen Han, John Willes, Rahul G. Krishnan

International Conference on Learning Representations (ICLR) 2025

pdf | website | data

Personalized Adaptation via In-Context Preference Learning

We present the Preference Pretrained Transformer (PPT), a novel approach for adaptive personalization using online user feedback. PPT leverages the in-context learning capabilities of transformers to dynamically adapt to individual preferences. Our approach consists of two phases: (1) an offline phase where we train a single policy model using a history-dependent loss function, and (2) an online phase where the model adapts to user preferences through in-context learning.

Allison Lau, Younwoo (Ethan) Choi*, Vahid Balazadeh*, Keertana Chidambaram*, Vasilis Syrgkanis, Rahul G. Krishnan

NeurIPS 2024 Workshop on Adaptive Foundation Models

pdf | poster

DICE: Diverse Diffusion Model with Scoring for Trajectory Prediction

We present a novel framework that leverages diffusion models to predict future trajectories in a computationally efficient manner. We demonstrate the effectiveness of our approach through empirical evaluations on common pedestrian (UCY/ETH) and autonomous driving (nuScenes) benchmarks, on which our model achieves state-of-the-art performance on several subsets and metrics.

Younwoo Choi, Ray Coden Mercurius, Soheil Mohamad Alizadeh Shabestary, Amir Rasouli

IEEE Intelligent Vehicles Symposium (IV) 2024

pdf

Teaching


Teaching Assistant