Ruiyu Wang

I am a fourth-year undergraduate at the University of Toronto, currently conducting research in Natural Language Processing. I am fortunate to be supervised by and work with Prof. Gerald Penn, Prof. Jimeng Sun, and Prof. Qiang Sun.

My research interests center on the neural mechanisms underlying language comprehension, acquisition, and production in the human brain, as well as the development of computational models that simulate these processes. I am particularly interested in how the brain extracts meaning from language input, integrates it with prior knowledge, and generates appropriate responses. I aim to contribute to natural language processing technologies that facilitate communication between speakers of different languages and, ultimately, help break down barriers to global communication.

Amid the current surge of Large Language Models (LLMs), I am intrigued by their remarkable performance and seek to understand the mechanisms behind their success. My focus lies in the interpretability of LLMs and in methods for controlling and modifying their behavior without resorting to fine-tuning.

(Mar 19, 2024 update) I will be joining the Machine Learning Group at Microsoft Research Lab - Asia this summer. See you soon in Beijing!

Email  /  Resume  /  Scholar  /  Github  /  LinkedIn


Research

Several projects are ongoing; I only list completed ones here.

Large Language Models on Lexical Semantic Change Detection: An Evaluation
Ruiyu Wang*, Matthew Choi*
arXiv, 2023

An evaluation of low-resource lexical semantic change (LSC) detection that compares traditional models, BERT, and LLMs.

UniPredict: Large Language Models are Universal Tabular Predictors
Ruiyu Wang, Zifeng Wang*, Jimeng Sun
arXiv, 2023

An LLM-based tabular prediction system that handles arbitrary input features and prediction targets.


This website's source code is borrowed from here.