Ruiyu Wang
I hold an HBSc degree with high distinction from the University of Toronto, where I completed the Computer Science Specialist program with an NLP focus in June 2024. My research centers on Natural Language Processing. I am fortunate to be supervised by and work with Prof. Gerald Penn, Prof. Jimeng Sun, and Prof. Qiang Sun. I am currently a research assistant at Microsoft Research Asia, where I work with Dr. Shizhao Sun in the Machine Learning Group.
My research interests focus on the neural mechanisms underlying language comprehension, acquisition, and production in the human brain, as well as the development of computational models to simulate these processes. I am particularly interested in how the brain extracts meaning from language input, integrates it with prior knowledge, and generates appropriate responses. I aim to contribute to natural language processing technologies that facilitate communication between people who speak different languages, ultimately helping to break down barriers to global communication.
Amid the current surge in Large Language Models, I build foundation models. I am also intrigued by their remarkable performance and seek to understand the mechanisms behind their success; this interest centers on the interpretability of LLMs and on methods for controlling and modifying their behavior without resorting to finetuning.
(Last Updated: Dec 20, 2024)
Email / Resume / Scholar / Github / LinkedIn
Research
Several projects are ongoing; I only post completed projects here.
Text-to-CAD Generation Through Infusing Visual Feedback in Large Language Models
Ruiyu Wang, Yu Yuan, Shizhao Sun, Jiang Bian
ArXiv Preprint, 2025, Website
A text-to-CAD generation foundation model trained with visual feedback rewards.
Lifelong Learning with Task-Specific Adaptation: Addressing the Stability-Plasticity Dilemma
Ruiyu Wang, Sen Wang, Xinxin Zuo, Qiang Sun
ArXiv Preprint, 2025, Website
Improving incremental learning with task-specific adapters and regularization.
Revisiting GloVe, Word2Vec and BERT: On the Homogeneity of Word Vectors
Ruiyu Wang
Undergrad Capstone, 2024
A study of the homogeneity of word vectors: how they can be transformed into one another.
Can Language Model Understand Word Semantics as A Chatbot? An Empirical Study of Language Model Internal External Mismatch
Jinman Zhao, Xueyan Zhang, Xingyu Yue, Weizhe Chen, Zifan Qian, Ruiyu Wang
ArXiv Preprint, 2024
A study of the mismatch between language models' internal and external understanding of word semantics.
Large Language Models on Lexical Semantic Change Detection: An Evaluation
Ruiyu Wang*, Matthew Choi*
ArXiv Preprint, 2023
An evaluation of low-resource lexical semantic change (LSC) detection covering traditional models, BERT, and LLMs.
UniPredict: Large Language Models are Universal Tabular Predictors
Ruiyu Wang*, Zifeng Wang*, Jimeng Sun
ArXiv Preprint, 2023
An LLM-based tabular prediction system that handles arbitrary inputs and targets.
The style of this page was shamelessly ripped off from here. If you want to use this template, go visit the original website.