Visiting Researcher
Google Brain
Email: mren@cs.toronto.edu
Mengye Ren is a visiting researcher at Google Brain Toronto, working with Prof. Geoffrey Hinton. Starting in fall 2022, he will be an assistant professor of computer science and data science at New York University. He received a B.A.Sc. in Engineering Science (2015) and an M.Sc. (2017) and Ph.D. (2021) in Computer Science from the University of Toronto, advised by Prof. Richard Zemel and Prof. Raquel Urtasun. From 2017 to 2021, he was also a senior research scientist at Uber Advanced Technologies Group (ATG) and Waabi. His research focuses on making machine learning more natural and human-like, so that AI systems can continually learn, adapt, and reason in naturalistic environments.
Areas: machine learning, computer vision, meta-learning, representation learning, few-shot learning, brain & cognitively inspired learning, robot learning, self-driving vehicles
My key research question is: how do we enable human-like, agent-based machine intelligence to continually learn, adapt, and reason in naturalistic environments? Towards this goal of building a more general and flexible AI, my research has centered on developing meta-learning and representation learning algorithms.
Some recent research highlights include:
Naturalistic paradigms for learning novel classes & attributes with very few examples, i.e. few-shot learning (FSL): semi-supervised FSL, incremental FSL, online contextualized FSL, attribute FSL
Meta-learning algorithms: contextual prototypical memory, learning regularization functions, learning to reweight examples, graph hypernetworks
Brain and cognitively inspired representation learning: learning to imitate drawing, self-supervised learning from video, local unsupervised learning, recurrent attention, divisive normalization
Prospective students: I am looking for motivated students with a strong math or computer science background to join my future group at NYU. If you are interested, please send me an email with your CV, apply to the PhD program at the Computer Science Department or the Center for Data Science, and mention my name in your application as your potential supervisor. You can only apply to one program at a time. The application deadline is Dec 12, 2021.
2021/11: I will visit the University of Oxford and give a talk on Nov 17, 2021.
2021/10: I will visit Stanford University and give a talk on Oct 20, 2021.
2021/10: I defended my Ph.D. thesis “Open World Machine Learning with Limited Labeled Data” on Oct 19, 2021.
2021/05: I will join NYU Courant Computer Science and the Center for Data Science as an assistant professor starting Sept 2022.
2021/05: One paper accepted at ICML 2021.
2021/02: One paper accepted at ICRA 2021.
2020/10: One paper accepted at CoRL 2020.
2020/09: One paper accepted at NeurIPS 2020.
2020/09: I will visit Stanford University and give a talk on Oct 12, 2020.
2020/09: I will visit Brown University and give a talk on Sept 25, 2020.
2020/08: I will visit MIT and give a talk on Sept 22, 2020.
2020/08: I will give a talk at Mila on Aug 28, 2020.
[Full List] [Google Scholar] [dblp]
Online unsupervised learning of visual representations and categories. Mengye Ren, Tyler R. Scott, Michael L. Iuzzolino, Michael C. Mozer, Richard Zemel. arXiv preprint 2109.05675, 2021. [arxiv]
Self-supervised representation learning from flow equivariance. Yuwen Xiong, Mengye Ren, Wenyuan Zeng, Raquel Urtasun. ICCV, 2021. [arxiv]
SketchEmbedNet: Learning novel concepts by imitating drawings. Alexander Wang*, Mengye Ren*, Richard Zemel. ICML, 2021. [arxiv]
Wandering within a world: Online contextualized few-shot learning. Mengye Ren, Michael L. Iuzzolino, Michael C. Mozer, Richard Zemel. ICLR, 2021. [arxiv] [code] [video]
Few-shot attribute learning. Mengye Ren*, Eleni Triantafillou*, Kuan-Chieh Wang*, James Lucas*, Jake Snell, Xaq Pitkow, Andreas S. Tolias, Richard Zemel. arXiv preprint 2012.05895, 2020. [arxiv] [video]
LoCo: Local contrastive representation learning. Yuwen Xiong, Mengye Ren, Raquel Urtasun. NeurIPS, 2020. [arxiv] [video]
Multi-agent routing value iteration networks. Quinlan Sykora*, Mengye Ren*, Raquel Urtasun. ICML, 2020. [arxiv] [code] [video]
Incremental few-shot learning with attention attractor networks. Mengye Ren, Renjie Liao, Ethan Fetaya, Richard S. Zemel. NeurIPS, 2019. [arxiv] [code]
Graph hypernetworks for neural architecture search. Chris Zhang, Mengye Ren, Raquel Urtasun. ICLR, 2019. [arxiv]
Learning to reweight examples for robust deep learning. Mengye Ren, Wenyuan Zeng, Bin Yang, Raquel Urtasun. ICML, 2018. [arxiv] [code] [video]
Meta-learning for semi-supervised few-shot classification. Mengye Ren, Eleni Triantafillou*, Sachin Ravi*, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle, Richard S. Zemel. ICLR, 2018. [link] [arxiv] [code]
End-to-end instance segmentation with recurrent attention. Mengye Ren, Richard S. Zemel. CVPR, 2017. [link] [arxiv] [code] [video]
Exploring models and data for image question answering. Mengye Ren, Ryan Kiros, Richard S. Zemel. NIPS, 2015. [link] [arxiv] [results] [dataset] [code] [question generation]