Nikita Dhawan


I am a PhD student at the University of Toronto and the Vector Institute, supervised by Professors Chris Maddison and Roger Grosse. I completed my Bachelor's in Computer Science and Applied Math at UC Berkeley, where I enjoyed working with Professor Sergey Levine and Marvin Zhang.


Email  /  CV  /  LinkedIn  /  Google Scholar

Research

I am interested in developing algorithms for reliable and trustworthy machine learning, with a particular focus on representation learning, self-supervision and robustness.

On the Difficulty of Defending Self-Supervised Learning against Model Extraction
Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, Nicolas Papernot
ICML, 2022
arXiv

Recently, ML-as-a-Service providers have begun offering trained self-supervised models over inference APIs, which transform user inputs into useful representations for a fee. However, the high cost of training these models and their exposure over APIs make black-box extraction a realistic security threat. We explore model stealing by constructing several novel attacks and evaluating existing classes of defenses.

ARM: A Meta-Learning Approach for Tackling Group Shift
Marvin Zhang*, Henrik Marklund*, Nikita Dhawan*, Abhishek Gupta, Sergey Levine, Chelsea Finn
NeurIPS, 2021
website / arXiv

Machine learning systems regularly encounter distribution shift in real-world applications. In this work, we consider the setting where the training data are structured into groups and test-time shifts correspond to changes in the group distribution. We propose to use ideas from meta-learning to learn models that are adaptable, and introduce the framework of adaptive risk minimization (ARM), a formalization of this setting.

AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos
Laura Smith, Nikita Dhawan, Marvin Zhang, Pieter Abbeel, Sergey Levine
RSS, 2020
website / arXiv / blog

Humans can learn by watching others, imagining how they would perform the task themselves, and then practicing on their own. Can robots do the same? In this project, we adopt a similar strategy of imagination and practice to solve complex, long-horizon tasks, like operating a coffee machine or retrieving objects from a closed drawer.

Teaching
CSC 311: Introduction to Machine Learning Fall 2021 (University of Toronto)

EECS 126: Probability and Random Processes Fall 2020, Spring 2020 (UC Berkeley)

EECS 229A: Information Theory and Coding Fall 2020 (UC Berkeley)
