Fartash Faghri

Research Scientist at Apple
< firstname > [at] apple [dot] com

PhD
Department of Computer Science
University of Toronto
Supervisor: David Fleet
< lastname > [at] cs.toronto [dot] edu

Google Scholar  /  Semantic Scholar  /  Twitter  /  Github  /  LinkedIn

Research

I'm interested in machine learning, optimization, and computer vision. My recent research focuses on efficient and robust training in deep learning.

Bridging the Gap Between Adversarial Robustness and Optimization Bias
Fartash Faghri, Cristina Vasconcelos, David J. Fleet, Fabian Pedregosa, Nicolas Le Roux
arXiv, 2021
arXiv / code

Some standard models are maximally robust with no extra effort: linear convolutional networks, for example, are maximally robust against the Fourier-Linf attack.
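
As a rough sketch of what a single attack step constrained in the Fourier domain might look like (my illustration, not the paper's exact formulation):

import torch

def fourier_linf_step(model, loss_fn, x, y, eps=0.1):
    """One signed-gradient ascent step with the perturbation bounded in
    l-infinity over Fourier coefficients rather than over pixels.

    A rough sketch of the idea; eps and the step rule are illustrative.
    """
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    g_hat = torch.fft.fft2(x.grad)           # gradient in the Fourier domain
    delta_hat = eps * torch.sgn(g_hat)       # l-inf step on the coefficients
    delta = torch.fft.ifft2(delta_hat).real  # back to pixel space
    return (x + delta).detach()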

Adaptive Gradient Quantization for Data-Parallel SGD
Fartash Faghri*, Iman Tabrizian*, Ilia Markov, Dan Alistarh, Daniel Roy, Ali Ramezani-Kebrya
NeurIPS, 2020
arXiv / code / video

Same accuracy as 32-bit gradients with only 3 quantization bits.
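
To give a flavor of gradient quantization, here is a simplified, non-adaptive sketch in the style of QSGD; the paper's contribution is adapting the quantization levels to the gradient statistics during training:

import torch

def quantize_stochastic(g: torch.Tensor, bits: int = 3) -> torch.Tensor:
    """Stochastically quantize a gradient tensor to 2**bits - 1 uniform levels.

    Unbiased: E[quantized] equals g. A simplified, non-adaptive sketch;
    the paper instead adapts the levels to observed gradient statistics.
    """
    levels = 2 ** bits - 1
    norm = g.norm()
    if norm == 0:
        return g
    scaled = g.abs() / norm * levels        # map magnitudes into [0, levels]
    lower = scaled.floor()
    prob = scaled - lower                   # round up with this probability
    q = lower + torch.bernoulli(prob)
    return g.sign() * q / levels * norm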

A Study of Gradient Variance in Deep Learning
Fartash Faghri, David Duvenaud, David J. Fleet, Jimmy Ba
NeurIPS 2019 Workshop on Beyond First Order Methods in ML (under the title "Gluster: Variance Reduced Mini-Batch SGD with Gradient Clustering")
arXiv / code

An efficient method for clustering gradients of training data. Observations on the variance of gradients during training for standard deep learning models.
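
As a toy illustration of the kind of measurement involved (a naive diagnostic, not the Gluster algorithm itself):

import torch

def gradient_variance(model, loss_fn, xs, ys):
    """Mean squared deviation of per-example gradients from the mean gradient.

    A naive diagnostic sketch; Gluster instead clusters gradients efficiently
    without computing them one example at a time.
    """
    grads = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads.append(torch.cat(
            [p.grad.flatten() for p in model.parameters() if p.grad is not None]
        ).clone())
    G = torch.stack(grads)                  # (num_examples, num_params)
    return ((G - G.mean(dim=0)) ** 2).sum(dim=1).mean()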

SOAR: Second-Order Adversarial Regularization
Avery Ma, Fartash Faghri, Nicolas Papernot, Amir-massoud Farahmand
arXiv, 2020
arXiv

A second-order adversarial regularizer based on the Taylor approximation of the inner-max in the robust optimization objective.
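
In rough notation (mine, summarizing the idea rather than the paper's exact derivation), the regularizer comes from expanding the inner maximization to second order:

\max_{\|\delta\| \le \epsilon} \ell(x + \delta) \;\approx\; \max_{\|\delta\| \le \epsilon} \Big[ \ell(x) + \nabla_x \ell(x)^\top \delta + \tfrac{1}{2}\, \delta^\top \nabla_x^2 \ell(x)\, \delta \Big],

and an upper bound on the right-hand side is added to the training objective as a penalty.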

NUQSGD: Improved Communication Efficiency for Data-parallel SGD via Nonuniform Quantization
Ali Ramezani-Kebrya, Fartash Faghri, Daniel M. Roy
arXiv, 2019
arXiv

An efficient gradient quantization method with convergence guarantees.
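
A minimal sketch of the nonuniform idea, using exponentially spaced levels; the level placement and rounding here are illustrative, not NUQSGD's exact scheme:

import torch

def quantize_nonuniform(g: torch.Tensor, bits: int = 3) -> torch.Tensor:
    """Stochastically quantize normalized magnitudes onto exponentially
    spaced levels {0, 2^-(L-1), ..., 2^-1, 1}, unbiasedly.

    Illustrative only; NUQSGD specifies the levels and rounding precisely.
    """
    norm = g.norm()
    if norm == 0:
        return g
    L = 2 ** bits - 1
    levels = torch.tensor([0.0] + [2.0 ** -(L - 1 - i) for i in range(L)])
    r = (g.abs() / norm).clamp(max=1.0)
    # find the two levels bracketing each entry
    idx = torch.searchsorted(levels, r.flatten(), right=True).clamp(max=L)
    hi, lo = levels[idx], levels[idx - 1]
    p = (r.flatten() - lo) / (hi - lo).clamp(min=1e-12)  # unbiased rounding
    q = torch.where(torch.bernoulli(p).bool(), hi, lo).reshape(g.shape)
    return g.sign() * q * norm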

Adversarial Spheres
Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S. Schoenholz, Maithra Raghu, Martin Wattenberg, Ian Goodfellow
ICLR workshop, 2018
arXiv

A synthetic example for studying the relationship between high-dimensional geometry and adversarial examples.
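
The setup is easy to reproduce; a sketch of the data distribution (the dimension and radii below are illustrative choices for this kind of toy task):

import torch

def sample_spheres(n, d=500, r_inner=1.0, r_outer=1.3):
    """Sample n labeled points uniformly from two concentric (d-1)-spheres.

    Label 0 = inner sphere, label 1 = outer sphere. Dimension and radii
    are illustrative.
    """
    x = torch.randn(n, d)
    x = x / x.norm(dim=1, keepdim=True)     # uniform on the unit sphere
    y = torch.randint(0, 2, (n,))
    radius = r_inner + (r_outer - r_inner) * y.float()
    return x * radius.unsqueeze(1), y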

VSE++: Improving Visual-Semantic Embeddings with Hard Negatives
Fartash Faghri, David J. Fleet, Jamie Ryan Kiros, Sanja Fidler
BMVC, 2018 (Spotlight)
arXiv / code / video

A simple change to common loss functions used for multi-modal embeddings that, combined with fine-tuning and augmented data, yields significant gains in retrieval performance.
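
The change is to penalize only the hardest negative in the batch instead of summing over all negatives; a sketch based on the paper's description (see the released code for the exact implementation):

import torch

def max_hinge_loss(im, cap, margin=0.2):
    """VSE++-style triplet loss using the hardest in-batch negative.

    im, cap: (batch, dim) L2-normalized image and caption embeddings.
    """
    scores = im @ cap.t()                   # cosine similarities
    pos = scores.diag().unsqueeze(1)        # scores of matching pairs
    # hinge costs against all negatives, with the diagonal masked out
    cost_c = (margin + scores - pos).clamp(min=0)      # caption negatives
    cost_i = (margin + scores - pos.t()).clamp(min=0)  # image negatives
    eye = torch.eye(scores.size(0), dtype=torch.bool)
    cost_c = cost_c.masked_fill(eye, 0)
    cost_i = cost_i.masked_fill(eye, 0)
    # keep only the hardest negative instead of summing over all of them
    return cost_c.max(dim=1).values.mean() + cost_i.max(dim=0).values.mean()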

Adversarial Manipulation of Deep Representations
Sara Sabour, Yanshuai Cao, Fartash Faghri, David J. Fleet
ICLR, 2016
arXiv / code

A feature adversary is a new type of adversarial image whose internal representation appears remarkably similar to that of a different image.
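
A rough sketch of the attack (hyperparameters illustrative, not the paper's exact procedure): perturb the source image so its intermediate representation matches the guide image's, while keeping the perturbation small.

import torch

def feature_adversary(f, x_src, x_guide, eps=10 / 255, steps=100, lr=0.01):
    """Perturb x_src so its representation f(.) matches that of x_guide.

    f: a function returning an intermediate-layer representation.
    Projected gradient descent sketch; eps, steps, lr are illustrative.
    """
    target = f(x_guide).detach()
    delta = torch.zeros_like(x_src, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (f(x_src + delta) - target).pow(2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)         # keep the perturbation small
    return (x_src + delta).detach()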

Service

Reviewer: ICLR (2021, 2020, 2019), ICML 2021, NeurIPS 2020, ECCV 2020, ICCV 2021, ICLR 2020 Workshop on Trustworthy ML, NeurIPS 2018 Workshop on Security in Machine Learning.


Website adapted from Jon Barron