Ruslan Salakhutdinov

Assistant Professor
Microsoft Faculty Fellow
Sloan Fellow
University of Toronto
rsalakhu[at]cs.toronto.edu
CV Google Scholar  

I am an assistant professor of Computer Science and Statistics at the University of Toronto. I work in the field of statistical machine learning (see my CV).

I received my PhD in computer science from the University of Toronto in 2009. After spending two post-doctoral years at MIT, I joined the University of Toronto in 2011.

My research interests include Deep Learning, Probabilistic Graphical Models, and Large-scale Optimization.

Prospective students: Please read this to ensure that I read your email.

Recent Papers:

  • Learning Deep Generative Models
    Ruslan Salakhutdinov
    Annual Review of Statistics and Its Application, Vol. 2, pp. 361–385, 2015
    [pdf]

  • Unsupervised Learning of Video Representations using LSTMs
    Nitish Srivastava, Elman Mansimov, Ruslan Salakhutdinov
    To appear in ICML, 2015, [arXiv]

  • Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
    Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio
    To appear in ICML, 2015, [arXiv]

  • Exploiting Image-trained CNN Architectures for Unconstrained Video Classification
    Shengxin Zha, Florian Luisier, Walter Andrews, Nitish Srivastava, Ruslan Salakhutdinov
    [arXiv], 2015

  • segDeepM: Exploiting Segmentation and Context in Deep Neural Networks for Object Detection
    Y. Zhu, R. Urtasun, R. Salakhutdinov, and S. Fidler
    In Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, June 2015
    [arXiv]

  • Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models
    Ryan Kiros, Ruslan Salakhutdinov, Richard Zemel
    To appear in Transactions of the Association for Computational Linguistics (TACL), 2015
    [arXiv], [results], [demo]
    An encoder-decoder architecture for ranking and generating image descriptions.
    A previous version appeared in the NIPS Deep Learning Workshop, 2014.

  • Accurate and Conservative Estimates of MRF Log-likelihood using Reverse Annealing
    Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov
    To appear in AI and Statistics, 2015, [arXiv]

  • Learning Generative Models with Visual Attention
    Yichuan Tang, Nitish Srivastava, and Ruslan Salakhutdinov
    Neural Information Processing Systems (NIPS 28), 2014, oral
    [pdf], Supplementary material [pdf]

  • A Multiplicative Model for Learning Distributed Text-Based Attribute Representations
    Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov
    Neural Information Processing Systems (NIPS 28), 2014
    [pdf], Supplementary material [zip]

  • Multimodal Learning with Deep Boltzmann Machines
    Nitish Srivastava and Ruslan Salakhutdinov
    Journal of Machine Learning Research, 2014. [pdf]. Code is available [here].

  • Dropout: A Simple Way to Prevent Neural Networks from Overfitting
    Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan R. Salakhutdinov
    Journal of Machine Learning Research, 2014. [pdf].

  • Deep Learning for Neuroimaging: A Validation Study
    S. Plis, D. Hjelm, R. Salakhutdinov, E. Allen, H. Bockholt, J. Long, H. Johnson, J. Paulsen, J. Turner, and V. Calhoun
    Frontiers in Neuroscience, 2014. [pdf].

  • Multi-task Neural Networks for QSAR Prediction
    George E. Dahl, Navdeep Jaitly, Ruslan Salakhutdinov
    [arXiv], 2014

  • Restricted Boltzmann Machines for Neuroimaging: An Application in Identifying Intrinsic Networks
    Devon Hjelm, Vince Calhoun, Ruslan Salakhutdinov, Elena Allen, Tulay Adali, and Sergey Plis
    In NeuroImage, Volume 96, August 1, 2014, pages 245–260. [pdf].

  • Multimodal Neural Language Models
    Ryan Kiros, Ruslan Salakhutdinov, Richard Zemel.
    In 31st International Conference on Machine Learning (ICML 2014)
    [pdf], [Project Page]