Geoffrey E. Hinton
I am not working at present because my wife is very ill.
I may be very slow at responding to emails or phone calls.
Department of Computer Science
University of Toronto
6 King's College Rd.
email: geoffrey [dot] hinton [at] gmail [dot] com
voice: send email
fax: scan and send email
Information for prospective students:
I advise interns on the Google Brain team in Toronto.
I also advise some of the residents in the
Google Brain Residency program.
I will not be taking any more visiting students,
summer students or visitors at the University of Toronto. I will not be the sole advisor of any new
graduate students, but I may co-advise a few graduate students
with Prof. Roger Grosse or soon-to-be Prof. Jimmy Ba.
Results of the 2012 competition to recognize 1000 different types of object
How George Dahl won the competition to predict the activity of potential drugs
How Vlad Mnih won the competition to predict job salaries from job advertisements
How Laurens van der Maaten won the competition to visualize a dataset of potential drugs
Using big data to make people vote against their own interests
A possible motive for making people vote against their own interests
Basic papers on deep learning
Hinton, G. E., Osindero, S. and Teh, Y. (2006)
A fast learning algorithm for deep belief nets.
Neural Computation, 18, pp 1527-1554.
Movies of the neural network generating and recognizing digits
Hinton, G. E. and Salakhutdinov, R. R. (2006)
Reducing the dimensionality of data with neural networks.
Science, Vol. 313. no. 5786, pp. 504 - 507, 28 July 2006.
[ full paper ]
[ supporting online material (pdf) ]
[ Matlab code ]
LeCun, Y., Bengio, Y. and Hinton, G. E. (2015)
Deep Learning.
Nature, Vol. 521, pp 436-444.
Papers on deep learning without much math
Hinton, G. E. (2007)
To recognize shapes, first learn to generate images
In P. Cisek, T. Drew and J. Kalaska (Eds.)
Computational Neuroscience: Theoretical Insights into Brain Function.
[pdf of final draft]
Hinton, G. E. (2007)
Learning Multiple Layers of Representation.
Trends in Cognitive Sciences, Vol. 11, pp 428-434.
Hinton, G. E. (2014)
Where do features come from?
Cognitive Science, Vol. 38(6), pp 1078-1101.
Hinton, G. E., Sabour, S. and Frosst, N.
Matrix Capsules with EM Routing
Kiros, J. R., Chan, W. and Hinton, G. E.
Illustrative Language Understanding: Large-Scale Visual Grounding with Image Search
Anil, R., Pereyra, G., Passos, A., Ormandi, R., Dahl, G. and Hinton, G. E.
Large scale distributed neural network training through online distillation
Guan, M. Y., Gulshan, V., Dai, A. M. and Hinton, G. E.
Who Said What: Modeling Individual Labelers Improves Classification
Sabour, S., Frosst, N. and Hinton, G. E.
Dynamic Routing between Capsules
Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton,
G. and Dean, J. (2017)
Outrageously large neural networks: The
sparsely-gated mixture-of-experts layer
arXiv preprint arXiv:1701.06538
Ba, J. L., Hinton, G. E., Mnih, V., Leibo, J. Z. and Ionescu, C. (2016)
Using Fast Weights to Attend to the Recent Past
arXiv preprint arXiv:1610.06258v2
Ba, J. L., Kiros, J. R. and Hinton, G. E. (2016)
Layer Normalization.
Deep Learning Symposium, NIPS-2016.
arXiv preprint arXiv:1607.06450
Joseph Turian's map of 2500 English words produced by using t-SNE on
the word feature vectors learned by Collobert & Weston, ICML 2008
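A map of this kind can be reproduced in a few lines: t-SNE takes the high-dimensional word feature vectors and lays them out in 2-D so that words with similar vectors land near each other. A minimal sketch using scikit-learn, with random vectors standing in for real learned embeddings (the actual Turian map used the Collobert & Weston vectors):

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical stand-in for learned word feature vectors: 100 random
# 50-dimensional points. A real map would use embeddings learned from
# text, e.g. the Collobert & Weston vectors.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(100, 50))

# t-SNE projects the vectors down to 2-D, preserving local neighborhoods:
# words that are close in embedding space stay close on the map.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(vectors)
print(coords.shape)  # one (x, y) position per word: (100, 2)
```

Each row of `coords` is then plotted and labeled with its word to produce the map.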
Doing analogies by using vector algebra on word embeddings
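The analogy trick works because directions in embedding space encode relations: the vector from "man" to "king" is roughly the same as the vector from "woman" to "queen". A toy sketch with hand-made 4-d vectors (illustrative only; real embeddings are higher-dimensional and learned from data):

```python
import numpy as np

# Toy word embeddings, constructed so the man->king offset matches the
# woman->queen offset. These are illustrative, not learned vectors.
embeddings = {
    "king":  np.array([0.8, 0.7, 0.1, 0.9]),
    "queen": np.array([0.8, 0.7, 0.9, 0.1]),
    "man":   np.array([0.2, 0.1, 0.1, 0.9]),
    "woman": np.array([0.2, 0.1, 0.9, 0.1]),
}

def analogy(a, b, c, vocab):
    """Solve 'a is to b as c is to ?' via vector algebra: b - a + c,
    then return the nearest remaining word by cosine similarity."""
    target = vocab[b] - vocab[a] + vocab[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # Exclude the three query words themselves, as is standard practice.
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("man", "king", "woman", embeddings))  # queen
```

On these toy vectors the arithmetic is exact; with real embeddings the result is only the *nearest* word to `king - man + woman`, which is why cosine similarity over the whole vocabulary is used rather than an exact match.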