Geoffrey Hinton received his BA in Experimental Psychology from
Cambridge in 1970 and his PhD in Artificial Intelligence from
Edinburgh in 1978. He did postdoctoral work at the University of
Sussex and the University of California, San Diego, and spent five
years as a faculty member in the Computer Science department at
Carnegie Mellon University. He then became a fellow of the Canadian
Institute for Advanced Research and moved to the Department of Computer Science
at the University of Toronto. He spent three years from 1998 until
2001 setting up the Gatsby
Computational Neuroscience Unit at University College London and
then returned to the University of Toronto where he is now an
emeritus distinguished professor. From 2004 until 2013 he was the
director of the program on "Neural Computation
and Adaptive Perception", which was funded by the Canadian Institute for Advanced
Research. From 2013 to 2023 he worked half-time at Google, where
he became a Vice President and Engineering Fellow.
Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, the Association for the Advancement of
Artificial Intelligence and a former president of the Cognitive Science Society.
He is an honorary foreign member of the American Academy of Arts and
Sciences, the US National Academy of Engineering and the US
National Academy of Sciences.
He has received honorary doctorates from the University of
Edinburgh, the University of Sussex, the University of Sherbrooke and
the University of Toronto. His awards include the David
E. Rumelhart Prize, the IJCAI Award for Research Excellence,
the Killam Prize for Engineering, the NSERC Herzberg Gold Medal,
the IEEE Frank Rosenblatt Medal, the IEEE James Clerk Maxwell Medal,
the NEC C&C Prize, the BBVA Foundation Frontiers of Knowledge Award,
the Honda Prize, the Princess of Asturias Award and the ACM
Turing Award.
Geoffrey Hinton designs machine learning algorithms. His aim is to
discover a learning procedure that is efficient at finding complex
structure in large, high-dimensional datasets and to show that this is
how the brain learns to see. He was one of the researchers who
introduced the backpropagation
algorithm and the first to use backpropagation for learning word
embeddings. His
other contributions to neural network research include Boltzmann machines, distributed representations, time-delay
neural nets, mixtures of experts,
variational learning, products of experts and deep
belief nets. His research group in Toronto made major
breakthroughs in deep learning that have revolutionized speech
recognition and object classification.
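For readers unfamiliar with the technique named above, here is a minimal illustrative sketch of backpropagation: gradient descent applied through the chain rule to train a tiny two-layer network on the XOR problem. The network size, learning rate, and random seed are arbitrary choices for this sketch, not anything drawn from Hinton's own work.

```python
# Minimal backpropagation sketch (illustrative only): a two-input,
# two-hidden-unit, one-output sigmoid network trained on XOR.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Randomly initialized weights and zero biases.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden layer
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # output layer
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5  # learning rate (arbitrary for this sketch)

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial = total_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: propagate the error derivative through the chain rule.
        dy = 2 * (y - t) * y * (1 - y)           # gradient at output pre-activation
        for j in range(2):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # gradient at hidden pre-activation
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
final = total_loss()
```

After training, the squared-error loss on the four XOR cases has dropped from its initial value; the same forward/backward pattern, scaled up, underlies modern deep learning.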