Neurons in the perceptual system learn to extract a hierarchy of features from images and it is not obvious how they do it. One possibility is that the perceptual system fits a generative model to the sensory data. To allow rapid perception, this model must make it easy to infer the states of the feature detectors given the sensory data. To make the hardware robust, the model must allow redundant "population" codes in which feature detectors do not need to be orthogonal. Causal generative models based on directed acyclic graphs are currently very popular in AI but they fail to meet these conditions. I shall describe a very different type of generative model that satisfies both conditions. The model assumes that there are many different experts who each generate candidate sensory data, but that all the experts must agree in order for the candidate data to be produced as the output. This is hopelessly inefficient as a generative model, but it allows very efficient inference and there is a very simple learning rule for adjusting the parameters of the experts (e.g. the synaptic weights of neurons). When trained on images of handwritten digits the learning rule produces very good models that give state-of-the-art performance at digit recognition. The same approach can be used to fit products of Hidden Markov Models, which can have exponentially more representational power than single Hidden Markov Models.
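The "all experts must agree" idea in the abstract can be sketched numerically: each expert assigns a score to every candidate datum, the product model multiplies these scores, and a candidate is only probable when every expert rates it highly. The candidate set and expert score tables below are invented purely for illustration and are not from the talk.

```python
# Minimal sketch of a product of experts over a small discrete space.
# Each expert maps candidate -> unnormalized score; the product model
# multiplies the scores, so a candidate is probable only when ALL
# experts agree it is plausible. (Hypothetical numbers throughout.)

candidates = ["0", "1", "7"]  # made-up candidate digits

expert_a = {"0": 0.9, "1": 0.1, "7": 0.8}  # e.g. a "closed loop" detector
expert_b = {"0": 0.1, "1": 0.9, "7": 0.7}  # e.g. a "diagonal stroke" detector

def product_of_experts(experts, candidates):
    """Multiply expert scores and renormalize to get the PoE distribution."""
    scores = []
    for c in candidates:
        s = 1.0
        for e in experts:
            s *= e[c]
        scores.append(s)
    z = sum(scores)  # partition function (intractable in realistic models)
    return {c: s / z for c, s in zip(candidates, scores)}

poe = product_of_experts([expert_a, expert_b], candidates)
best = max(poe, key=poe.get)  # "7" is the only candidate both experts like
```

Note how inference over the combined model is just a product of per-expert evaluations, which is why the experts' feature detectors need not be orthogonal: overlapping experts simply sharpen the distribution rather than breaking it.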
Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He is currently the director of the Gatsby Computational Neuroscience Unit at University College London. He does research on ways of using neural networks for learning, memory, perception and symbol processing and has over 150 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm that is now widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, and Helmholtz machines. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input.
Professor Hinton serves on the editorial boards of the journals Artificial Intelligence, Neural Computation, and Cognitive Science. He is a Fellow of the Royal Society of Canada and of the American Association for Artificial Intelligence, a former president of the Cognitive Science Society, and a project leader with the federal Institute for Robotics and Intelligent Systems and the provincial Information Technology Research Centre. In 1992 he won the ITAC/NSERC award for contributions to information technology. Half of his 18 former PhD students and postdoctoral fellows have faculty jobs in universities. A simple introduction to his research can be found in his articles in Scientific American, Sept. 1992 and Oct. 1993.