Program


Andrew Ng
Unsupervised discovery of structure, succinct representations, and sparsity

We describe a class of unsupervised learning methods that learn sparse representations of the training data, and thereby identify useful features. Further, we show that deep learning (multilayer) versions of these ideas, ones based on sparse DBNs, learn rich feature hierarchies, including part-whole decompositions of objects. Central to this is the idea of ``probabilistic max pooling,'' which allows us to implement convolutional DBNs at a large scale while maintaining probabilistically sound semantics. In the case of images, at the lowest level this method learns to detect edges; at the next level, it puts together edges to form ``object parts''; and finally, at the highest level, it puts together object parts to form whole ``object models.'' The features this method learns are useful for a wide range of tasks, including object recognition, text classification, and audio classification. We also present the results of comparing a two-layer version of the model (trained on natural images) to visual cortical areas V1 and V2 in the brain (the first and second stages of visual processing in the cortex). Finally, we conclude with a discussion of some open problems and directions for future research.
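
To make the pooling construction concrete, the following is a minimal NumPy sketch of probabilistic max pooling under its standard formulation: within each small block of detection units, at most one unit may be active, and the block's pooling unit is active exactly when one of its detectors is. This reduces to a softmax over the block's bottom-up inputs together with an extra ``all off'' state. The function name, block size, and input layout are illustrative assumptions, not details taken from the talk.

    import numpy as np

    def prob_max_pool(I, block=2):
        """Probabilistic max pooling over non-overlapping block x block regions.

        I: (H, W) array of bottom-up inputs to a detection layer (illustrative).
        Returns (p_detect, p_pool): activation probabilities for each detection
        unit, and for each block's pooling unit.
        """
        H, W = I.shape
        assert H % block == 0 and W % block == 0
        # Group units so the last axis enumerates the block*block units of one block.
        b = I.reshape(H // block, block, W // block, block)
        b = b.transpose(0, 2, 1, 3).reshape(H // block, W // block, -1)
        # Softmax over the block's units plus an implicit "all off" state with
        # energy 0; subtract a nonnegative max for numerical stability.
        m = np.maximum(b.max(axis=-1, keepdims=True), 0.0)
        e = np.exp(b - m)
        off = np.exp(-m)                      # the exp(0 - m) "all off" term
        denom = off + e.sum(axis=-1, keepdims=True)
        p_detect = e / denom                  # P(detection unit k is on)
        p_pool = 1.0 - (off / denom)[..., 0]  # P(pooling unit is on)
        # Restore the detection probabilities to the original (H, W) layout.
        p_detect = p_detect.reshape(H // block, W // block, block, block)
        p_detect = p_detect.transpose(0, 2, 1, 3).reshape(H, W)
        return p_detect, p_pool

    # Example: a 4x4 detection layer pooled into a 2x2 grid of pooling units.
    rng = np.random.default_rng(0)
    p_h, p_p = prob_max_pool(rng.normal(size=(4, 4)))
    print(p_h.shape, p_p.shape)  # (4, 4) (2, 2)

Note that within each block the returned detection probabilities sum to less than 1, reflecting the mutual-exclusion constraint that keeps the pooled layer probabilistically well defined.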


Brief Bio.

Andrew Ng is an Assistant Professor of Computer Science at Stanford University. His research interests include machine learning, reinforcement learning/control, and broad-competence AI. His group has won best paper/best student paper awards at ACL, CEAS, 3DRR, and ICML. He is also a recipient of the Alfred P. Sloan Fellowship and the IJCAI 2009 Computers and Thought Award.