ICML 2009 Workshop on Learning Feature Hierarchies
June 18, 2009
A workshop in conjunction with the 26th International Conference on Machine Learning (ICML 2009)
Organizers:
- Kai Yu, NEC Laboratories America
- Ruslan Salakhutdinov, University of Toronto
- Yann LeCun, New York University
- Geoffrey E. Hinton, University of Toronto
- Yoshua Bengio, University of Montreal
Motivation and Topics:
Building intelligent systems capable of extracting high-level representations from high-dimensional sensory data lies at the core of many AI-related tasks, including object recognition, speech perception, and language understanding. Theoretical [Bengio & LeCun, 2007] and biological [Lee et al., 1998] arguments strongly suggest that building such systems requires deep architectures involving many layers of nonlinear processing. Recent research in machine learning has seen notable advances in learning feature hierarchies via deep architectures from labeled and unlabeled data. The learned high-level representations have given promising results on many challenging supervised learning problems in which the data exhibit a high degree of variation. The research is still at an early stage, and it is natural to ask how the field can progress beyond its current state, in both theoretical foundations and empirical applications. In particular, we shall be interested in discussing the following topics:
- Development of learning models: e.g., deep belief nets, deep Boltzmann machines, deep neural nets, high-order sparse coding, and hierarchical generative models.
- Theoretical foundations: Under what conditions do feature hierarchies achieve better regularization or statistical efficiency? How can we make deep models more robust to highly ambiguous or missing sensory inputs?
- Inference and optimization: Can we develop better optimization or approximation techniques that would allow us to learn feature hierarchies more efficiently without significant human intervention?
- Biologically inspired models: How can we learn biologically inspired feature hierarchies for visual and auditory signal processing?
- Using side information: In unsupervised learning of feature hierarchies, how can we exploit known structure in the data, e.g., spatial 2D layout, sequential dynamics, or additional auxiliary features?
- Relationships to kernel learning and transfer learning: Can we develop algorithms that extract high-level feature representations transferable to unknown future tasks?
- Success in real-world applications: understanding of natural scenes, recognition of objects and events, auditory coding of speech and music, natural language processing, semantic indexing, and retrieval of documents and images.
Please also see the deep learning workshop held at NIPS in December 2007.