NIPS 2013 Workshop: Perturbations, Optimization, and Statistics
December 9, 2013 at Lake Tahoe, Nevada, U.S.A.
Schedule
The schedule is here.
The workshop will consist of a series of invited talks
as well as poster, spotlight, and short oral presentations of contributed
papers.
Confirmed Speakers
Description
In nearly all machine learning tasks, decisions must be made given current knowledge (e.g., choosing which label to predict). Perhaps surprisingly, always making the best decision is not always the best strategy, particularly while learning. Recently, an emerging body of work has studied learning under alternative rules that apply perturbations to the decision procedure. These works provide simple and efficient learning rules with improved theoretical guarantees. This workshop will bring together the growing community of researchers interested in different aspects of this area, and it will broaden our understanding of why and how perturbation methods can be useful.
Last year, at the highly successful 2012 NIPS workshop on Perturbations, Optimization, and Statistics, we looked at how injecting perturbations (whether random or adversarial “noise”) into learning and inference procedures can be beneficial. The focus was on two angles: first, how stochastic perturbations can be used to construct new types of probability models for structured data; and second, how deterministic perturbations affect the regularization and generalization properties of learning algorithms.
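As a concrete illustration of the first angle (this sketch is illustrative only, not part of the workshop materials), the Gumbel-max trick is one canonical way stochastic perturbations turn a maximization step into a sampler: adding independent Gumbel(0, 1) noise to a vector of logits and taking the argmax draws an index from the corresponding softmax (Gibbs) distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(logits, rng):
    """Sample an index from softmax(logits) by perturbing each logit
    with independent Gumbel(0, 1) noise and taking the argmax."""
    gumbel = -np.log(-np.log(rng.uniform(size=len(logits))))
    return int(np.argmax(logits + gumbel))

# Empirically, the argmax of perturbed logits follows the softmax distribution.
logits = np.array([1.0, 2.0, 0.5])
counts = np.zeros(3)
for _ in range(100_000):
    counts[gumbel_max_sample(logits, rng)] += 1

empirical = counts / counts.sum()
softmax = np.exp(logits) / np.exp(logits).sum()
print(np.round(empirical, 3), np.round(softmax, 3))
```

Perturb-and-MAP models (see the references below) generalize this idea to structured settings, where the argmax is computed by a discrete (MAP) optimizer over an exponentially large state space.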
The goal of this workshop is to expand on last year's scope and explore further ways of applying perturbations within optimization and statistics to enhance and improve machine learning approaches. This year, we will (a) look at exciting new developments related to the above core themes, and (b) emphasize their implications for topics that received less coverage last year, specifically highlighting connections to decision theory, risk analysis, game theory, and economics.
In particular, we will be interested in understanding the following issues:
Repeated games and online learning: How can random perturbations be understood in the context of exploring unseen options in repeated games? How can connections to Bayesian risk be exploited?
Adversarial uncertainty: How can complex games with adversarial uncertainty be played? What are the computational properties of such solutions, and do Nash equilibria exist in these cases?
Stochastic risk: How can predictions be averaged over random perturbations to obtain improved generalization guarantees? How do stochastic perturbations relate to approximate Bayesian risk and regularization?
Dropout: How does stochastic dropout regularize learning of complex models, and what is its generalization power? What are the relationships between stochastic and adversarial dropout?
Robust optimization: In what ways can learning be improved by perturbing the input measurements?
Choice theory: What is the best way to use perturbations to compensate for lack of knowledge? What lessons in modeling can machine learning take from random utility theory?
Theory: How does the maximum of a random process relate to its complexity? How can the maximum of random perturbations be used to measure the uncertainty of a system?
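As a concrete illustration of the dropout question above (a minimal sketch, illustrative only), inverted dropout zeroes each unit independently with probability `p_drop` during training and rescales the survivors by `1 / (1 - p_drop)`, so the expected activation is unchanged and no rescaling is needed at test time.

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(x, p_drop, rng, train=True):
    """Inverted dropout: zero each unit with probability p_drop during
    training, rescaling survivors by 1/(1 - p_drop) so E[output] = x."""
    if not train or p_drop == 0.0:
        return x
    mask = rng.uniform(size=x.shape) >= p_drop
    return x * mask / (1.0 - p_drop)

# The perturbed activations average out to the unperturbed ones.
x = np.ones(100_000)
y = dropout(x, p_drop=0.5, rng=rng)
print(y.mean())  # close to 1.0 in expectation
```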
Target Audience: The workshop should appeal to NIPS attendees interested in
theoretical aspects such as Bayesian modeling, Monte Carlo sampling,
optimization, inference, and learning, as well as in practical applications in
computer vision and language modeling.
Call for Papers (Now Closed)
In addition to a program of invited presentations, we solicit contributions of
short papers that explore perturbation-based methods in the context of topics
such as statistical modeling, sampling, inference, estimation, theory, robust
optimization, and robust learning. We are interested in both theoretical and
application-oriented works. We also welcome papers that explore connections
between alternative ways of using perturbations.
Contributed papers should adhere to the
NIPS format and are encouraged to
be up to four pages long (not counting the list of references). Papers
submitted for review do not need to be anonymized. There will be no official
proceedings. Thus, apart from papers reporting novel unpublished work, we also
welcome submissions describing work in progress or summarizing a longer paper
under review for a journal or conference (this should be clearly stated
though). Accepted papers will be presented as posters; some may also be
selected for spotlight talks.
Please submit papers in PDF format by email to posNIPS2013@gmail.com. The
submission deadline is October 9, 2013, and notifications
of acceptance will be sent by October 23, 2013. At least one of the authors
must attend the workshop to present the work.
Organizers
Last Year's Workshop
This is the second year of this workshop. The 2012 POS workshop website is here.
References
We have assembled below a list of indicative references related to the workshop's theme.
Machine learning

Efficient algorithms for online decision problems (J. of Comp. and System Sci., 2005)
A. Kalai, S. Vempala

Extracting and composing robust features with denoising autoencoders (ICML, 2008)
P. Vincent, H. Larochelle, Y. Bengio, P. Manzagol

Herding dynamical weights to learn (ICML, 2009)
M. Welling

Gaussian sampling by local perturbations (NIPS, 2010)
G. Papandreou, A. Yuille

Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models (ICCV, 2011)
G. Papandreou, A. Yuille

Robust Max-Product Belief Propagation (arXiv, 2011)
M. Ibrahimi, A. Javanmard, Y. Kanoria, A. Montanari

Robust Optimization and Machine Learning (book chapter, 2011)
C. Caramanis, S. Mannor, H. Xu

Randomized Optimum Models for Structured Prediction (AISTATS, 2012)
D. Tarlow, R. Adams, R. Zemel

On the Partition Function and Random Maximum A-Posteriori Perturbations (ICML, 2012)
T. Hazan, T. Jaakkola

A Simple Geometric Interpretation of SVM using Stochastic Adversaries (AISTATS, 2012)
R. Livni, K. Crammer, A. Globerson
Extreme value statistics
Discrete choice in psychology and economics
Mathematics and statistical physics

Information, Physics, and Computation (Oxford Univ. Press, 2009)
M. Mezard, A. Montanari

The supremum of some canonical processes (Am. J. of Math, 1994)
M. Talagrand

Random weighting, asymptotic counting, and inverse isoperimetry (Isr. J. Math, 2007)
A. Barvinok, A. Samorodnitsky

Cover times, blanket times, and majorizing measures (arXiv, 2010)
J. Ding, J.R. Lee, Y. Peres

Smoothed Analysis: An Attempt to Explain the Behavior of Algorithms in Practice (CACM, 2009)
D.A. Spielman, S. Teng

Online Vertex-Weighted Bipartite Matching and Single-bid Budgeted Allocations (arXiv, 2010)
G. Aggarwal, G. Goel, C. Karande, A. Mehta
Optimization

Fast approximate energy minimization via graph cuts (PAMI, 2001)
Y. Boykov, O. Veksler, R. Zabih

Dynamic programming and graph algorithms in computer vision (PAMI, 2011)
P.F. Felzenszwalb, R. Zabih

Introduction to dual decomposition for inference (MIT Press, 2011)
D. Sontag, A. Globerson, T. Jaakkola

Optimization for Machine Learning (MIT Press, 2011)
S. Sra, S. Nowozin, S. Wright (eds.)

Markov Random Fields for Vision and Image Processing (MIT Press, 2011)
A. Blake, P. Kohli, and C. Rother (eds.)

Stochastic Programming (Kluwer, 1995)
A. Prekopa

Robust Optimization (Princeton Univ. Press, 2009)
A. Ben-Tal, L. El Ghaoui, A. Nemirovski
