NIPS 2014 Workshop: Perturbations, Optimization, and Statistics

December 12, 2014. Workshop in conjunction with NIPS 2014.

Submission deadline: Nov 9, 2014.

Confirmed Speakers

Description

In nearly all machine learning tasks, decisions must be made given current knowledge (e.g., choose which label to predict). Perhaps surprisingly, making the best decision at every step is not always the best strategy, particularly while learning. Recently, an emerging body of work has studied learning under rules that apply perturbations to the decision procedure. These works provide simple and efficient learning rules with improved theoretical guarantees. This workshop will bring together the growing community of researchers interested in different aspects of this area, and it will broaden our understanding of why and how perturbation methods can be useful.

Over the last couple of years, at the highly successful NIPS workshops on Perturbations, Optimization, and Statistics, we looked at how injecting perturbations (whether random or adversarial "noise") into learning and inference procedures can be beneficial. The focus was on two angles: first, how stochastic perturbations can be used to construct new types of probability models for structured data; and second, how deterministic perturbations affect the regularization and generalization properties of learning algorithms.
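As a concrete illustration of the first angle, the sketch below shows the well-known Gumbel-max trick: adding independent Gumbel noise to the log-potentials of a discrete model turns maximization (a MAP computation) into exact sampling. This is a minimal sketch of the idea in Python; the function and variable names are illustrative and not taken from any particular paper or library.

    import numpy as np

    def gumbel_max_sample(logits, rng):
        # Perturb each log-potential with independent Gumbel(0, 1) noise and
        # return the argmax; this argmax is an exact sample from the
        # distribution p(i) proportional to exp(logits[i]).
        noise = rng.gumbel(loc=0.0, scale=1.0, size=len(logits))
        return int(np.argmax(logits + noise))

    rng = np.random.default_rng(0)
    logits = np.array([1.0, 2.0, 0.5, -1.0])          # hypothetical log-potentials
    draws = [gumbel_max_sample(logits, rng) for _ in range(10000)]
    print(np.bincount(draws, minlength=4) / 10000)    # empirical frequencies
    print(np.exp(logits) / np.exp(logits).sum())      # target (softmax) probabilities

The appeal of this viewpoint is that, for structured models, the argmax can often be computed efficiently even when exact summation cannot, which is what motivates perturb-and-MAP style approaches to building probability models.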

The goal of this workshop is to expand the scope of the previous workshops and to explore further ways of applying perturbations within optimization and statistics to improve machine learning approaches. This year, we would like to look at exciting new developments related to the above core themes.

In particular, we shall be interested in understanding the following issues:

  • Modeling: which models lend themselves to efficient learning by perturbations?

  • Regularization: can randomness be replaced by other mathematical objects while keeping the computational and statistical guarantees?

  • Robust optimization: how do stochastic and adversarial perturbations affect the learning outcome?

  • Dropout: how does stochastic dropout regularize online learning tasks? (see the sketch after this list)

  • Sampling: how can perturbations be used in new ways to sample from probabilistic models?
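To make the dropout question above concrete, here is a minimal sketch of inverted dropout applied within a single online (SGD) update of a linear model with squared loss. All names and hyperparameter values are illustrative assumptions, not a prescribed implementation.

    import numpy as np

    def sgd_step_with_dropout(w, x, y, rng, lr=0.1, keep_prob=0.5):
        # "Inverted" dropout: zero each input with probability 1 - keep_prob
        # and rescale the survivors so the expected activation is unchanged.
        mask = (rng.random(x.shape) < keep_prob) / keep_prob
        x_dropped = x * mask
        pred = x_dropped @ w                   # prediction with perturbed inputs
        grad = (pred - y) * x_dropped          # gradient of 0.5 * (pred - y)**2
        return w - lr * grad

    rng = np.random.default_rng(0)
    w = np.zeros(3)
    x, y = np.array([1.0, -0.5, 2.0]), 1.0
    for _ in range(100):
        w = sgd_step_with_dropout(w, x, y, rng)

Analyses in the literature relate the expected effect of such multiplicative input perturbations to an adaptive, ridge-like regularizer, which is one way the stochastic and deterministic views of perturbation connect.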

Target Audience: The workshop should appeal to NIPS attendees interested both in theoretical aspects such as Bayesian modeling, Monte Carlo sampling, optimization, inference, and learning, and in practical applications in computer vision and language modeling.

Call for Papers

In addition to a program of invited presentations, we solicit contributions of short papers that explore perturbation-based methods in the context of topics such as statistical modeling, sampling, inference, estimation, theory, robust optimization, and robust learning. We are interested in both theoretical and application-oriented work. We also welcome papers that explore connections between alternative ways of using perturbations.

Contributed papers should adhere to the NIPS format and are encouraged to be at most four pages long (not counting references). Papers submitted for review do not need to be anonymized. There will be no official proceedings. Thus, apart from papers reporting novel unpublished work, we also welcome submissions describing work in progress or summarizing a longer paper under review for a journal or conference (though this should be clearly stated). Accepted papers will be presented as posters; some may also be selected for spotlight talks.

Please submit papers in PDF format by email to posNIPS2014@gmail.com. The submission deadline is Nov 9, 2014 (please email us if you need an earlier decision for the sake of travel arrangements). At least one author must attend the workshop to present the work.

Organizers

Previous Workshops

This is the third year of this workshop. The 2012 POS Workshop website is here. The 2013 POS Workshop website is here.
