Learning Methods

List of pointers to descriptions and implementations of learning methods in Delve.

--------------------------------------------------------------------

This page gives an overview of the learning methods that have been contributed to Delve. Under the individual methods you will find definitions, implementations and results of applying these methods to some of the Delve datasets. The descriptions should give enough detail that you could reproduce the results after reading the documentation. The methods are grouped according to the type of learning to which they are applicable. Some methods appear in multiple groups. Each method has been given a unique Delve name.

The results and implementation for each method are contained in tar files. The tar files can be accessed from the individual page describing the method or by clicking on the results anchor below.

Supervised Learning: Regression

  1. 1nn-1 download results
    One nearest neighbour based on Euclidean distance. Contributed by Radford Neal.
  2. base-1 download results
    Provides a baseline of performance that can be obtained by completely ignoring the input attributes, basing prediction solely on simple statistics regarding the targets in training cases - namely, the mean and median of the training targets. Contributed by Radford Neal.
  3. gp-map-1 download results
    Gaussian processes for regression, trained with a maximum a posteriori (MAP) approach implemented with conjugate gradient optimization. Contributed by Carl Edward Rasmussen.
  4. gp-mc-1 download results
    Gaussian processes for regression, trained with a fully Bayesian approach implemented using Markov chain Monte Carlo (MCMC). Contributed by Carl Edward Rasmussen.
  5. hme-el-1 download results
    Hierarchical mixtures-of-experts trained using ensemble learning. Contributed by Steve Waterhouse.
  6. hme-ese-1 download
    Hierarchical mixtures-of-experts trained using early stopping. Contributed by Steve Waterhouse.
  7. hme-grow-1 download results
    Hierarchical mixtures-of-experts trained using growing and early stopping. Contributed by Steve Waterhouse.
  8. knn-cv-1 download results
    K-nearest neighbours for regression, using leave-one-out cross-validation to select K. The prediction is the uniformly weighted average of the neighbours' targets (see the sketch following this list). Contributed by Carl Edward Rasmussen.
  9. lin-1 download results
    Linear least squares regression. Contributed by Carl Edward Rasmussen.
  10. mars3.6-bag-1 download results
    Multivariate Adaptive Regression Splines (MARS) version 3.6 with Bagging. MARS was written by Jerome Friedman; a front-end for Bagging was added by Michael Revow.
  11. me-el-1 download results
    Mixtures-of-experts trained using ensemble learning. Contributed by Steve Waterhouse.
  12. me-ese-1 download results
    Mixtures-of-experts trained using early stopping. Contributed by Steve Waterhouse.
  13. mlp-bgd-1 download results
    mlp-bgd-2 download results
    mlp-bgd-2b download results
    mlp-bgd-3 download results
    Variations on multilayer perceptron networks trained by batch gradient descent with early stopping ensembles, with and without methods for adapting to varying relevance of inputs. These methods were written by Radford Neal.
  14. mlp-ese-1 download results
    Multilayer perceptron ensembles trained with early stopping. The ensemble consists of networks with identical architectures: fully connected, with a single hidden layer of hyperbolic tangent units, trained using conjugate gradient optimization. Contributed by Carl Edward Rasmussen.
  15. mlp-mc-1 download results
    Multilayer perceptron networks trained by Bayesian learning using MCMC methods. Designed by Carl Edward Rasmussen, using software written by Radford Neal.
  16. mlp-mc-2 download results
    mlp-mc-2b download results
    mlp-mc-3 download results
    mlp-mc-3b download results
    mlp-mc-4 download results
    mlp-mc-4b download results
    Variations on multilayer perceptron networks trained by Bayesian learning using MCMC methods, with and without Automatic Relevance Determination. These methods were written by Radford Neal.
  17. mlp-mdl-vh download results
    Multilayer perceptron networks trained using Minimum Description Length (MDL) principles with a variable number of hidden units. Contributed by Michael Revow.
  18. mlp-mdl-3h download results
    Multilayer perceptron networks trained using Minimum Description Length (MDL) principles with a fixed number of hidden units. Contributed by Michael Revow.
  19. mlp-wd-1 download results
    Multilayer perceptron networks trained using weight decay. Contributed by Michael Revow.
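
Many of the methods above follow standard recipes. As a concrete illustration of the knn-cv-1 entry, the sketch below shows one way to select K by leave-one-out cross-validation and to predict with the uniformly weighted average of the neighbours' targets. This is a minimal Python illustration of the idea, not the Delve implementation; the function names are invented here.

    import numpy as np

    def loo_select_k(X, y, max_k):
        """Pick K (1 <= K <= max_k < len(y)) minimizing leave-one-out squared error."""
        # Pairwise Euclidean distances between training cases.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)        # a case may not be its own neighbour
        order = np.argsort(d, axis=1)      # neighbours sorted by increasing distance
        best_k, best_err = 1, np.inf
        for k in range(1, max_k + 1):
            pred = y[order[:, :k]].mean(axis=1)   # uniformly weighted average
            err = np.mean((pred - y) ** 2)        # leave-one-out squared error
            if err < best_err:
                best_k, best_err = k, err
        return best_k

    def knn_predict(X, y, x_new, k):
        """Predict the target for x_new as the mean target of its k nearest cases."""
        nearest = np.argsort(np.linalg.norm(X - x_new, axis=1))[:k]
        return y[nearest].mean()

With K fixed at 1, the same prediction rule describes the 1nn-1 entry.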

Supervised Learning: Classification

  1. 1nn-1 download results
    One nearest neighbour based on Euclidean distance. Contributed by Radford Neal.
  2. base-1 download results
    Provides a baseline of performance that can be obtained by completely ignoring the input attributes, basing prediction solely on simple statistics regarding the targets in training cases - namely, the frequencies of the target classes (a sketch of this idea follows the list). Contributed by Radford Neal.
  3. cart-1 download results
    A basic classification and regression tree (CART) implementation.
  4. knn-class-1 download results
    A K-nearest neighbour implementation for classification.
  5. mlp-bgd-1 download results
    mlp-bgd-2 download results
    Variations on multilayer perceptron networks trained by batch gradient descent with early stopping ensembles, with and without methods for adapting to varying relevance of inputs. These methods were written by Radford Neal.
  6. mlp-mc-2 download results
    mlp-mc-3 download results
    mlp-mc-4 download results
    Variations on multilayer perceptron networks trained by Bayesian learning using MCMC methods, with and without Automatic Relevance Determination. These methods were written by Radford Neal.
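
The base-1 entry in this list admits an equally small sketch: ignore the inputs entirely and always predict the most frequent class among the training targets. Again, this is only an illustration of the idea in Python, not Radford Neal's implementation.

    from collections import Counter

    def fit_baseline(train_targets):
        """Return the modal (most frequent) class of the training targets."""
        return Counter(train_targets).most_common(1)[0][0]

    def predict_baseline(modal_class, n_test_cases):
        # Every test case receives the same prediction, whatever its inputs.
        return [modal_class] * n_test_cases

Such a baseline is useful for judging whether a more elaborate method is extracting any information from the inputs at all.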

Unsupervised Learning

Currently there are no methods for unsupervised learning in Delve.


Last Updated 21 May 1998
Comments and questions to: delve@cs.toronto.edu