GP:  Bayesian modelling using Gaussian processes.

The 'gp' programs implement Bayesian models for regression and
classification tasks that are based on Gaussian process priors over
functions.  For details, see Carl Rasmussen's thesis, Evaluation of
Gaussian Processes and Other Methods for Non-linear Regression, and my
technical report on "Monte Carlo implementation of Gaussian process
models for Bayesian regression and classification".

A Gaussian process specifies a distribution over functions from some
number of inputs to some number of real outputs.  (For the Gaussian
processes implemented here, the functions for different outputs are
independent.)  In the Bayesian models based on these Gaussian
processes, the functions in question are used to define a model for
the target attributes in training and test cases given the values of
the input attributes in these cases.  The Gaussian process gives the
prior distribution over these functions.
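
As a rough illustration of this idea (not part of the 'gp' programs
themselves), the sketch below draws one function from a zero-mean
Gaussian process prior at a set of input values and uses it to produce
noisy regression targets.  The squared-exponential covariance, the
jitter term, and the noise level are all assumptions made only for
concreteness; the last line shows how functions for several outputs
would be drawn independently.

  import numpy as np

  def cov_matrix(x, scale=1.0, lengthscale=0.3, jitter=1e-8):
      # Squared-exponential covariance at all pairs of inputs (an
      # assumed form, for illustration), plus jitter for stability.
      d = x[:, None] - x[None, :]
      return (scale**2 * np.exp(-0.5 * (d / lengthscale)**2)
              + jitter * np.eye(len(x)))

  rng = np.random.default_rng(1)
  x = np.linspace(0.0, 1.0, 50)     # input attribute values
  K = cov_matrix(x)                 # prior covariance of function values

  # One function drawn from the prior, then noisy regression targets.
  f = rng.multivariate_normal(np.zeros(len(x)), K)
  t = f + 0.1 * rng.standard_normal(len(x))

  # With several outputs, each output's function is drawn independently.
  f_two_outputs = rng.multivariate_normal(np.zeros(len(x)), K, size=2)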

A Gaussian process is specified by giving its covariance function,
which is expressed in terms of "hyperparameters", which are themselves
given prior distributions.  (The means for the Gaussian processes
implemented here are always zero.)  To see how the form of the
covariance function and the priors for the hyperparameters are
specified, see gp-spec.doc.
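
Purely as a hedged sketch of this idea (the actual covariance terms and
hyperparameter priors are set with gp-spec, as described in
gp-spec.doc), the hyperparameters of the covariance function above
could themselves be drawn from priors before the function is drawn; the
log-normal priors here are an assumption made only for illustration.

  import numpy as np

  rng = np.random.default_rng(2)
  x = np.linspace(0.0, 1.0, 50)

  # Hyperparameters drawn from assumed log-normal priors.
  scale = np.exp(rng.normal(0.0, 0.5))
  lengthscale = np.exp(rng.normal(-1.0, 0.5))

  # Covariance given the sampled hyperparameters (same assumed
  # squared-exponential form as before), then a draw from the prior.
  d = x[:, None] - x[None, :]
  K = (scale**2 * np.exp(-0.5 * (d / lengthscale)**2)
       + 1e-8 * np.eye(len(x)))
  f = rng.multivariate_normal(np.zeros(len(x)), K)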

The models based on these Gaussian processes are described using a
general scheme that is also used for neural network models; see
model-spec.doc for details.  Survival models are not allowed.

            Copyright (c) 1996, 1998 by Radford M. Neal