Method home for mlp-bgd-1

The mlp-bgd-1 method performs regression and classification using a multilayer perceptron neural network with one hidden layer, trained by batch gradient descent. The training data is divided into four equal parts, and four training runs are done, each on 3/4 of the data. The remaining 1/4 in each run serves as a validation set to determine the best stopping point. Predictions are made with the resulting ensemble of four networks. See the notes for more details.
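The procedure above can be sketched as follows for the regression case. This is a minimal illustration, not the actual implementation (which uses Radford Neal's software); the network size, learning rate, and epoch count here are arbitrary placeholder values, and only the overall scheme — four networks, each trained by full-batch gradient descent on 3/4 of the data with early stopping on the held-out 1/4, then averaged — follows the description:

```python
import numpy as np

def train_mlp(X, y, Xval, yval, hidden=8, lr=0.1, epochs=500, rng=None):
    """One-hidden-layer MLP trained by full-batch gradient descent on
    half-MSE, keeping the weights with the lowest validation error
    (early stopping on the held-out quarter)."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    best = (np.inf, W1.copy(), b1.copy(), W2.copy(), b2.copy())
    for _ in range(epochs):
        # forward pass over the whole training batch
        H = np.tanh(X @ W1 + b1)
        err = H @ W2 + b2 - y.reshape(-1, 1)
        # backward pass (gradients of half mean squared error)
        gW2 = H.T @ err / n; gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H**2)      # tanh derivative
        gW1 = X.T @ dH / n; gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
        # validation error on the held-out 1/4 picks the stopping point
        vH = np.tanh(Xval @ W1 + b1)
        verr = np.mean((vH @ W2 + b2 - yval.reshape(-1, 1))**2)
        if verr < best[0]:
            best = (verr, W1.copy(), b1.copy(), W2.copy(), b2.copy())
    return best[1:]

def predict(params, X):
    W1, b1, W2, b2 = params
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

def train_ensemble(X, y, folds=4):
    """Divide the data into four equal parts; each network trains on 3/4
    and validates on the remaining 1/4."""
    idx = np.arange(len(X)) % folds
    nets = []
    for k in range(folds):
        tr, va = idx != k, idx == k
        nets.append(train_mlp(X[tr], y[tr], X[va], y[va],
                              rng=np.random.default_rng(k)))
    return nets

def predict_ensemble(nets, X):
    # predictions are the average over the four trained networks
    return np.mean([predict(p, X) for p in nets], axis=0)
```

Note that each of the four networks sees a different validation quarter, so each may stop at a different point; averaging the four predictions is what makes this an ensemble method rather than simple cross-validated early stopping.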

Software

This method uses the software for flexible Bayesian modeling written by Radford Neal (release of 1997-06-22), available from Radford Neal's home page.

Results

A directory listing of the results (and source files) is available for the mlp-bgd-1 method. Put the desired files in the appropriate methods directory in your delve hierarchy, uncompress them using the "gunzip *.gz" command, and untar them using "tar -xvf *.tar".

Related References

Neal, R. M. (1998) "Assessing relevance determination methods using DELVE", to appear in C. M. Bishop (ed.) Generalization in Neural Networks and Machine Learning, Springer-Verlag.
Last Updated 20 May 1998
Comments and questions to: delve@cs.toronto.edu
Copyright