Perceptrons
“Perceptrons” describes a whole family of learning
machines, but the standard type consisted of a layer of
fixed non-linear basis functions followed by a simple
linear discriminant function.
They were introduced in the late 1950s and came with a
simple online learning procedure.
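To make this concrete, here is a minimal sketch (not from the original text) of a standard perceptron: a layer of fixed, randomly chosen threshold basis functions followed by a linear discriminant trained with the online mistake-driven update rule. The toy data, the number of basis functions, and the choice of random threshold units are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D binary classification data with labels in {-1, +1}
# (an arbitrary, XOR-like problem chosen only for illustration).
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)

# Fixed (non-adaptive) non-linear basis functions:
# phi(x) = step(R x + c), where R and c are drawn once and never learned.
n_basis = 50
R = rng.normal(size=(n_basis, 2))
c = rng.normal(size=n_basis)

def phi(x):
    return (R @ x + c > 0).astype(float)

# The only learned parameters: a linear discriminant on top of phi(x).
w = np.zeros(n_basis)
b = 0.0

# Online perceptron learning procedure: visit examples one at a time and
# update the weights only when the current example is misclassified.
for epoch in range(20):
    for x_i, y_i in zip(X, y):
        f = phi(x_i)
        if y_i * (w @ f + b) <= 0:   # mistake on this example
            w += y_i * f             # nudge the discriminant toward it
            b += y_i

accuracy = np.mean([np.sign(w @ phi(x_i) + b) == y_i for x_i, y_i in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```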
Grand claims were made about their abilities. This led
to lots of controversy.
Researchers in symbolic AI, most notably Minsky and
Papert, emphasized their limitations (as part of an
ideological campaign against real numbers, probabilities,
and learning).
Support Vector Machines are just perceptrons with a
clever way of choosing the non-adaptive, non-linear
basis functions and a better learning procedure.
They have all the same limitations as perceptrons in
what types of function they can learn.
But people seem to have forgotten this.
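A hedged sketch of this comparison, using scikit-learn: a plain perceptron trained on fixed random Fourier features that approximate an RBF kernel's feature space, next to an SVM using that RBF kernel. Both are linear discriminants in a fixed non-linear feature space; the SVM differs only in how the basis functions are chosen (implicitly, via the kernel) and in its max-margin learning procedure. The dataset, the value of gamma, and the number of random features are arbitrary choices for illustration.

```python
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import Perceptron
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.15, random_state=0)

# A perceptron on fixed, non-adaptive basis functions: random Fourier
# features approximating an RBF kernel's feature space.
features = RBFSampler(gamma=2.0, n_components=500, random_state=0)
Z = features.fit_transform(X)
perceptron = Perceptron(max_iter=100, random_state=0).fit(Z, y)

# An SVM with the corresponding RBF kernel: the same kind of fixed
# feature space, but with a max-margin learning procedure instead of
# mistake-driven perceptron updates.
svm = SVC(kernel="rbf", gamma=2.0).fit(X, y)

print("perceptron on fixed RBF features:", perceptron.score(Z, y))
print("SVM with RBF kernel:             ", svm.score(X, y))
```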