Self-supervised backprop in deep autoencoders
We can put extra hidden layers between the input
and the bottleneck and between the bottleneck
and the output.
This gives a non-linear generalization of PCA.
It should be very good for non-linear
dimensionality reduction.
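
A minimal sketch of such a deep autoencoder in PyTorch. The 784-unit input (e.g. a flattened 28x28 image), the 30-unit bottleneck, the intermediate layer sizes, and the sigmoid non-linearities are all illustrative assumptions, not specifics from these slides.

import torch
import torch.nn as nn

class DeepAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Extra hidden layers between the input and the bottleneck.
        self.encoder = nn.Sequential(
            nn.Linear(784, 256), nn.Sigmoid(),
            nn.Linear(256, 64), nn.Sigmoid(),
            nn.Linear(64, 30),  # the low-dimensional code (bottleneck)
        )
        # Extra hidden layers between the bottleneck and the output.
        self.decoder = nn.Sequential(
            nn.Linear(30, 64), nn.Sigmoid(),
            nn.Linear(64, 256), nn.Sigmoid(),
            nn.Linear(256, 784), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Self-supervised training: the target is the input itself,
# and the loss is the reconstruction error.
model = DeepAutoencoder()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.rand(32, 784)            # a dummy batch of 32 inputs
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), x)
loss.backward()                    # plain backpropagation
optimizer.step()
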
Such deep networks are very hard to train with
backpropagation, so deep autoencoders have been
a big disappointment.
But we recently found a very effective method of
training them, which will be described next week.