Deep Autoencoders
(Ruslan Salakhutdinov)
They always looked like a really nice way to do non-linear dimensionality reduction:
But it is very difficult to optimize deep autoencoders using backpropagation.
We now have a much better way to optimize them:
First train a stack of 4 RBMs (sketched below).
Then “unroll” them.
Then fine-tune with backprop.
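A minimal sketch of the pretraining step in Python/NumPy, assuming binarized 28x28 (784-dimensional) inputs; the function name train_rbm, the hyperparameters, and the random stand-in data are illustrative, not from the lecture. Each RBM is trained with one step of contrastive divergence (CD-1), and its hidden-unit activities become the training data for the next RBM in the stack:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.1, batch=100):
    # Train one RBM with CD-1; rows of `data` are training cases.
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        for i in range(0, len(data), batch):
            v0 = data[i:i + batch]
            # Positive phase: hidden probabilities and a binary sample.
            p_h0 = sigmoid(v0 @ W + b_h)
            h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
            # Negative phase: one Gibbs step (reconstruct, then re-infer).
            p_v1 = sigmoid(h0 @ W.T + b_v)
            p_h1 = sigmoid(p_v1 @ W + b_h)
            # Contrastive-divergence updates.
            W   += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
            b_v += lr * (v0 - p_v1).mean(axis=0)
            b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_v, b_h

# Greedy layer-by-layer pretraining of the 784-1000-500-250-30 stack.
layer_sizes = [784, 1000, 500, 250, 30]
X = (rng.random((1000, 784)) < 0.5).astype(float)   # stand-in for binarized MNIST
weights, vis_biases, hid_biases = [], [], []
layer_input = X
for n_hid in layer_sizes[1:]:
    W, b_v, b_h = train_rbm(layer_input, n_hid)
    weights.append(W); vis_biases.append(b_v); hid_biases.append(b_h)
    # Hidden activities of this RBM are the data for the next RBM.
    layer_input = sigmoid(layer_input @ W + b_h)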
[Figure: the unrolled autoencoder: 28x28 image → 1000 neurons → 500 neurons → 250 neurons → 30-unit code → 250 neurons → 500 neurons → 1000 neurons → 28x28 reconstruction.]
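Continuing the sketch above (it reuses weights, hid_biases, vis_biases and X from the pretraining code), the "unroll" and fine-tuning steps might look as follows. PyTorch, plain SGD, and all-sigmoid layers are used here only for brevity; the construction details are assumptions, not the lecture's exact recipe:

import torch
import torch.nn as nn

def linear_from(W, b):
    # Build an nn.Linear whose weights/biases are copied from a NumPy RBM.
    layer = nn.Linear(W.shape[0], W.shape[1])
    layer.weight.data = torch.tensor(W.T, dtype=torch.float32)
    layer.bias.data = torch.tensor(b, dtype=torch.float32)
    return layer

# "Unroll": encoder layers use the RBM weights and hidden biases,
# decoder layers use the transposed weights and visible biases.
encoder, decoder = [], []
for W, b_h, b_v in zip(weights, hid_biases, vis_biases):
    encoder.append(linear_from(W, b_h))
    decoder.insert(0, linear_from(W.T, b_v))

modules = []
for lin in encoder + decoder:
    modules.append(lin)
    modules.append(nn.Sigmoid())   # all-sigmoid for simplicity
autoencoder = nn.Sequential(*modules)

# Fine-tune the whole 784-1000-500-250-30-250-500-1000-784 net with backprop
# on the reconstruction error.
optimizer = torch.optim.SGD(autoencoder.parameters(), lr=0.01)
X_t = torch.tensor(X, dtype=torch.float32)
for epoch in range(10):
    recon = autoencoder(X_t)
    loss = nn.functional.binary_cross_entropy(recon, X_t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()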