CIAR Summer School Tutorial

Lecture 2b

Learning a Deep Belief Net

**A neural network model of digit recognition**

**Learning by dividing and conquering**

**Another way to divide and conquer**

**Why it's hard to learn one layer at a time**

**Using complementary priors to eliminate explaining away**

**An example of a complementary prior**

**Inference in a DAG with replicated weights**

**A picture of the Boltzmann machine learning algorithm for an RBM**
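The maximum-likelihood learning rule this picture illustrates can be written out explicitly. In the standard notation (a sketch of the rule, with $\varepsilon$ a learning rate, $\langle\cdot\rangle^0$ an expectation under the data distribution, and $\langle\cdot\rangle^\infty$ an expectation at equilibrium):

```latex
\Delta w_{ij} \;=\; \varepsilon \left( \langle v_i h_j \rangle^{0} \;-\; \langle v_i h_j \rangle^{\infty} \right)
```

The first term raises the energy landscape's fit to the data; the second, which requires running the Markov chain to equilibrium, is the expensive part that contrastive divergence later short-circuits.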

**Pros and cons of replicating the weights**

**Contrastive divergence learning: A quick way to learn an RBM**
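A minimal sketch of one CD-1 update for a binary RBM may help make the idea concrete. This is an illustrative toy, not the tutorial's code: biases are omitted, and all sizes, names, and hyperparameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.1):
    """One CD-1 weight update for a binary RBM (biases omitted for brevity)."""
    # Up-pass: sample binary hidden states from the data
    h0_prob = sigmoid(v0 @ W)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Down-pass: one-step reconstruction of the visible units
    v1_prob = sigmoid(h0 @ W.T)
    # Up-pass on the reconstruction (probabilities suffice for the statistics)
    h1_prob = sigmoid(v1_prob @ W)
    # Approximate gradient: <v h>_data - <v h>_reconstruction
    grad = v0.T @ h0_prob - v1_prob.T @ h1_prob
    return W + lr * grad / v0.shape[0]

# Toy usage: 20 random binary patterns, 6 visible and 4 hidden units
V = (rng.random((20, 6)) < 0.5).astype(float)
W = 0.01 * rng.standard_normal((6, 4))
for _ in range(100):
    W = cd1_step(V, W)
```

The point of CD-1 is that the reconstruction statistics replace the equilibrium statistics, so no long Markov chain is needed.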

**Multilayer contrastive divergence**
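The greedy layer-by-layer scheme can be sketched as follows: train one RBM with CD-1, then feed its hidden activities upward as the "data" for the next RBM. Again an illustrative toy under assumed sizes, with binary units and no biases.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=50):
    """Train one binary RBM with CD-1 (biases omitted for brevity)."""
    W = 0.01 * rng.standard_normal((data.shape[1], n_hidden))
    for _ in range(epochs):
        h_prob = sigmoid(data @ W)
        h = (rng.random(h_prob.shape) < h_prob).astype(float)
        v1 = sigmoid(h @ W.T)                  # one-step reconstruction
        h1 = sigmoid(v1 @ W)
        W += lr * (data.T @ h_prob - v1.T @ h1) / data.shape[0]
    return W

# Greedy layer-wise stacking: each layer's hidden activities
# become the training data for the layer above.
data = (rng.random((50, 12)) < 0.5).astype(float)
weights = []
for n_hidden in (8, 8):            # two hidden layers
    W = train_rbm(data, n_hidden)
    weights.append(W)
    data = sigmoid(data @ W)       # pass activities up to the next layer
```

Treating the hidden activities as data is what lets each new layer model the structure the layer below failed to capture.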

**A simplified version with all hidden layers the same size**

**Why the hidden configurations should be treated as data when learning the next layer of weights**

**Examples of correctly recognized MNIST test digits (the 49 closest calls)**

**The flaws in the wake-sleep algorithm**

**The up-down algorithm: A contrastive divergence version of wake-sleep**

**The receptive fields of the first hidden layer**