What kind of a Graphical Model is the Brain?

**Two types of unsupervised neural network**

**The learning rule for sigmoid belief nets**
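
In a sigmoid belief net, each binary unit turns on with a probability given by the logistic function of its weighted input from its parents. Given binary states sampled from the posterior distribution over hidden units, the maximum-likelihood learning rule takes the familiar delta-rule form (notation assumed: $s_j$ is a parent state, $s_i$ a child state, $w_{ji}$ the connecting weight):

```latex
p_i = \sigma\!\Big(b_i + \sum_j s_j w_{ji}\Big), \qquad
\Delta w_{ji} = \varepsilon \, s_j \,(s_i - p_i)
```

The rule is local: each weight only needs the states of the two units it connects and the child's predicted probability.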

**Why learning is hard in a sigmoid belief net**

**How a Boltzmann Machine models data**

**The Energy of a joint configuration**
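
For binary units $s_i$ with biases $b_i$ and symmetric weights $w_{ij}$, the energy of a joint configuration $\mathbf{s}$ of a Boltzmann machine is conventionally written as:

```latex
E(\mathbf{s}) = -\sum_i s_i b_i \;-\; \sum_{i<j} s_i s_j w_{ij}
```

Low-energy configurations are ones in which strongly positively connected units tend to be on together.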

**Using energies to define probabilities**
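
The energy function defines a Boltzmann distribution over joint configurations, with the probability of a visible vector obtained by summing over hidden configurations:

```latex
p(\mathbf{v},\mathbf{h}) = \frac{e^{-E(\mathbf{v},\mathbf{h})}}{Z}, \qquad
p(\mathbf{v}) = \frac{\sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})}}{Z}, \qquad
Z = \sum_{\mathbf{v}',\mathbf{h}'} e^{-E(\mathbf{v}',\mathbf{h}')}
```

The partition function $Z$ ranges over all joint configurations, which is what makes exact learning intractable in large networks.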

**Four reasons why learning is impractical in Boltzmann Machines**

**A picture of the Boltzmann machine learning algorithm for an RBM**

**Contrastive divergence learning: A quick way to learn an RBM**
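
A minimal sketch of CD-1 for a binary RBM, assuming NumPy and illustrative sizes (6 visible units, 4 hidden units): drive the hidden units from the data, take one step of alternating Gibbs sampling to get a reconstruction, and change each weight in proportion to the difference between the data-driven and reconstruction-driven pairwise statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, a, b, v0, lr=0.1):
    """One CD-1 step for a binary RBM.
    W: (n_visible, n_hidden) weights; a, b: visible/hidden biases."""
    # Positive phase: hidden probabilities driven by the data.
    h0_prob = sigmoid(v0 @ W + b)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # One step of alternating Gibbs sampling: the "reconstruction".
    v1_prob = sigmoid(h0 @ W.T + a)
    v1 = (rng.random(v1_prob.shape) < v1_prob).astype(float)
    h1_prob = sigmoid(v1 @ W + b)
    # CD-1 gradient estimate: <v h>_data - <v h>_reconstruction.
    n = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1.T @ h1_prob) / n
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, a, b

# Toy data: 20 binary vectors over 6 visible units.
v = (rng.random((20, 6)) < 0.5).astype(float)
W = 0.01 * rng.standard_normal((6, 4))
a = np.zeros(6)
b = np.zeros(4)
for _ in range(100):
    W, a, b = cd1_update(W, a, b, v)
```

CD-1 does not follow the true likelihood gradient (the negative phase is a one-step reconstruction rather than a sample from the model's equilibrium distribution), but it is fast and works well in practice.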

**Using an RBM to learn a model of a digit class**

**The weights learned by the 100 hidden units**

**A surprising relationship between Boltzmann Machines and Sigmoid Belief Nets**

**Using complementary priors to eliminate explaining away**

**An example of a complementary prior**

**Inference in a DAG with replicated weights**

**Learning by dividing and conquering**

**Another way to divide and conquer**

**Pros and cons of replicating the weights**

**Multilayer contrastive divergence**

**A simplified version with all hidden layers the same size**

**Why the hidden configurations should be treated as data when learning the next layer of weights**

**A neural network model of digit recognition**

**Examples of correctly recognized MNIST test digits (the 49 closest calls)**

**Learning with realistic labels**

**A different way to capture low-dimensional manifolds**

**The flaws in the wake-sleep algorithm**

**The up-down algorithm: A contrastive divergence version of wake-sleep**

**The receptive fields of the first hidden layer**

**The generative fields of the first hidden layer**

**Independence relationships of hidden variables in three types of model**