CIAR Summer School Tutorial
Lecture 1b

Sigmoid Belief Nets

Discovering causal structure as a goal for unsupervised learning

Bayes Nets: Directed Acyclic Graphical models

Ways to define the conditional probabilities
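
As a sketch of the standard choice (the symbols b_i, w_ji and pa(i) are introduced here for illustration, not taken from the slides): each binary unit is turned on with a probability given by a logistic function of its parents' binary states,

p(s_i = 1 \mid \mathrm{pa}(i)) \;=\; \sigma\Big(b_i + \sum_{j \in \mathrm{pa}(i)} s_j\, w_{ji}\Big), \qquad \sigma(x) = \frac{1}{1 + e^{-x}}

A full conditional probability table over the parents' states is also possible, but its size grows exponentially with the number of parents, whereas the sigmoid parameterization stays linear.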

What is easy and what is hard in a DAG?

Explaining away
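
In one line (a generic statement, not the slides' specific example): two causes that are marginally independent become dependent once a common effect is observed,

p(A, B) = p(A)\,p(B) \qquad \text{but in general} \qquad p(A, B \mid C) \neq p(A \mid C)\,p(B \mid C)

so, having observed the effect C, learning that cause A is present typically lowers the posterior probability of cause B. This is what makes exact posterior inference in densely connected sigmoid belief nets hard.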

The learning rule for sigmoid belief nets

The derivatives of the log probability
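
For the sigmoid parameterization above, the gradient of the log probability of a complete configuration s with respect to a generative weight has a simple delta-rule form (a standard result, in the same assumed notation):

\frac{\partial \log p(\mathbf{s})}{\partial w_{ji}} \;=\; s_j\,(s_i - p_i), \qquad p_i = p(s_i = 1 \mid \mathrm{pa}(i))

so the maximum-likelihood learning rule is \Delta w_{ji} \propto s_j (s_i - p_i), averaged over configurations of the hidden units sampled from the posterior given each data vector.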

A coding view

The cost of sending a complete configuration
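
Under the coding view, the description length of a complete configuration splits into one term per unit, each priced by its parents' prediction (same assumed notation):

C(\mathbf{s}) \;=\; -\log p(\mathbf{s}) \;=\; \sum_i \Big[ -\,s_i \log p_i \;-\; (1 - s_i)\log(1 - p_i) \Big]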

Minimizing the coding cost

The Free Energy
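
When a distribution Q over hidden configurations is used in place of the true posterior, the expected coding cost minus the entropy of Q is the free energy (a standard variational identity, assumed notation):

F(Q) \;=\; \sum_{\mathbf{h}} Q(\mathbf{h})\, E(\mathbf{h}, \mathbf{v}) \;+\; \sum_{\mathbf{h}} Q(\mathbf{h}) \log Q(\mathbf{h}), \qquad E(\mathbf{h}, \mathbf{v}) = -\log p(\mathbf{h}, \mathbf{v})

F(Q) is never less than -\log p(\mathbf{v}), and equals it exactly when Q is the true posterior p(\mathbf{h} \mid \mathbf{v}).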

Sampling from the posterior distribution

Gibbs sampling

The recipe for Gibbs sampling
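
A minimal sketch of that recipe for a net with one layer of binary hidden units driving one layer of binary visibles (the model layout, parameter names and shapes here are illustrative assumptions, not the slides' own code):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sweep(v, h, W, b, c, rng):
    # One sequential Gibbs sweep over the hidden units of a sigmoid
    # belief net with a single hidden layer:
    #   p(h_i = 1)     = sigmoid(b[i])                 (top-level prior)
    #   p(v_k = 1 | h) = sigmoid(c[k] + h @ W[:, k])   (generative model)
    for i in range(len(h)):
        # Prior log-odds for unit i (it has no parents, just a bias).
        logit = b[i]
        # Likelihood terms from the children: how well each visible unit
        # is predicted with h_i on versus off, keeping the rest of h fixed.
        base  = c + h @ W - h[i] * W[i]     # input to visibles with h_i = 0
        p_on  = sigmoid(base + W[i])        # child probabilities if h_i = 1
        p_off = sigmoid(base)               # child probabilities if h_i = 0
        logit += np.sum(v * (np.log(p_on) - np.log(p_off))
                        + (1 - v) * (np.log(1 - p_on) - np.log(1 - p_off)))
        # Resample unit i from its posterior given everything else.
        h[i] = 1.0 if rng.random() < sigmoid(logit) else 0.0
    return h

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(4, 6))            # hidden-to-visible weights
b, c = np.zeros(4), np.zeros(6)                  # hidden and visible biases
v = rng.integers(0, 2, size=6).astype(float)     # an observed data vector
h = rng.integers(0, 2, size=4).astype(float)     # arbitrary initial hidden state
for _ in range(100):                             # run the chain towards the posterior
    h = gibbs_sweep(v, h, W, b, c, rng)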

Computing the posterior for unit i given the rest

Terms in the global energy
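
Only a few of those terms involve a given unit, which is what keeps the Gibbs update above cheap: the unit's own coding cost plus the coding costs of its children. Collecting them gives the conditional log-odds (assumed notation; p_k^{(1)} and p_k^{(0)} are child k's probability of being on when s_i is set to 1 or 0):

\log\frac{p(s_i = 1 \mid \mathbf{s}_{-i})}{p(s_i = 0 \mid \mathbf{s}_{-i})}
\;=\; b_i + \sum_{j \in \mathrm{pa}(i)} s_j\, w_{ji}
\;+\; \sum_{k \in \mathrm{ch}(i)} \Big[ s_k \log\frac{p_k^{(1)}}{p_k^{(0)}} + (1 - s_k)\log\frac{1 - p_k^{(1)}}{1 - p_k^{(0)}} \Big]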

Approximate inference

The Free Energy

A trade-off between how well the model fits the data and the tractability of inference
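
The identity behind the trade-off (the standard variational bound, assumed notation):

\log p(\mathbf{v}) \;=\; -F(Q) \;+\; \mathrm{KL}\!\big(Q(\mathbf{h}) \,\|\, p(\mathbf{h} \mid \mathbf{v})\big) \;\ge\; -F(Q)

Maximizing -F(Q) with a restricted Q (for example a factorial distribution over the hidden units) therefore rewards models whose true posteriors are easy for that restricted family to approximate, even at some cost in how well the model fits the data.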

The wake-sleep algorithm
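
A minimal sketch of the two phases for a single hidden layer, in the same illustrative notation as the Gibbs sketch above (parameter names, shapes and the update schedule are assumptions, not the slides' own code):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p, rng):
    # Sample a vector of binary states with the given on-probabilities.
    return (rng.random(p.shape) < p).astype(float)

def wake_sleep_step(v, W_gen, b_gen, c_gen, W_rec, c_rec, lr, rng):
    # One wake-sleep update for a net with a single hidden layer.
    #   Generative model:  p(h) = sigmoid(b_gen),  p(v|h) = sigmoid(c_gen + h @ W_gen)
    #   Recognition model: q(h|v) = sigmoid(c_rec + v @ W_rec)

    # Wake phase: recognize the causes of a real data vector, then use the
    # delta rule to make the generative weights better at predicting both
    # the recognized hidden states and the data given those states.
    h   = sample(sigmoid(c_rec + v @ W_rec), rng)
    p_h = sigmoid(b_gen)                  # generative prediction of h
    p_v = sigmoid(c_gen + h @ W_gen)      # generative prediction of v given h
    b_gen += lr * (h - p_h)
    c_gen += lr * (v - p_v)
    W_gen += lr * np.outer(h, v - p_v)

    # Sleep phase: dream a fantasy from the generative model, then use the
    # delta rule to make the recognition weights better at recovering the
    # hidden states that actually produced the dream.
    h_dream = sample(sigmoid(b_gen), rng)
    v_dream = sample(sigmoid(c_gen + h_dream @ W_gen), rng)
    q_h = sigmoid(c_rec + v_dream @ W_rec)
    c_rec += lr * (h_dream - q_h)
    W_rec += lr * np.outer(v_dream, h_dream - q_h)

rng = np.random.default_rng(0)
n_h, n_v = 4, 6
W_gen = rng.normal(0.0, 0.1, size=(n_h, n_v)); b_gen = np.zeros(n_h); c_gen = np.zeros(n_v)
W_rec = rng.normal(0.0, 0.1, size=(n_v, n_h)); c_rec = np.zeros(n_h)
for _ in range(1000):
    v = sample(np.full(n_v, 0.5), rng)   # stand-in for a training vector
    wake_sleep_step(v, W_gen, b_gen, c_gen, W_rec, c_rec, 0.05, rng)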

What the wake phase achieves

The flaws in the wake-sleep algorithm

Mode averaging
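
The root cause (a standard observation about the algorithm, stated here rather than quoted from the slides): the sleep phase trains the recognition distribution Q to minimize \mathrm{KL}(P \,\|\, Q) rather than the \mathrm{KL}(Q \,\|\, P) that appears in the free energy, so when the true posterior P(\mathbf{h} \mid \mathbf{v}) has several well-separated modes, a factorial Q is pulled toward a blend of the modes instead of committing to one of them.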

Summary