Set up a Markov chain that samples from the existing model.
The samples can then be used to get a noisy estimate of the last term in the derivative.
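For reference, the term in question is the model-dependent expectation in the log-likelihood gradient of a product of experts; writing expert m as p_m with parameters theta_m (standard product-of-experts notation, assumed here rather than taken from this passage), the gradient has the form

$$
\frac{\partial \log p(d\mid\theta_1,\dots,\theta_n)}{\partial \theta_m}
= \frac{\partial \log p_m(d\mid\theta_m)}{\partial \theta_m}
- \sum_{c} p(c\mid\theta_1,\dots,\theta_n)\,
  \frac{\partial \log p_m(c\mid\theta_m)}{\partial \theta_m}
$$

and the fantasies from the Markov chain give a Monte Carlo estimate of the final sum.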
The chain may need to run for a long time before the fantasies it produces have the correct distribution.
For uni-gauss experts we can set up a Markov chain by sampling the hidden state of each expert.
The hidden state is whether it used the Gaussian or the uniform.
The experts' hidden states can be sampled in parallel.
This is a big advantage of products of experts.
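A minimal sketch of such a chain, assuming one-dimensional data, hypothetical parameter values, and the usual Gibbs-sampling scheme (alternately sampling every expert's hidden state given the current fantasy, then a new fantasy given the hidden states); the names mu, sigma, pi, lo, hi are illustrative, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uni-gauss experts: each is pi_m * N(mu_m, sigma_m^2) + (1 - pi_m) * Uniform(lo, hi).
mu    = np.array([-1.0, 0.5, 2.0])
sigma = np.array([ 0.8, 1.2, 0.6])
pi    = np.array([ 0.7, 0.9, 0.8])   # probability each expert uses its Gaussian
lo, hi = -10.0, 10.0                 # support of the shared uniform component

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def gibbs_step(x):
    # 1) Sample every expert's hidden state in parallel, given the current fantasy x.
    #    s[m] = True means expert m "used the Gaussian", False means the uniform.
    p_gauss = pi * gauss_pdf(x, mu, sigma)
    p_unif  = (1.0 - pi) / (hi - lo)
    s = rng.random(mu.size) < p_gauss / (p_gauss + p_unif)

    # 2) Sample a new fantasy given the hidden states.  Experts that chose the uniform
    #    contribute a constant over [lo, hi], so x comes from the product of the selected
    #    Gaussians (itself a Gaussian: precisions add), restricted to [lo, hi].
    if not s.any():
        return rng.uniform(lo, hi)
    prec = np.sum(1.0 / sigma[s] ** 2)
    mean = np.sum(mu[s] / sigma[s] ** 2) / prec
    std  = 1.0 / np.sqrt(prec)
    while True:                       # simple rejection to stay inside [lo, hi]
        x_new = rng.normal(mean, std)
        if lo <= x_new <= hi:
            return x_new

# Run the chain long enough that the fantasies approach the model's distribution.
x = 0.0
fantasies = []
for t in range(5000):
    x = gibbs_step(x)
    if t >= 1000:                     # discard burn-in
        fantasies.append(x)
```

Note that step 1 touches each expert independently given the current fantasy, which is why the hidden states can all be sampled at once.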