Let's proceed with the coin tossing example. We have to formalize our prior; let's do it like this:
%matplotlib inline
from numpy import *
from scipy import stats
from matplotlib.pyplot import *
from numpy.linalg import norm
t = arange(.01, 1, .01)
prior = .1*stats.norm.pdf(t, loc=.5, scale=1) + .9*stats.norm.pdf(t, loc=.5, scale=.035)
prior /= sum(prior) #Make everything sum up to 1
figure(1)
plot(t,prior)
title("Prior")
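As a quick sanity check, a self-contained sketch (rebuilding the same grid and mixture locally) that confirms the discretized prior sums to 1 and measures how much belief it places near a fair coin:

```python
import numpy as np
from scipy import stats

# Same grid and two-Gaussian mixture as above, rebuilt locally.
t = np.arange(.01, 1, .01)
prior = .1*stats.norm.pdf(t, loc=.5, scale=1) + .9*stats.norm.pdf(t, loc=.5, scale=.035)
prior /= prior.sum()  # normalize so the grid values form a probability mass function

# Probability mass the prior assigns to a roughly fair coin.
p_fair = prior[(t > .45) & (t < .55)].sum()
print(p_fair)
```

Most of the mass sits in that narrow band, which is exactly what "we think the coin is basically fair" should look like.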
This prior says we think the coin is basically fair: 90% of the weight sits on a narrow Gaussian around 50%, with the remaining 10% on a wide Gaussian allowing larger deviations. Let's see what happens if we toss the coin 100 times and get 60 heads.
figure(2)
N = 60 # number of heads out of 100 tosses
likelihood = (t**N)*((1-t)**(100-N))
plot(t, likelihood)
title("Likelihood")
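Note that the binomial coefficient C(100, 60) is left out of the likelihood. Since it does not depend on t, it only rescales the curve and cancels under normalization; a small self-contained sketch to confirm:

```python
import numpy as np
from scipy.special import comb

t = np.arange(.01, 1, .01)
N = 60  # heads out of 100 tosses

like_plain = t**N * (1-t)**(100-N)     # binomial coefficient omitted
like_full = comb(100, N) * like_plain  # full binomial pmf at each t

# Once each curve is rescaled to sum to 1, the constant factor is gone:
print(np.allclose(like_plain/like_plain.sum(), like_full/like_full.sum()))  # prints True
```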
Now we can look at the posterior, which is proportional to the likelihood times the prior:
posterior = likelihood*prior
posterior /= sum(posterior)
figure(3)
plot(t, posterior)
title("Posterior")
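To turn the curve into numbers, a self-contained sketch (rebuilding the same quantities locally) that normalizes the posterior and reads off its mean and the grid MAP estimate:

```python
import numpy as np
from scipy import stats

t = np.arange(.01, 1, .01)
prior = .1*stats.norm.pdf(t, loc=.5, scale=1) + .9*stats.norm.pdf(t, loc=.5, scale=.035)
prior /= prior.sum()

heads, tosses = 60, 100
likelihood = t**heads * (1-t)**(tosses-heads)

posterior = likelihood*prior
posterior /= posterior.sum()       # normalize on the grid

post_mean = (t*posterior).sum()    # posterior mean of the coin's bias
t_map = t[posterior.argmax()]      # grid MAP estimate
print(post_mean, t_map)
```

Both land between 0.5 and 0.6: the data pull the estimate toward 60% heads, but the strong prior keeps it close to fair.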
What if we got 99 heads out of 100 tosses?
N = 99
likelihood = (t**N)*((1-t)**(100-N))
figure(2)
plot(t, likelihood)
title("Likelihood")
posterior = likelihood*prior
posterior /= sum(posterior)
figure(3)
plot(t, posterior)
title("Posterior")
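To quantify how decisively 99 heads overwhelms the prior, a self-contained sketch computing the posterior mean for this case:

```python
import numpy as np
from scipy import stats

t = np.arange(.01, 1, .01)
prior = .1*stats.norm.pdf(t, loc=.5, scale=1) + .9*stats.norm.pdf(t, loc=.5, scale=.035)
prior /= prior.sum()

likelihood = t**99 * (1-t)**1  # 99 heads out of 100 tosses
posterior = likelihood*prior
posterior /= posterior.sum()

post_mean = (t*posterior).sum()
print(post_mean)
```

The likelihood at t = 0.5 is astronomically small, so even the 90% prior weight on "fair" cannot hold the posterior there; the mean jumps above 0.9.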
What about 10 out of 10?
N = 10
likelihood = (t**N)*((1-t)**(10-N))
figure(2)
plot(t, likelihood)
title("Likelihood")
posterior = likelihood*prior
posterior /= sum(posterior)
figure(3)
plot(t, posterior)
title("Posterior")
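The plot alone hides how much posterior mass has moved toward heavy bias. A self-contained sketch that recomputes this case and asks for the posterior probability that the bias exceeds 0.7 (the 0.7 cutoff is an arbitrary choice for illustration):

```python
import numpy as np
from scipy import stats

t = np.arange(.01, 1, .01)
prior = .1*stats.norm.pdf(t, loc=.5, scale=1) + .9*stats.norm.pdf(t, loc=.5, scale=.035)
prior /= prior.sum()

likelihood = t**10  # 10 heads out of 10 tosses: (1-t)**0 == 1
posterior = likelihood*prior
posterior /= posterior.sum()

p_biased = posterior[t > .7].sum()  # posterior probability of a heavy bias
print(p_biased)
```

Even after only ten tosses, more than half the posterior mass has escaped into the "heavily biased" region, carried there by the wide Gaussian.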
Do we really believe that the coin could be so biased? Maybe not. In that case, our prior was wrong: the wide Gaussian gave too much weight to large biases. Let's make the "wide" Gaussian narrower.
prior = .1*stats.norm.pdf(t, loc=.5, scale=.1) + .9*stats.norm.pdf(t, loc=.5, scale=.035)
prior /= sum(prior) #Make everything sum up to 1
figure(1)
plot(t,prior)
title("Prior")
N = 10
likelihood = (t**N)*((1-t)**(10-N))
figure(2)
plot(t, likelihood)
title("Likelihood")
posterior = likelihood*prior
posterior /= sum(posterior)
figure(3)
plot(t, posterior)
title("Posterior")
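With the tightened prior, the same query (again a self-contained sketch, using the same arbitrary 0.7 cutoff) shows that a 10-out-of-10 run no longer convinces us of a heavy bias:

```python
import numpy as np
from scipy import stats

t = np.arange(.01, 1, .01)
# Narrower "wide" component: scale .1 instead of 1.
prior = .1*stats.norm.pdf(t, loc=.5, scale=.1) + .9*stats.norm.pdf(t, loc=.5, scale=.035)
prior /= prior.sum()

likelihood = t**10  # 10 heads out of 10 tosses
posterior = likelihood*prior
posterior /= posterior.sum()

p_biased = posterior[t > .7].sum()
print(p_biased)
```

Now most of the posterior mass stays near fair: the same ten tosses that looked alarming under the old prior barely move our beliefs under the new one, which is exactly the point of choosing the prior carefully.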