Making weak learners stronger
Suppose you have a weak learning module (a “base
classifier”) that can always get at least 0.5 + ε of the
cases correct on any two-way classification task.
That seems like a weak assumption, but beware!
Can you apply this learning module many times to get a
strong learner that can get close to zero error rate on the
training data?
Theorists showed how to do this, and it actually led to
an effective new learning procedure: boosting (Freund &
Schapire, 1996).
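The idea can be illustrated with a minimal AdaBoost-style sketch (an illustrative toy, not the authors' exact procedure): the weak learners are 1-D threshold "stumps" that are only slightly better than chance on the weighted data, and the training cases a stump gets wrong are up-weighted before the next round. The function names and the toy dataset below are hypothetical.

```python
import math

def stump_predict(threshold, sign, x):
    # A "decision stump": predict +sign if x > threshold, else -sign.
    return sign if x > threshold else -sign

def best_stump(xs, ys, weights):
    # Exhaustively pick the threshold/sign pair with lowest weighted error.
    best = None
    for t in xs:
        for sign in (+1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if stump_predict(t, sign, x) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best  # (weighted error, threshold, sign)

def adaboost(xs, ys, rounds):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, sign)
    for _ in range(rounds):
        err, t, sign = best_stump(xs, ys, weights)
        err = max(err, 1e-12)  # guard against division by zero
        # A stump with weighted error < 0.5 gets a positive vote alpha.
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, sign))
        # Up-weight the cases this stump got wrong, down-weight the rest.
        weights = [w * math.exp(-alpha * y * stump_predict(t, sign, x))
                   for x, y, w in zip(xs, ys, weights)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return ensemble

def predict(ensemble, x):
    # The strong learner is a weighted vote of the weak learners.
    score = sum(a * stump_predict(t, s, x) for a, t, s in ensemble)
    return 1 if score > 0 else -1
```

On a toy labeling that no single stump can fit (e.g. xs = [0..5] with labels -1, -1, +1, +1, -1, -1), three rounds of reweighting already combine three chance-beating stumps into a classifier with zero training error.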