For many classification and regression problems, a large number of features are available for possible use - this is typical of DNA microarray data on gene expression, for example. Often, for computational or other reasons, only a small subset of these features is selected for use in a model, based on some simple measure such as correlation with the response variable. This procedure may introduce an optimistic bias, however, in which the response variable appears to be more predictable than it actually is, because the high correlation of the selected features with the response may be partly or wholly due to chance. We show how this bias can be avoided when using a Bayesian model for the joint distribution of features and response. The crucial insight is that even if we forget the exact values of the unselected features, we should retain, and condition on, the knowledge that their correlation with the response was too small for them to be selected. In this paper we describe how this idea can be implemented for ``naive Bayes'' models of binary data. Experiments with simulated data confirm that this method avoids bias due to feature selection. We also apply the naive Bayes model to subsets of data relating gene expression to colon cancer, and find that correcting for bias from feature selection does improve predictive performance.
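The selection bias described in the abstract is easy to reproduce in simulation. The sketch below (a hypothetical illustration, not the paper's corrected method) draws binary features that are truly independent of a binary response, selects the features most correlated with the response, and fits a minimal Bernoulli naive Bayes classifier to them. In-sample accuracy looks impressive, while accuracy on fresh data from the same null distribution stays near chance - exactly the optimistic bias the paper's conditioning argument is designed to remove.

```python
import numpy as np

rng = np.random.default_rng(0)

# Null model: binary features are completely independent of the binary
# response, so no classifier can truly beat chance (50%) on new data.
n_train, n_test, p, k = 40, 2000, 1000, 10
X_train = rng.integers(0, 2, size=(n_train, p))
y_train = rng.integers(0, 2, size=n_train)
X_test = rng.integers(0, 2, size=(n_test, p))
y_test = rng.integers(0, 2, size=n_test)

def abs_corr(X, y):
    """Absolute sample correlation of each feature column with y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    num = Xc.T @ yc
    den = np.sqrt((Xc**2).sum(axis=0) * (yc**2).sum()) + 1e-12
    return np.abs(num / den)

# Keep only the k features that happen to correlate most with the response.
selected = np.argsort(abs_corr(X_train, y_train))[-k:]

def fit_nb(X, y):
    """Bernoulli naive Bayes with Laplace smoothing."""
    theta = np.array([(X[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2)
                      for c in (0, 1)])
    prior = np.array([(y == 0).mean(), (y == 1).mean()])
    return theta, prior

def predict_nb(theta, prior, X):
    logp = (np.log(prior) + X @ np.log(theta).T
            + (1 - X) @ np.log(1 - theta).T)
    return logp.argmax(axis=1)

theta, prior = fit_nb(X_train[:, selected], y_train)
acc_train = (predict_nb(theta, prior, X_train[:, selected]) == y_train).mean()
acc_test = (predict_nb(theta, prior, X_test[:, selected]) == y_test).mean()

print(f"in-sample accuracy after selection: {acc_train:.2f}")  # looks impressive
print(f"accuracy on fresh data:             {acc_test:.2f}")   # near chance
```

Naively evaluating the model only on the data used for selection thus overstates how predictable the response is; the paper's remedy is to condition on the event that the discarded features had correlations too small to be selected, rather than pretending they were never observed.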
Technical Report No. 0705, Dept. of Statistics, University of Toronto (February 2007), 21 pages: postscript, pdf.
Also available from arXiv.org.
You can also get the software used for the tests in this paper.
Li, L., Zhang, J., and Neal, R. M. (2008) ``A method for avoiding bias from feature selection with application to naive Bayes classification models'', Bayesian Analysis, vol. 3, pp. 171-196: abstract, pdf. A version of this work also appears as Chapter 2 of Longhai Li's PhD thesis.