10 - Resampling methods
Published online by Cambridge University Press: 05 June 2012
Summary
However beautiful the strategy, you should occasionally look at the results.

Winston Churchill

Introduction
In this chapter we discuss several basic computational tools that have broad use in the data analysis community for a wide range of purposes: the class of sample reuse schemes, also known as resampling methods. These include cross-validation and the leave-one-out method.
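As a concrete sketch of the leave-one-out idea, each case is held out in turn, the rule is fit on the remaining cases, and the held-out case is used as a one-point test set. The `one_nn` classifier below is a hypothetical stand-in chosen only for illustration; it is not a method from the text.

```python
def loo_error(xs, ys, predict):
    """Leave-one-out: hold each case out in turn, fit on the rest,
    and count how often the held-out label is mispredicted."""
    errors = 0
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        errors += predict(train_x, train_y, xs[i]) != ys[i]
    return errors / len(xs)

def one_nn(train_x, train_y, x):
    """Hypothetical 1-nearest-neighbour rule, for illustration only."""
    j = min(range(len(train_x)), key=lambda k: abs(train_x[k] - x))
    return train_y[j]
```

On a toy two-class sample with well-separated groups, `loo_error([0.1, 0.2, 0.9, 1.1], [0, 0, 1, 1], one_nn)` returns 0.0, since every held-out point's nearest remaining neighbour shares its label.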
Another key example in this class is the bootstrap, and in this text we use it extensively for obtaining improved estimates of prediction error rates. The method itself is very simple and easily explained. Why it often works so well is not so simple to explain, but we make an effort, since improved comprehension in this area will help the reader apply the method in other data analysis contexts and sharpen awareness of its pitfalls.
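The basic mechanics really are simple: draw many samples of size n from the data with replacement, recompute the statistic on each, and use the spread of those replicates to estimate its variability. A minimal sketch (the function name and defaults are our own, not from the text):

```python
import random

def bootstrap_se(data, stat, n_boot=2000, seed=0):
    """Estimate the standard error of stat(data) by resampling the
    data with replacement n_boot times and taking the standard
    deviation of the statistic across the bootstrap replicates."""
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(n_boot):
        sample = [data[rng.randrange(n)] for _ in range(n)]
        reps.append(stat(sample))
    mean = sum(reps) / n_boot
    var = sum((r - mean) ** 2 for r in reps) / (n_boot - 1)
    return var ** 0.5
```

For the sample mean this should land near the textbook standard error s/√n, which is one way to see that the scheme is doing something sensible before trusting it on statistics with no closed-form variance.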
We next consider much more computationally intensive methods for improving on the apparent error rate, and for finding low-bias, low-variance estimates of almost anything.
Many of the ideas discussed here are ones we have already needed to invoke in previous chapters, for example in discussing the Random Forests approach to error estimation and to variable importance measures. There is some overlap, but this chapter provides additional detail and also serves as a starting point for these ideas.
The bootstrap
The bootstrap method is an instance of data resampling. When applied to error analysis it can be seen as a generalization of the cross-validation and leave-one-out schemes, both of which are also discussed here.
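One way to see the connection to cross-validation: each bootstrap sample omits, on average, about 36.8% of the cases, and those omitted cases can serve as a test set for the rule fit on the sample, much as a held-out fold does. A hedged sketch of this idea follows; the `one_nn` classifier and the function names are illustrative assumptions, not the book's method.

```python
import random

def one_nn(train_x, train_y, x):
    """Hypothetical 1-nearest-neighbour rule, for illustration only."""
    j = min(range(len(train_x)), key=lambda k: abs(train_x[k] - x))
    return train_y[j]

def bootstrap_error(xs, ys, fit_predict, n_boot=100, seed=0):
    """Average misclassification rate over the left-out cases:
    each replicate trains on a with-replacement sample of size n
    and tests on the cases that sample happened to miss."""
    rng = random.Random(seed)
    n = len(xs)
    wrong, total = 0, 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        in_bag = set(idx)
        train_x = [xs[i] for i in idx]
        train_y = [ys[i] for i in idx]
        for i in range(n):
            if i not in in_bag:
                wrong += fit_predict(train_x, train_y, xs[i]) != ys[i]
                total += 1
    return wrong / total
```

Unlike leave-one-out, which tests each case exactly once, this scheme tests each case in roughly a third of the replicates, trading a little bias for much lower variance in the error estimate.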
Statistical Learning for Biomedical Data, pp. 198–214. Cambridge University Press. Print publication year: 2011.