Published online by Cambridge University Press: 05 July 2014
Machine learning has led to many promising tools for intelligent data filtering, processing, and interpretation. Naturally, proper metrics are required in order to objectively evaluate the performance of such tools. To this end, this chapter addresses the following subjects.
It is commonly agreed that the testing accuracy, rather than the training accuracy, serves as the more reasonable metric for the performance evaluation of a learned classifier. Section A.1 discusses several cross-validation (CV) techniques for evaluating the classification performance of the learned models.
Section A.2 explores two important test schemes: the hypothesis test and the significance test.
Suppose that the dataset under consideration has N samples to be used for training the classifier model and/or estimating the classification accuracy. Before the training phase starts, a subset of the dataset must be set aside as the testing dataset. The class labels of the test patterns are assumed to be unknown during the learning phase; these labels are revealed only during the testing phase, where they provide the ground truth needed to evaluate the classifier's performance.
Some evaluation/validation methods are presented as follows.
(i) Holdout validation. N' (N' < N) samples are randomly selected from the dataset for training a classifier, and the remaining N – N' samples are used for evaluating the accuracy of the classifier. Typically, N' is about two-thirds of N. By completely separating the training data from the testing data, holdout validation avoids the optimistic bias that arises in re-substitution, where the same samples are used for both training and evaluation.
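The holdout procedure described above can be sketched as follows. This is a minimal illustration, not code from the text; the function name `holdout_split`, the NumPy-based implementation, and the default two-thirds training fraction (matching the typical choice of N' noted above) are all assumptions for the sake of example.

```python
import numpy as np

def holdout_split(X, y, train_fraction=2/3, seed=0):
    """Randomly partition (X, y) into disjoint training and testing subsets.

    N' = round(train_fraction * N) samples go to training; the
    remaining N - N' samples are held out for evaluation.
    """
    rng = np.random.default_rng(seed)
    N = len(X)
    perm = rng.permutation(N)          # random order of sample indices
    n_train = int(round(train_fraction * N))
    train_idx, test_idx = perm[:n_train], perm[n_train:]
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```

A classifier would then be trained on the first pair of arrays and its accuracy estimated on the second, with the test labels consulted only at evaluation time.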