Multiple comparison methods are described. It is noted that they have always been controversial, partly because they emphasize testing at the expense of estimation, partly because they pay no regard to the purpose of the investigation, partly because there are so many competing forms and, not least, because they can lead to illogical conclusions. Many instances have been identified in which they have proved misleading.
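The illogical conclusions referred to above commonly take the form of intransitive groupings: treatment A is not distinguished from B, and B is not distinguished from C, yet A is declared different from C. The following sketch is only an illustration, not a method from the text: it uses constructed data and a simple Bonferroni adjustment of pairwise pooled t-tests (Python with NumPy/SciPy) to reproduce that awkward pattern.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Constructed data: three groups whose means rise in equal steps of 0.8
# and whose within-group variance is exactly 0.75 by construction.
pattern = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1], dtype=float)
samples = {"A": 0.0 + pattern, "B": 0.8 + pattern, "C": 1.6 + pattern}

# Pairwise pooled t-tests with a Bonferroni adjustment (3 comparisons).
adjusted = {}
for g1, g2 in combinations(samples, 2):
    t_stat, p_raw = stats.ttest_ind(samples[g1], samples[g2])
    adjusted[(g1, g2)] = min(1.0, 3 * p_raw)

# A vs B and B vs C are "not significant" at the 5% level, yet A vs C
# is: the familiar but logically awkward grouping A=B, B=C, A!=C.
```

The paradox is not a fault of the arithmetic; it follows from treating pairwise non-rejections as statements of equality.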
An alternative approach is to designate ‘contrasts of interest’ from the start and to concentrate estimation and testing upon them. In many experiments the approach is powerful and definite in use, but sometimes there is no reason to designate one contrast rather than another, for example, in the assessment of new strains or new chemicals. In such circumstances some have found multiple comparisons useful, especially when the problem is to ‘pick the winner’. Bayesian methods and cluster analysis are considered briefly as other alternatives.
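The designated-contrast approach can be made concrete in a short sketch. Assuming, purely for illustration, a completely randomised experiment with a control and two new treatments, the contrast of interest (control versus the average of the treatments) is specified in advance and then estimated and tested directly; the data below are constructed, and the code (Python with NumPy/SciPy) is a sketch rather than a prescribed procedure.

```python
import numpy as np
from scipy import stats

# Constructed data: each group has the stated mean and within-group
# variance 0.75 by construction.
pattern = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1], dtype=float)
groups = [10.0 + pattern,   # control
          11.2 + pattern,   # treatment 1
          12.0 + pattern]   # treatment 2

# Contrast of interest, designated before the data are seen:
# control versus the average of the two treatments.
c = np.array([1.0, -0.5, -0.5])

means = np.array([g.mean() for g in groups])
n = np.array([len(g) for g in groups])

# Pooled within-group variance from the one-way ANOVA residuals.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df = int(n.sum()) - len(groups)
s2 = ss_within / df

estimate = c @ means                      # estimated contrast value
se = np.sqrt(s2 * np.sum(c ** 2 / n))     # its standard error
t_stat = estimate / se
p_value = 2 * stats.t.sf(abs(t_stat), df)
half_width = stats.t.ppf(0.975, df) * se  # 95% CI half-width
```

Estimation comes first here: the contrast value and its confidence interval carry the substantive conclusion, and the single t-test needs no multiplicity adjustment because only the one designated comparison is made.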
The current over-use of multiple comparisons is deplored. It is thought to arise in part from bad teaching and in part from the general reluctance of non-statisticians to venture into the unknown territory of specifying contrasts. A bad situation is made worse by the availability of software that carries out multiple comparisons as a matter of course.