  • Print publication year: 2006
  • Online publication date: January 2010

3 - Cluster analysis



Cluster analysis is an approach that finds structure in data by identifying natural groupings (clusters) in the data. Unfortunately, ‘natural groupings’ is not as well defined as we might hope; indeed, there is usually more than one natural grouping for any collection of data. As we will see, there is no definitive cluster analysis technique; instead, the term covers a rather loose collection of algorithms that group similar objects into categories (clusters). Although some clustering algorithms have been available in ‘standard’ statistical software packages for many years, they are rarely used for formal significance testing. Instead, they should be viewed as exploratory data analysis (EDA) tools because they are generally used to generate, rather than test, hypotheses about data structures.

A cluster is simply a collection of cases that are more ‘similar’ to each other than they are to cases in other clusters. This intentionally vague definition is common; for example, Sneath and Sokal (1973) noted that such vagueness was inevitable given the multiplicity of different definitions, while Kaufman and Rousseeuw (1990) referred to cluster analysis as the ‘art of finding groups’.
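The idea of cases that are more similar to each other than to cases in other clusters can be made concrete with a small sketch. The code below is illustrative only, not a method from this chapter: it implements a minimal one-dimensional k-means, where ‘similarity’ is simply proximity on the number line, and the toy data, the choice of k = 2, and the function name are all invented for demonstration.

```python
def kmeans_1d(points, k=2, iterations=20):
    """Group 1-D points into k clusters of mutually similar (nearby) values."""
    # Spread the initial cluster centres evenly across the data range.
    lo, hi = min(points), max(points)
    centres = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: each case joins the cluster whose centre it is
        # most 'similar' to (here, nearest in absolute distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Update step: move each centre to the mean of its cluster.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return clusters

# A toy data set with two 'natural groupings': values near 2 and near 10.
data = [1.0, 1.5, 2.0, 2.5, 9.0, 9.5, 10.0, 10.5]
print(kmeans_1d(data))  # → [[1.0, 1.5, 2.0, 2.5], [9.0, 9.5, 10.0, 10.5]]
```

Note that the result depends on the choice of k and the starting centres, which echoes the point above: the data admit more than one ‘natural grouping’, and the algorithm does not decide which one is correct.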

If an analysis produces obvious clusters it may be possible to name them and summarise the cluster characteristics. Consequently, the biggest gains are likely to come in knowledge-poor environments, particularly when there are large amounts of unlabelled data. Indeed, clustering techniques can be viewed as a way of generating taxonomies for the classification of objects.