
5 - Unsupervised learning for cluster discovery

from Part III - Unsupervised learning models for cluster analysis

Introduction

The objective of cluster discovery is to subdivide a given set of training data, X ≡ {x1, x2, …, xN}, into a number (say K) of subgroups. Even when the class labels of the training vectors are unknown, useful information may still be extracted from the training dataset to facilitate pattern recognition and statistical data analysis. Unsupervised learning models have long been adopted to systematically partition training datasets into disjoint groups, a process considered instrumental for the classification of new patterns. This chapter focuses on conventional clustering strategies based on the Euclidean distance metric. More specifically, it covers the following unsupervised learning models for cluster discovery.

• Section 5.2 introduces two key factors – the similarity metric and the clustering strategy – that dictate the performance of unsupervised cluster discovery; a short distance-metric sketch follows this list.

• Section 5.3 starts with the basic criterion and develops the iterative procedure of the K-means algorithm, a common tool for cluster analysis; a code sketch of the procedure appears after this list. The convergence property of the K-means algorithm will be established.

• Section 5.4 extends the basic K-means algorithm to the more flexible and versatile expectation-maximization (EM) clustering algorithm, sketched after this list. Again, the convergence property of the EM algorithm will be treated.

• Section 5.5 further considers the topological property of the clusters, leading to the well-known self-organizing map (SOM); see the sketch after this list.

• Section 5.6 discusses bi-clustering methods, which allow simultaneous clustering of the rows and columns of a data matrix; a toy sketch closes the set of examples below.
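
To make the chapter's working assumption concrete, the sketch below (ours, not the book's) computes all pairwise squared Euclidean distances for a set of training vectors stored row-wise; the function name pairwise_sq_euclidean is illustrative.

```python
# A minimal sketch of the Euclidean similarity metric assumed throughout
# this chapter: squared distances between all pairs of training vectors.
import numpy as np

def pairwise_sq_euclidean(X):
    """Return the N x N matrix of squared Euclidean distances between
    the rows of X (one training vector per row)."""
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>
    sq_norms = np.sum(X ** 2, axis=1)
    D = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
    return np.maximum(D, 0.0)  # clip tiny negatives caused by round-off

if __name__ == "__main__":
    X = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
    print(pairwise_sq_euclidean(X))  # e.g. D[0, 1] == 25.0
```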
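
The two-step iteration of K-means developed in Section 5.3, membership assignment followed by centroid update, can be sketched in a few lines of numpy. This is a minimal illustration under the squared-Euclidean criterion, not the book's reference implementation; the names k_means, n_iters, and seed are ours.

```python
# A minimal K-means sketch under the squared-Euclidean criterion.
import numpy as np

def k_means(X, K, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids with K distinct training vectors.
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(n_iters):
        # Step 1: assign each vector to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Step 2: move each centroid to the mean of its members.
        new_centroids = np.array([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
            for k in range(K)
        ])
        if np.allclose(new_centroids, centroids):
            break  # assignments and centroids have stabilized
        centroids = new_centroids
    return labels, centroids
```

Because each of the two steps can only decrease the total squared distance from the vectors to their centroids, the iteration terminates; this monotone decrease is the essence of the convergence property that Section 5.3 establishes.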
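
Section 5.4's EM clustering admits a similarly compact sketch for the special case of a spherical Gaussian mixture; the single-variance-per-cluster parameterization is an assumption made here for brevity, and the chapter's treatment may be more general.

```python
# A hedged sketch of EM clustering with a spherical Gaussian mixture.
import numpy as np

def em_gmm_spherical(X, K, n_iters=100, seed=0):
    N, d = X.shape
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(N, size=K, replace=False)]  # cluster means
    var = np.full(K, X.var() + 1e-6)              # one variance per cluster
    pi = np.full(K, 1.0 / K)                      # mixing weights
    for _ in range(n_iters):
        # E-step: responsibilities (posterior cluster probabilities).
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (N, K)
        log_p = np.log(pi) - 0.5 * d * np.log(2 * np.pi * var) - sq / (2 * var)
        log_p -= log_p.max(axis=1, keepdims=True)  # numerical stabilization
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances in closed form.
        Nk = resp.sum(axis=0)                      # effective cluster sizes
        pi = Nk / N
        mu = (resp.T @ X) / Nk[:, None]
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        var = (resp * sq).sum(axis=0) / (d * Nk) + 1e-12
    return resp.argmax(axis=1), mu
```

Note how the hard membership assignment of K-means is replaced by soft responsibilities in the E-step; K-means can be viewed as the limiting case in which each responsibility collapses to 0 or 1.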
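
For the SOM of Section 5.5, the sketch below trains a 2-D grid of prototype nodes so that nearby nodes come to represent nearby inputs; the linearly decaying learning-rate and neighborhood-width schedules are illustrative choices, not prescribed by the text.

```python
# A minimal self-organizing map (SOM) sketch with a 2-D node grid.
import numpy as np

def train_som(X, grid_shape=(5, 5), n_epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    # Node weights: one prototype vector per grid node.
    W = rng.standard_normal((rows * cols, X.shape[1]))
    # Grid coordinates of each node, used for the topological neighborhood.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    T = n_epochs * len(X)
    t = 0
    for _ in range(n_epochs):
        for x in rng.permutation(X):
            lr = lr0 * (1.0 - t / T)              # decaying learning rate
            sigma = sigma0 * (1.0 - t / T) + 1e-3  # shrinking neighborhood
            # Find the best-matching unit (nearest node in input space).
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))
            # Gaussian neighborhood on the grid, centered at the BMU.
            grid_dist = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-grid_dist / (2.0 * sigma ** 2))
            # Pull every node toward x, weighted by its grid proximity.
            W += lr * h[:, None] * (x - W)
            t += 1
    return W.reshape(rows, cols, -1)
```

The key difference from K-means is the neighborhood function h: updating grid neighbors of the winner, not just the winner itself, is what endows the map with the topological property discussed in Section 5.5.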
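
Finally, one simple way to realize the simultaneous row and column clustering of Section 5.6 is to alternate between re-clustering rows and columns against the current block means. The scheme below is a toy variant of this idea, not necessarily any of the specific methods covered in the chapter; all names are ours.

```python
# A toy two-way (bi-)clustering sketch: rows and columns of the data
# matrix A are alternately reassigned to best-fitting block-mean profiles.
import numpy as np

def bicluster(A, K_rows, K_cols, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    r = rng.integers(K_rows, size=A.shape[0])   # row-cluster labels
    c = rng.integers(K_cols, size=A.shape[1])   # column-cluster labels
    for _ in range(n_iters):
        # Block means: average of A over each (row-cluster, col-cluster) block.
        M = np.zeros((K_rows, K_cols))
        for i in range(K_rows):
            for j in range(K_cols):
                block = A[np.ix_(r == i, c == j)]
                M[i, j] = block.mean() if block.size else 0.0
        # Reassign each row to the row cluster whose block profile fits best.
        col_sums = np.array([A[:, c == j].sum(axis=1) for j in range(K_cols)]).T
        col_cnts = np.array([(c == j).sum() for j in range(K_cols)])
        row_prof = col_sums / np.maximum(col_cnts, 1)        # (N, K_cols)
        r = ((row_prof[:, None, :] - M[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        # Reassign each column symmetrically.
        row_sums = np.array([A[r == i, :].sum(axis=0) for i in range(K_rows)]).T
        row_cnts = np.array([(r == i).sum() for i in range(K_rows)])
        col_prof = row_sums / np.maximum(row_cnts, 1)        # (D, K_rows)
        c = ((col_prof[:, None, :] - M.T[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    return r, c
```

Unlike the one-way methods above, both index sets of the data matrix are clustered at once, so each pattern is characterized by the block it falls in rather than by a single cluster label.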