Book contents
- Frontmatter
- Contents
- Acknowledgements
- List of contributors
- Foreword
- 1 Introduction
- 2 On-line Learning and Stochastic Approximations
- 3 Exact and Perturbation Solutions for the Ensemble Dynamics
- 4 A Statistical Study of On-line Learning
- 5 On-line Learning in Switching and Drifting Environments with Application to Blind Source Separation
- 6 Parameter Adaptation in Stochastic Optimization
- 7 Optimal On-line Learning in Multilayer Neural Networks
- 8 Universal Asymptotics in Committee Machines with Tree Architecture
- 9 Incorporating Curvature Information into On-line Learning
- 10 Annealed On-line Learning in Multilayer Neural Networks
- 11 On-line Learning of Prototypes and Principal Components
- 12 On-line Learning with Time-Correlated Examples
- 13 On-line Learning from Finite Training Sets
- 14 Dynamics of Supervised Learning with Restricted Training Sets
- 15 On-line Learning of a Decision Boundary with and without Queries
- 16 A Bayesian Approach to On-line Learning
- 17 Optimal Perceptron Learning: an On-line Bayesian Approach
11 - On-line Learning of Prototypes and Principal Components
Published online by Cambridge University Press: 28 January 2010
Abstract
We review our recent investigation of on-line unsupervised learning from high-dimensional structured data. First, on-line competitive learning is studied as a method for the identification of prototype vectors from overlapping clusters of examples. Specifically, we analyse the dynamics of the well-known winner-takes-all or K-means algorithm. As a second standard learning technique, the application of Sanger's rule for principal component analysis is investigated. In both scenarios the necessary process of student specialization may be delayed significantly due to underlying symmetries.
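The two learning rules named above can be written as simple on-line updates: winner-takes-all moves only the prototype nearest to each incoming example, and Sanger's rule (the generalized Hebbian algorithm) extracts principal components via Hebbian growth with sequential deflation. The sketch below is illustrative only; the function names, step sizes, and toy data are assumptions, not the chapter's own notation or experimental setup.

```python
import numpy as np

def wta_step(W, xi, eta):
    """On-line winner-takes-all (K-means) update: each row of W is a
    prototype; only the prototype closest to the example xi is moved."""
    k = np.argmin(np.sum((W - xi) ** 2, axis=1))  # index of the winner
    W[k] += eta * (xi - W[k])                     # move winner toward xi
    return W

def sanger_step(W, xi, eta):
    """One on-line step of Sanger's rule (generalized Hebbian algorithm).
    Row m of W estimates the m-th principal component; the lower-triangular
    mask implements deflation against earlier components:
        dw_m = eta * y_m * (x - sum_{j<=m} y_j w_j)."""
    y = W @ xi                                    # outputs y_m = w_m . x
    lower = np.tril(np.ones((len(y), len(y))))    # mask keeps j <= m terms
    W += eta * (np.outer(y, xi) - (lower * np.outer(y, y)) @ W)
    return W
```

As a usage illustration, feeding `sanger_step` a stream of zero-mean examples whose variance is largest along one axis drives the first row of `W` toward the leading eigenvector of the input covariance, while repeated `wta_step` calls pull each winning prototype toward the mean of the examples it captures.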
Introduction
Methods from statistical physics have been applied to the theory of adaptive systems with great success in recent years. Perhaps the most prominent example is the analysis of feedforward neural networks which can learn from example data. The statistical mechanics approach makes it possible to investigate typical properties of very large systems on average over the randomness contained in the data. It complements results from computational learning theory and other disciplines.
Most of the investigations concern the supervised learning of a rule. For reviews of the field see for instance (Watkin et al., 1993; Opper and Kinzel, 1996). A particularly successful line of research was initiated in (Kinzel and Rujan, 1990; Kinouchi and Caticha, 1992) and aims at analysing the physics of on-line learning schemes (Amari, 1967 and 1993; Hertz et al., 1991). On-line learning is attractive from a practical point of view because it uses only the latest example in the sequence. Obviously, storage needs and computational effort are reduced in comparison with batch or off-line learning (Hertz et al., 1991).
On-Line Learning in Neural Networks, pp. 231-250. Publisher: Cambridge University Press. Print publication year: 1999.