11 - Learning equations
Published online by Cambridge University Press: 05 June 2012
Summary
Neurons in the central nervous system form a complex network with a high degree of plasticity. In the previous chapter we discussed synaptic plasticity from a phenomenological point of view. We now ask: what are the consequences for the connectivity between neurons if synapses are plastic? To do so, we consider a scenario known as unsupervised learning. We assume that some of the neurons in the network are stimulated by input with certain statistical properties. Synaptic plasticity generates changes in the connectivity pattern that reflect the statistical structure of the input. The relationship between the input statistics and the synaptic weights that evolve due to Hebbian plasticity is the topic of this chapter. We start in Section 11.1 with a review of unsupervised learning in a rate-coding paradigm. The extension of the analysis to spike-time-dependent synaptic plasticity is made in Section 11.2. We will see that spike-based learning naturally accounts for spatial and temporal correlations in the input and can overcome some of the problems of a simple rate-based learning rule.
Learning in rate models
We would like to understand how activity-dependent learning rules influence the formation of connections between neurons in the brain. We will see that plasticity is controlled by the statistical properties of the presynaptic input impinging on the postsynaptic neuron. Before we delve into the analysis of the elementary Hebb rule, we therefore need to recapitulate a few results from statistics and linear algebra.
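The link between input statistics and Hebbian weight dynamics can be illustrated with a minimal numerical sketch. The following example (an illustration under assumed toy data, not the chapter's own derivation) uses a linear rate neuron and Oja's normalized variant of the Hebb rule; the weight vector then converges toward the principal eigenvector of the input correlation matrix, which is the classic rate-based result.

```python
import numpy as np

# Sketch: a linear rate neuron v = w . x trained with Oja's rule,
#   dw = eta * v * (x - v * w),
# i.e. a Hebbian term v*x plus a decay that keeps |w| bounded.
# Input data and parameters below are assumed for illustration.

rng = np.random.default_rng(0)

# Inputs with a dominant correlation direction: covariance ~ C
C = np.array([[2.0, 1.2],
              [1.2, 1.0]])
L = np.linalg.cholesky(C)
x = rng.standard_normal((10000, 2)) @ L.T

w = rng.standard_normal(2)
eta = 0.005  # learning rate (assumed)
for xi in x:
    v = w @ xi                    # postsynaptic rate (linear neuron)
    w += eta * v * (xi - v * w)   # Oja's rule: Hebb term + normalization

# Compare the learned weights with the principal eigenvector of C
eigvals, eigvecs = np.linalg.eigh(C)
pc = eigvecs[:, -1]               # eigenvector of the largest eigenvalue
alignment = abs(w @ pc) / np.linalg.norm(w)
print(alignment)
```

Running the loop long enough drives `alignment` close to 1 and `|w|` close to 1, showing how the synaptic weights come to encode the dominant correlation structure of the input, which is the phenomenon analyzed in Section 11.1.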
- Spiking Neuron Models: Single Neurons, Populations, Plasticity, pp. 387-420. Publisher: Cambridge University Press. Print publication year: 2002.