
7 - Effects of network structure on associative memory

from Part II - The use of artificial neural networks to elucidate the nature of perceptual processes in animals

Introduction

The brain supports many functions, such as memory, learning, awareness and thinking. These functions arise from the activity of neurons that are connected to one another in the brain. Many models have been proposed to reproduce memory in the brain, and the Hopfield model is among the most widely studied (Hopfield, 1982). The Hopfield model was proposed to reproduce associative memory, and it has been studied extensively by physicists because of its similarity to the Ising model of spin glasses. Its properties have been analysed in detail; for example, the storage capacity has been calculated with the replica method (Amit, 1989; Hertz et al., 1991). However, these studies assume completely connected networks, i.e. each neuron is connected to every other neuron, and until recently it was not clear how the properties of the model depend on the pattern of connections between neurons (Tosh & Ruxton, 2006a, 2006b).
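To make the model concrete, a minimal sketch of Hopfield-style associative memory is given below (Python, assuming NumPy; the network size, number of stored patterns and amount of cue noise are arbitrary illustrative choices, not values from this chapter). Binary patterns are stored with the Hebbian rule and one pattern is then retrieved from a corrupted cue by repeated asynchronous sign updates.

    import numpy as np

    def train_hopfield(patterns):
        """Hebbian learning: W = (1/N) * sum of outer products, with zero diagonal."""
        n_patterns, n_units = patterns.shape
        W = patterns.T @ patterns / n_units
        np.fill_diagonal(W, 0.0)          # no self-connections
        return W

    def recall(W, state, n_sweeps=20, seed=None):
        """Asynchronous dynamics: set each unit to the sign of its local field."""
        rng = np.random.default_rng(seed)
        state = state.copy()
        n_units = len(state)
        for _ in range(n_sweeps):
            for i in rng.permutation(n_units):
                h = W[i] @ state          # local field at unit i
                state[i] = 1 if h >= 0 else -1
        return state

    # Store two random +/-1 patterns and retrieve one from a noisy cue.
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(2, 100))
    W = train_hopfield(patterns)
    cue = patterns[0].copy()
    flipped = rng.choice(100, size=15, replace=False)   # corrupt 15 of 100 bits
    cue[flipped] *= -1
    retrieved = recall(W, cue, seed=1)
    print("overlap with stored pattern:", retrieved @ patterns[0] / 100)

With few stored patterns relative to the network size, the printed overlap is close to 1, i.e. the noisy cue converges back to the stored pattern; as more patterns are stored, retrieval degrades, which is precisely the storage-capacity question analysed by the replica method.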

In recent years the study of complex networks has attracted much attention. A network consists of nodes and links. A node is a site or point in the network, such as a neuron; nodes are connected by links, such as the axons or synapses between neurons. Several characteristic network structures have been proposed, and small-world and scale-free networks have been studied particularly intensively in recent years. Small-world networks combine a very short characteristic path length with a large clustering coefficient (Watts & Strogatz, 1998).
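These two quantities can be made concrete with a short sketch (Python, assuming the networkx library; the network size, neighbourhood size and rewiring probabilities are arbitrary illustrative values). It builds Watts–Strogatz graphs at several rewiring probabilities p and reports the characteristic path length and the clustering coefficient.

    import networkx as nx

    # Compare a regular ring lattice (p = 0) with partially and fully rewired graphs.
    # Node count, neighbourhood size and rewiring probabilities are illustrative choices.
    for p in (0.0, 0.1, 1.0):
        G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=p, seed=42)
        L = nx.average_shortest_path_length(G)   # characteristic path length
        C = nx.average_clustering(G)             # clustering coefficient
        print(f"p = {p:3.1f}:  path length L = {L:6.2f},  clustering C = {C:5.3f}")

For intermediate p the path length is already close to that of a random graph while the clustering coefficient remains close to that of the regular lattice; this combination is what defines the small-world regime (Watts & Strogatz, 1998).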

References
Albert, R. & Barabási, A. L. 2002. Statistical mechanics of complex networks. Rev Mod Phys 74, 47–97.
Amit, D. J. 1989. Modeling Brain Function: The World of Attractor Neural Networks. Cambridge University Press.
Barabási, A. L. & Albert, R. 1999. Emergence of scaling in random networks. Science 286, 509–512.
Barabási, A. L., Albert, R. & Jeong, H. 1999. Mean-field theory for scale-free random networks. Physica A 272, 173–187.
Boccaletti, S., Latora, V., Moreno, Y., Chavez, M. & Hwang, D. 2006. Complex networks: structure and dynamics. Phys Rep 424, 175–308.
Bohland, J. W. & Minai, A. A. 2001. Efficient associative memory using small-world architecture. Neurocomputing 38–40, 489–496.
Dorogovtsev, S. N., Mendes, J. F. F. & Samukhin, A. N. 2000. Structure of growing networks with preferential linking. Phys Rev Lett 85, 4633–4636.
Eguíluz, V. M., Chialvo, D. R., Cecchi, G. A., Baliki, M. & Apkarian, A. V. 2005. Scale-free brain functional networks. Phys Rev Lett 94, 018102.
Forrest, B. M. & Wallace, D. J. 1991. Storage capacity and learning in Ising-spin neural networks. In Models of Neural Networks (ed. Domany, E., Hemmen, J. L. & Schulten, K.), pp. 121–148. Springer-Verlag.
Hertz, J., Krogh, A. & Palmer, R. G. 1991. Introduction to the Theory of Neural Computation. Addison-Wesley Publishing Co.
Hopfield, J. J. 1982. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79, 2554–2558.
Kim, B. J. 2004. Performance of networks of artificial neurons: the role of clustering. Phys Rev E 69, 045101.
Lu, J., He, J., Cao, J. & Gao, Z. 2006. Topology influences performance in the associative memory neural networks. Phys Lett A 354, 335–343.
McCulloch, W. S. & Pitts, W. 1990. A logical calculus of the ideas immanent in nervous activity. Bull Math Biol 52, 99–115.
McGraw, P. N. & Menzinger, M. 2003. Topology and computational performance of attractor neural networks. Phys Rev E 68, 047102.
Morelli, L. G., Abramson, G. & Kuperman, M. N. 2004. Associative memory on a small-world neural network. Eur Phys J B 38, 495–500.
Newman, M. E. J. 2003. The structure and function of complex networks. SIAM Rev 45, 167–256.
Oshima, H. & Odagaki, T. 2007. Storage capacity and retrieval time of small-world neural networks. Phys Rev E 76, 036114.
Oshio, K., Iwasaki, Y., Morita, S. et al. 2003. Database of Synaptic Connectivity of C. elegans for Computation. Technical Report of CCeP, Keio Future No. 3. Keio University. http://www.bio.keio.ac.jp/ccep/
Stauffer, D., Aharony, A., Costa, L. D. & Adler, J. 2003. Efficient Hopfield pattern recognition on a scale-free neural network. Eur Phys J B 32, 395–399.
Strogatz, S. H. 2003. Sync: The Emerging Science of Spontaneous Order. Hyperion.
Tosh, C. R. & Ruxton, G. D. 2006a. Artificial neural network properties associated with wiring patterns in the visual projections of vertebrates and arthropods. Am Nat 168, E38–E52.
Tosh, C. R. & Ruxton, G. D. 2006b. Introduction. The use of artificial neural networks to study perception in animals. Phil Trans R Soc B 362, 337–338.
Watts, D. J. & Strogatz, S. H. 1998. Collective dynamics of ‘small-world’ networks. Nature 393, 440–442.