
An entropy concentration theorem: applications in artificial intelligence and descriptive statistics

Published online by Cambridge University Press: 14 July 2016

Claudine Robert*
Affiliation: Faculté de Médecine de Grenoble
*Postal address: Laboratoire de Biostatistiques, Faculté de Médecine, Domaine de la Merci, 38700 La Tronche, France.

Abstract

The maximum entropy principle is used to model uncertainty by a maximum entropy distribution, subject to appropriate linear constraints. We give an entropy concentration theorem (whose proof is based on large deviation techniques) which provides a mathematical justification of this statistical modelling principle. We then indicate how it can be used in artificial intelligence, and how relevant prior knowledge is provided by some classical descriptive statistical methods. Furthermore, the maximum entropy principle yields a natural link between descriptive methods and certain statistical structures.
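As a point of reference, the following is a minimal sketch of the standard maximum entropy formulation the abstract alludes to, for a discrete distribution p = (p_1, ..., p_n) subject to moment-type linear constraints; the notation is illustrative and is not taken from the paper.

\[
\max_{p}\; H(p) = -\sum_{i=1}^{n} p_i \log p_i
\quad\text{subject to}\quad
\sum_{i=1}^{n} p_i = 1, \qquad
\sum_{i=1}^{n} p_i f_k(x_i) = F_k, \quad k = 1, \dots, m,
\]

whose solution, obtained by Lagrange multipliers, has the exponential-family form

\[
p_i \;\propto\; \exp\!\Big(\sum_{k=1}^{m} \lambda_k f_k(x_i)\Big).
\]

A concentration theorem of the kind given in the paper then states, roughly, that among all sequences of N observations whose empirical frequencies satisfy the same constraints, the overwhelming majority have an empirical distribution close to this maximizer as N grows; this is the large-deviation justification referred to above.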

Type: Research Papers
Copyright © Applied Probability Trust 1990

