Book contents
- Frontmatter
- Dedication
- Contents
- List of Figures
- Preface and Acknowledgments
- Introduction: Abstract Models of Learning
- 1 Consistency and Symmetry
- 2 Bounded Rationality
- 3 Pattern Learning
- 4 Large Worlds
- 5 Radical Probabilism
- 6 Reflection
- 7 Disagreement
- 8 Consensus
- Appendix A Inductive Logic
- Appendix B Partial Exchangeability
- Appendix C Marley's Axioms
- Bibliography
- Index
3 - Pattern Learning
Published online by Cambridge University Press: 25 October 2017
Summary
As I have already indicated, we may think of a system of inductive logic as a design for a “learning machine”: that is to say, a design for a computing machine that can extrapolate certain kinds of empirical regularities from the data with which it is supplied. Then the criticism of the so-far-constructed “c-functions” is that they correspond to “learning machines” of very low power. They can extrapolate the simplest possible empirical generalizations, for example: “approximately nine-tenths of the balls are red,” but they cannot extrapolate so simple a regularity as “every other ball is red.”
Hilary Putnam, Probability and Confirmation

To approach the type of reflection that seems to characterize inductive reasoning as encountered in practical circumstances, we must widen the scheme and also consider partial exchangeability.
Bruno de Finetti, Probability, Statistics and Induction

One of the main criticisms of Carnap's inductive logic that Hilary Putnam raised – alluded to in the epigraph – is that it fails in situations where inductive inference ought to go beyond relative frequencies. It is a little ironic that Carnap and his collaborators could have immediately countered this criticism had they been more familiar with the work of Bruno de Finetti, who had introduced a formal framework capable of solving Putnam's problem as early as the late 1930s. De Finetti's central innovation was to generalize exchangeability to various notions of partial exchangeability, and to use these symmetries for the inductive inference of patterns.
The goal of this chapter is to show how generalized symmetries can be used to overcome the inherent limitations of order-invariant learning models, such as the Johnson–Carnap continuum of inductive methods or the basic model of reinforcement learning. We shall see that learning procedures can be modified so as to recognize, in principle, any finite pattern.
Taking Turns
Order-invariant learning rules collapse when confronted with the problem of learning how to take turns. Taking turns is important whenever a learning environment is periodic.
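This collapse can be made vivid with a small sketch (my illustration, not the book's own code). On the periodic sequence R, B, R, B, … – Putnam's "every other ball is red" – an order-invariant rule from the Johnson–Carnap continuum sees only the 50/50 relative frequency, while a partially exchangeable variant that keeps separate counts for each predecessor (Markov exchangeability) picks up the alternation:

```python
def carnap_predict(counts, outcome, total, num_outcomes=2, alpha=1.0):
    # Johnson–Carnap continuum: P(next = i) = (n_i + alpha) / (n + k * alpha)
    return (counts[outcome] + alpha) / (total + num_outcomes * alpha)

sequence = ["R", "B"] * 50  # every other ball is red

# Order-invariant learner: counts ignore the order of observations.
counts = {"R": 0, "B": 0}
for ball in sequence:
    counts[ball] += 1
p_red_exchangeable = carnap_predict(counts, "R", len(sequence))

# Partially (Markov) exchangeable learner: separate counts per predecessor.
cond_counts = {"R": {"R": 0, "B": 0}, "B": {"R": 0, "B": 0}}
for prev, nxt in zip(sequence, sequence[1:]):
    cond_counts[prev][nxt] += 1
last = sequence[-1]  # the last ball drawn is "B"
total_after_last = sum(cond_counts[last].values())
p_red_markov = carnap_predict(cond_counts[last], "R", total_after_last)

print(round(p_red_exchangeable, 3))  # 0.5: the pattern is invisible
print(round(p_red_markov, 3))        # near 1: "after B comes R" is learned
```

The only change is what the learner conditions on: the same predictive rule, applied to predecessor-relative counts instead of raw counts, extrapolates the periodic regularity that the exchangeable version cannot see.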
The Probabilistic Foundations of Rational Learning, pp. 56–76. Publisher: Cambridge University Press. Print publication year: 2017.