Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- Notation
- 1 Introduction
- 2 Statistical physics and phase transitions
- 3 The satisfiability problem
- 4 Constraint satisfaction problems
- 5 Machine learning
- 6 Searching the hypothesis space
- 7 Statistical physics and machine learning
- 8 Learning, SAT, and CSP
- 9 Phase transition in FOL covering test
- 10 Phase transitions and relational learning
- 11 Phase transitions in grammatical inference
- 12 Phase transitions in complex systems
- 13 Phase transitions in natural systems
- 14 Discussion and open issues
- Appendix A Phase transitions detected in two real cases
- Appendix B An intriguing idea
- References
- Index
Appendix B - An intriguing idea
Published online by Cambridge University Press: 05 August 2012
Summary
It is easy to produce a probability function that exhibits a very steep transition between the values 0 and 1. Take for instance the tree corresponding to the exploration graph of a two-player game with a constant branching factor b. Each node in the tree represents a position, and each edge a possible move taking a player from one position to the next. Some games do indeed offer the current player exactly b possible moves at every turn.
Suppose further, as is usually the case, that the computer whose turn it is to play does not have enough time or memory to explore the whole tree of possibilities. The standard approach is then for the computer to develop the tree to a given depth, say 10, to evaluate the merit of each position at that depth, and to propagate these estimates upward through the celebrated min–max procedure: if a node represents the computer's turn to play, the maximum value of its children is passed up; otherwise the minimum value is passed up.
One question is then how to compute the probability of a “win” at the root of the tree given the probability that a leaf node is a win.
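Under one simplifying assumption (leaf values independent, each a "win" with the same probability), this question has a clean recursive answer: a MAX node is a win if at least one child is, and a MIN node only if all its children are. The sketch below, which is an illustration rather than the book's own derivation, iterates this recursion from the leaves to the root and exhibits the steep 0-to-1 transition the appendix alludes to.

```python
# Probability that the root of a uniform b-ary game tree is a win,
# assuming each leaf is independently a win with probability p_leaf.
# A MAX node wins if at least one child wins: 1 - (1 - q)**b.
# A MIN node wins only if all children win:   q**b.

def p_win_root(p_leaf, b, depth):
    """Win probability at the root (MAX to move) of a `depth`-ply tree."""
    q = p_leaf
    for level in range(depth - 1, -1, -1):  # from just above the leaves up
        if level % 2 == 0:                  # MAX level (root is level 0)
            q = 1 - (1 - q) ** b
        else:                               # MIN level
            q = q ** b
    return q

# For b = 2 the two-level map is g(q) = 2q**2 - q**4, whose unstable fixed
# point is (sqrt(5) - 1) / 2 ~ 0.618; a deep tree drives p_leaf below it
# to 0 and above it to 1, producing a very steep transition.
print(p_win_root(0.55, 2, 20))  # close to 0
print(p_win_root(0.70, 2, 20))  # close to 1
```

The independence assumption is of course an idealization, but it makes the steepness of the transition plain: twenty plies are enough to separate leaf probabilities 0.55 and 0.70 into near-certain loss and near-certain win at the root.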
- Type: Chapter
- Information: Phase Transitions in Machine Learning, pp. 351–354
- Publisher: Cambridge University Press
- Print publication year: 2011