Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- 1 Introduction
- 2 Search Spaces
- 3 Blind Search
- 4 Heuristic Search
- 5 Stochastic Local Search
- 6 Algorithm A* and Variations
- 7 Problem Decomposition
- 8 Chess and Other Games
- 9 Automated Planning
- 10 Deduction as Search
- 11 Search in Machine Learning
- 12 Constraint Satisfaction
- Appendix: Algorithm and Pseudocode Conventions
- References
- Index
11 - Search in Machine Learning
Published online by Cambridge University Press: 30 April 2024
Summary
The earliest programs were entirely hand coded: both the algorithm and the knowledge it embodied were created manually. Machines that learn, however, were always on the wish list. One of the earliest reported learning programs was Arthur Samuel's checkers-playing program, which went on to beat its creator, evoking the spectre of Frankenstein's monster – a fear that still echoes today in some quarters. Since then, machine learning (ML) has advanced steadily due to three factors: first, the vast amounts of data that the internet has made available; second, the tremendous increase in computing power; and third, the continuous evolution of algorithms. At its core, ML processes data from first principles and incrementally builds models of the domain the data comes from. In this chapter we look at this process.
The computer is ideally suited to learning, because it never forgets. The key is to incorporate a ratchet mechanism à la natural selection – a mechanism that encapsulates the lessons learnt into a usable form, a model. Robustness demands that the learner be able to withstand occasional mistakes, so that the outlier does not become the norm.
Children, doctors, and machines – they all learn. A toddler touches a piece of burning firewood and is forced to withdraw her hand immediately. She learns to curb her curiosity and pay heed to adult supervision. As she grows up, she picks up motor skills like cycling and learns new languages. Doctors learn from their experience and become experts at their job – in fact, the words ‘expert’ and ‘experience’ are derived from the same root. The smartphone you hold in your hand learns to recognize your voice and handwriting and also tracks your preferences for recommending books, movies, and food outlets in ways that often leave you pleasantly surprised. This chapter is about how we can make machines learn. We also illustrate how such learning is intimately related to the broader class of search methods explored in the rest of this book.
Let us consider a simple example: the task of classifying an email as spam or non-spam. Given the ill-defined nature of the problem, it is hard for us to arrive at a comprehensive set of rules that can do this discrimination.
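Rather than hand coding rules, one can let the machine induce the discrimination from labelled examples. As an illustrative sketch (the chapter text does not prescribe a particular method), here is a toy naive Bayes classifier: it counts word frequencies in emails labelled spam or ham, then classifies new text by comparing smoothed log-probabilities under each class. The tiny training set and the word-splitting tokenizer are hypothetical simplifications for illustration.

```python
# Toy naive Bayes spam classifier: a sketch of learning from labelled data,
# not the book's own algorithm. Assumes whitespace tokenization and a tiny
# invented training set.
from collections import Counter
import math

def train(emails):
    """Learn per-class word counts and class frequencies from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in emails:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the class with the higher smoothed log-probability for the text."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        # log prior from class frequencies
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in text.lower().split():
            # add-one (Laplace) smoothing so unseen words do not zero out a class
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

data = [
    ("win money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(data)
print(classify("claim your free money", counts, totals))   # -> spam
print(classify("meeting tomorrow agenda", counts, totals))  # -> ham
```

The point of the sketch is that no rule for "spam" is ever written down: the discrimination emerges from the data, and improving it is a matter of adding examples, not editing rules.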
Search Methods in Artificial Intelligence, pp. 367–392. Publisher: Cambridge University Press. Print publication year: 2024.