Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Acknowledgments
- Copyright Permissions
- 1 Introduction
- Part I Methods for Optimal Solutions
- Part II Methods for Near-optimal and Approximation Solutions
- Part III Methods for Efficient Heuristic Solutions
- 10 An efficient technique for mixed-integer optimization
- 11 Metaheuristic methods
- Part IV Other Topics
- References
- Index
11 - Metaheuristic methods
from Part III - Methods for Efficient Heuristic Solutions
Published online by Cambridge University Press: 05 May 2014
Summary
"One day your life will flash before your eyes. Make sure it is worth watching." (Unknown)

Review of key results in metaheuristic methods
In this chapter, we discuss another class of heuristics, known as metaheuristic methods [36]. An iteration of a metaheuristic method typically aims to improve the current feasible solution, with the initial solution supplied by the user. Well-known metaheuristic methods include iterative improvement, simulated annealing, tabu search, and genetic algorithms [36]. For certain types of problems, metaheuristic methods can be very effective.
The so-called iterative improvement (or basic local search) method tries to find a better solution in each iteration by searching the neighborhood of the current solution, and terminates when no better solution can be found. It has been shown that the performance of iterative improvement methods on combinatorial optimization problems may not be satisfactory [19]. This is explained by the fact that the method stops as soon as it reaches a local optimum.
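The iterative improvement loop described above can be sketched as follows. This is a minimal illustration, not the book's implementation; the cost function and neighborhood structure here are hypothetical placeholders chosen for clarity.

```python
def iterative_improvement(x0, neighbors, cost):
    """Basic local search: move to a strictly better neighbor of the
    current solution; terminate when no neighbor improves on it."""
    current = x0
    while True:
        improved = False
        for cand in neighbors(current):
            if cost(cand) < cost(current):
                current = cand
                improved = True
                break
        if not improved:
            return current  # local optimum reached

# Hypothetical example: minimize (x - 3)^2 over the integers,
# with a neighborhood of the two adjacent integers.
def cost(x):
    return (x - 3) ** 2

def neighbors(x):
    return [x - 1, x + 1]

print(iterative_improvement(10, neighbors, cost))  # prints 3
```

Note that on a cost function with multiple local minima, the same loop would stop at whichever local optimum it reaches first, which is exactly the limitation discussed above.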
Compared to iterative improvement, simulated annealing (SA) [1] has an explicit strategy for escaping local optima. The basic idea of SA is to accept a move with some probability even if it results in a solution of worse quality than the current one. SA also employs a cooling procedure, which decreases this randomness (or diversification) over time. As the cooling proceeds, SA gradually reduces to a simple iterative improvement algorithm, which guarantees convergence. The performance of SA is sensitive to the initial solution and the neighborhood structure, in addition to the cooling procedure.
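The acceptance rule and cooling schedule described above can be sketched as follows. This is an illustrative sketch under assumed parameters (geometric cooling with factor alpha, acceptance probability exp(-delta/T)); the cost function and neighborhood move are hypothetical, and the specific schedule is not prescribed by the text.

```python
import math
import random

def simulated_annealing(x0, neighbor, cost, t0=10.0, alpha=0.95, iters=2000):
    """Simulated annealing: always accept improving moves; accept a
    worsening move with probability exp(-delta / T). The temperature T
    decreases geometrically (T <- alpha * T), so the search gradually
    reduces to plain iterative improvement as T approaches zero."""
    current, best = x0, x0
    t = t0
    for _ in range(iters):
        cand = neighbor(current)
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= alpha  # cooling: diversification shrinks over time
    return best

# Hypothetical example: minimize (x - 3)^2 over the integers,
# proposing random +/-1 moves from the current solution.
random.seed(0)
result = simulated_annealing(50,
                             lambda x: x + random.choice([-1, 1]),
                             lambda x: (x - 3) ** 2)
```

Early in the run, when T is large, even substantially worse candidates may be accepted, which is what lets SA climb out of local optima; the sensitivity to t0 and alpha mirrors the sensitivity to the cooling procedure noted above.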
- Type
- Chapter
- Information
- Applied Optimization Methods for Wireless Networks, pp. 262-280
- Publisher: Cambridge University Press
- Print publication year: 2014