Book contents
- Frontmatter
- Contents
- Preface
- I An Introduction to the Techniques
- 1 An Introduction to Approximation Algorithms
- 2 Greedy Algorithms and Local Search
- 3 Rounding Data and Dynamic Programming
- 4 Deterministic Rounding of Linear Programs
- 5 Random Sampling and Randomized Rounding of Linear Programs
- 6 Randomized Rounding of Semidefinite Programs
- 7 The Primal-Dual Method
- 8 Cuts and Metrics
- II Further Uses of the Techniques
- Appendix A Linear Programming
- Appendix B NP-Completeness
- Bibliography
- Author Index
- Subject Index
5 - Random Sampling and Randomized Rounding of Linear Programs
from I - An Introduction to the Techniques
Published online by Cambridge University Press: 05 June 2012
Summary
Sometimes it turns out to be useful to allow our algorithms to make random choices; that is, the algorithm can flip a coin, or flip a biased coin, or draw a value uniformly from a given interval. The performance guarantee of an approximation algorithm that makes random choices is then the expected value of the solution produced relative to the value of an optimal solution, where the expectation is taken over the random choices of the algorithm.
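A classic algorithm of exactly this kind sets each variable of a MAX SAT instance true with a fair coin flip: a clause with k literals is then satisfied with probability 1 - 2^{-k} >= 1/2, so in expectation at least half the clauses are satisfied. A minimal sketch (the clause encoding and helper names here are our own, not the book's):

```python
import random

def satisfied(clauses, assignment):
    """Count clauses with at least one true literal.
    A clause is a list of nonzero ints: +i means x_i, -i means NOT x_i."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def random_assignment(n):
    """Set each of x_1..x_n true with probability 1/2, independently."""
    return {i: random.random() < 0.5 for i in range(1, n + 1)}

# Example: 3 clauses over variables x_1, x_2, x_3.
clauses = [[1, -2], [2, 3], [-1, 3]]
# Each 2-literal clause is satisfied with probability 1 - 2^{-2} = 3/4,
# so the expected number satisfied is 2.25 -- at least half of the 3
# clauses, as the 1/2-approximation guarantee promises in expectation.
```

The performance guarantee here is exactly of the kind described above: it holds for the expected value of the solution, taken over the coin flips of the algorithm.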
At first this might seem like a weaker class of algorithm. In what sense is there a performance guarantee if it holds only in expectation? However, in most cases we will be able to show that randomized approximation algorithms can be derandomized: that is, we can use a certain algorithmic technique known as the method of conditional expectations to produce a deterministic version of the algorithm that has the same performance guarantee as the randomized version. Of what use then is randomization? It turns out that it is often much simpler to state and analyze the randomized version of the algorithm than to state and analyze the deterministic version that results from derandomization. Thus, randomization gains us simplicity in our algorithm design and analysis, while derandomization ensures that the performance guarantee can be obtained deterministically.
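The method of conditional expectations can be sketched for the same MAX SAT setting: fix the variables one at a time, and give each the value that keeps the conditional expectation of the number of satisfied clauses as large as possible. Since some choice always achieves at least the current expectation, the final deterministic solution matches the randomized guarantee. (The function names and clause encoding below are illustrative, not the book's.)

```python
def expected_satisfied(clauses, fixed):
    """E[# satisfied clauses] when the variables in `fixed` are set and
    the remaining variables are independent fair coin flips."""
    total = 0.0
    for clause in clauses:
        undecided = 0
        sat = False
        for lit in clause:
            v = abs(lit)
            if v in fixed:
                if (lit > 0) == fixed[v]:
                    sat = True
            else:
                undecided += 1
        # An unsatisfied clause with u undecided literals fails with
        # probability 2^{-u}, so it is satisfied with probability 1 - 2^{-u}.
        total += 1.0 if sat else 1.0 - 0.5 ** undecided
    return total

def derandomize(clauses, n):
    """Method of conditional expectations: fix x_1, ..., x_n in turn,
    choosing the value that maximizes the conditional expectation.
    The count achieved at the end is at least the initial expectation."""
    fixed = {}
    for i in range(1, n + 1):
        fixed[i] = True
        e_true = expected_satisfied(clauses, fixed)
        fixed[i] = False
        e_false = expected_satisfied(clauses, fixed)
        fixed[i] = e_true >= e_false
    return fixed
```

Note how the randomized analysis does all the work: the deterministic algorithm is just the randomized one with each coin flip replaced by the better of its two conditional expectations.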
The Design of Approximation Algorithms, pp. 99-136. Publisher: Cambridge University Press. Print publication year: 2011.