4 - Complexity theory
Published online by Cambridge University Press: 06 July 2010
Summary
THE COMPLEXITY OF A PROBLEM
Our aim remains to explicate and justify the Law of Small Probability. Two pillars undergird this law, one probabilistic, the other complexity-theoretic. In the last chapter we elucidated the probabilistic pillar. Here we elucidate the complexity-theoretic pillar. Complexity theory, like probability theory, is a theory of measurement. Whereas probability theory measures the likelihood of an event, complexity theory measures the difficulty of a problem. Specifically, complexity theory measures how difficult it is to solve a problem Q given certain resources R. To see how complexity theory works in practice, let us examine what is currently the most active area of research within complexity theory, namely, computational complexity theory.
Computational complexity pervades every aspect of computer science. Whatever the computational problem, a programmer has to consider how the available computational resources (= R) contribute to solving the problem (= Q). If a problem is intractable, the programmer won't want to waste time trying to solve it. Intractability occurs either when no algorithm exists that can solve the problem even in principle, or when every algorithm that solves the problem consumes so many computational resources (whether time or memory) that solving it is impracticable. Programmers therefore have a vital stake in computational complexity theory. By definition computational complexity theory handles one task: inputting algorithms and outputting the computational resources needed to run those algorithms.
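The contrast between tractable and intractable resource consumption can be made concrete with a small sketch (not from the text): the same problem, computing the n-th Fibonacci number, solved by two algorithms whose step counts grow at vastly different rates. The function names and the step-counting device are illustrative choices, not anything the chapter prescribes.

```python
def fib_naive(n, counter):
    """Exponential-time recursion: the number of calls grows roughly as 2^n."""
    counter[0] += 1  # count each recursive call as one unit of "resource"
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_iterative(n, counter):
    """Linear-time iteration: the number of steps grows proportionally to n."""
    a, b = 0, 1
    for _ in range(n):
        counter[0] += 1  # one loop step per unit of "resource"
        a, b = b, a + b
    return a

# Same problem (Q), same answer -- but wildly different resource demands (R).
naive_steps, iter_steps = [0], [0]
assert fib_naive(20, naive_steps) == fib_iterative(20, iter_steps)
print(naive_steps[0], iter_steps[0])  # exponential vs. linear step counts
```

For n = 20 the iterative version takes 20 steps while the recursive version already takes over twenty thousand calls; pushing n much higher makes the recursive algorithm impracticable long before the iterative one, which is exactly the distinction computational complexity theory formalizes.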
The Design Inference: Eliminating Chance through Small Probabilities, pp. 92-135. Publisher: Cambridge University Press. Print publication year: 1998.