
20 - Derandomization

from PART THREE - ADVANCED TOPICS

Summary

God does not play dice with the universe.

–Albert Einstein

Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.

–John von Neumann, quoted by Knuth, 1981

Randomization is an exciting and powerful paradigm in computer science and, as we saw in Chapter 7, often provides the simplest or most efficient algorithms for many computational problems. In fact, in some areas of computer science, such as distributed algorithms and cryptography, randomization has been proven necessary to achieve certain tasks at all, or to achieve them efficiently. Thus it is natural to conjecture (as many scientists initially did) that at least for some problems randomization is inherently necessary: one cannot replace the probabilistic algorithm with a deterministic one without a significant loss of efficiency. One concrete version of this conjecture would be that BPP ≠ P (see Chapter 7 for the definition of BPP). Surprisingly, recent research has provided more and more evidence that this is likely to be false. As we will see in this chapter, under very reasonable complexity assumptions, there is in fact a way to derandomize (i.e., transform into a deterministic algorithm) every probabilistic algorithm of the BPP type with only a polynomial loss of efficiency. Thus today most researchers believe that BPP = P. We note that this need not imply that randomness is useless in every setting; we already saw in Chapter 8 its crucial role in the definition of interactive proofs.
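To make "derandomization" concrete, here is a minimal Python sketch (not from the text; the function names, field size, and trial count are illustrative assumptions). It shows the classic randomized algorithm for polynomial identity testing, based on the Schwartz–Zippel observation that two distinct univariate polynomials of degree at most d can agree on at most d points, together with a deterministic version of the same test.

```python
import random

# Illustrative field size (assumption): a large prime, so evaluations
# mod PRIME behave like evaluations over a finite field.
PRIME = 10**9 + 7

def randomized_equal(p, q, degree, trials=20):
    """BPP-style test: are p and q the same polynomial of degree <= degree?
    Evaluate at random points; distinct polynomials agree on at most
    `degree` points, so each trial errs with probability <= degree/PRIME."""
    for _ in range(trials):
        x = random.randrange(PRIME)
        if p(x) % PRIME != q(x) % PRIME:
            return False   # a differing point is a certificate: always correct
    return True            # wrong with probability <= (degree/PRIME)**trials

def derandomized_equal(p, q, degree):
    """Deterministic version: since distinct polynomials of degree <= degree
    agree on at most `degree` points, checking degree + 1 fixed points
    decides equality with certainty -- no randomness needed."""
    return all(p(x) % PRIME == q(x) % PRIME for x in range(degree + 1))

# Example: (x + 1)^2 and x^2 + 2x + 1 are identical as polynomials.
p = lambda x: (x + 1) ** 2
q = lambda x: x * x + 2 * x + 1
print(randomized_equal(p, q, degree=2))    # True (with high probability)
print(derandomized_equal(p, q, degree=2))  # True (with certainty)
```

In this univariate toy case the derandomization is trivial and cheap. For multivariate polynomials given as arithmetic circuits, by contrast, the randomized test still works but no efficient deterministic algorithm is known; that gap between what randomness buys us and what we can so far do without it is exactly the kind of question this chapter's techniques address.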
