
8 - Pseudorandom Generators

Summary

Indistinguishable things are identical.

G. W. Leibniz (1646–1714)

A fresh view of the question of randomness has been taken by Complexity Theory: It has been postulated that a distribution is random (or rather pseudorandom) if it cannot be told apart from the uniform distribution by any efficient procedure. Thus, (pseudo)randomness is not an inherent property of an object, but is rather relative to the observer.

At the extreme, this approach says that the question of whether the world is deterministic or allows for some free choice (which may be viewed as a source of randomness) is irrelevant. What matters is how the world looks to us and to various computationally bounded devices. That is, if some phenomenon looks random, then we may just treat it as if it were random. Likewise, if we can generate sequences that cannot be told apart from the uniform distribution by any efficient procedure, then we can use these sequences in any efficient randomized application in place of the ideal coin tosses postulated in the design of that application.
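Concretely, the standard formulation of this idea (using notation that does not appear in this summary) is the notion of a pseudorandom generator: a deterministic polynomial-time algorithm G that stretches a short random seed into a longer output that no efficient procedure can tell apart from a truly uniform string. Writing U_m for the uniform distribution over {0,1}^m, one asks for

\[ G : \{0,1\}^{k} \to \{0,1\}^{\ell(k)}, \qquad \ell(k) > k, \]

such that no probabilistic polynomial-time algorithm can distinguish G(U_k) from U_{\ell(k)} except with negligible advantage.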

The pivot of the foregoing approach is the notion of computational indistinguishability, which refers to pairs of distributions that cannot be told apart by efficient procedures. The most fundamental incarnation of this notion associates efficient procedures with polynomial-time algorithms, but other incarnations that restrict attention to other classes of distinguishing procedures also lead to important insights.
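For reference, the standard definition (with notation not taken from this excerpt) reads as follows: two probability ensembles {X_n} and {Y_n} are computationally indistinguishable if for every probabilistic polynomial-time distinguisher D and every positive polynomial p, for all sufficiently large n,

\[ \bigl| \Pr[D(X_n) = 1] - \Pr[D(Y_n) = 1] \bigr| < \frac{1}{p(n)}, \]

that is, no efficient procedure can tell the two ensembles apart with non-negligible advantage.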
