The central question posed by the so-called ‘logical problem of language acquisition’ is how it comes to be that children are able to GENERALIZE from a finite set of linguistic data to acquire (learn, develop, grow) a computational system (grammar) that applies to novel examples not encountered before. The difficulty of this generalization problem was first posed cogently by Gold (1967), and while MacWhinney discusses the Gold framework and the linguistic literature on this matter, it is worth noting that the Gold framework is not the only one. There are at least two important sources of insight from computational learning theory in the decades following Gold that need to be kept in mind. First, there is the development of empirical process theory, which forms the basis of any analysis of statistical learning (see the summary in Vapnik, 1998). Applying this approach to language (see Niyogi, 1998 for a treatment), one concludes that the family of learnable grammars must have finite Vapnik-Chervonenkis (VC) dimension. The VC dimension is a combinatorial measure of the complexity of a class of functions, and grammars may be viewed as functions mapping sentences to their grammaticality values; in this sense, the class of learnable grammars must be combinatorially constrained. Second, there is the development of the theory of computational complexity, which suggests that even when a learning algorithm exists, it may not be efficient, i.e., it may not run in polynomial time. These two developments come together in the influential Probably Approximately Correct (PAC) model (Valiant, 1984).
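To make the notion of VC dimension concrete, the following Python sketch (an illustration added here, not drawn from the text above) checks whether a hypothesis class "shatters" a set of points, i.e., realizes every possible labelling of them. The VC dimension is the size of the largest shatterable set. The toy class used is threshold functions on the real line; a grammar, viewed as a function from sentences to grammaticality values, plays the same formal role as these hypotheses. The names `shatters` and `thresholds` are invented for the example.

```python
def shatters(hypotheses, points):
    """Return True if the hypothesis class realizes every
    possible 0/1 labelling of `points` (i.e., shatters it)."""
    labellings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labellings) == 2 ** len(points)

# Toy hypothesis class: threshold functions h_t(x) = 1 iff x >= t.
# Each lambda captures its own threshold t via a default argument.
thresholds = [lambda x, t=t: int(x >= t) for t in (-1.5, -0.5, 0.5, 1.5, 2.5)]

# A single point can be labelled both 0 and 1 by some threshold.
print(shatters(thresholds, [0.0]))       # True
# Two points cannot: no threshold labels 0.0 -> 1 while labelling 1.0 -> 0.
print(shatters(thresholds, [0.0, 1.0]))  # False
```

Since some one-point set is shattered but no two-point set is, this class has VC dimension 1; PAC-style results then bound the number of examples needed to learn it. The same combinatorial question, posed for a class of grammars, is what the finite-VC-dimension requirement constrains.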