In May 2019 we launched a special exhibition at the Uganda Museum in Kampala titled “The Unseen Archive of Idi Amin.” It consisted of 150 images made by government photographers in the 1970s. In this essay we explore how political history has been delimited in the Museum, and how these limitations shaped the exhibition we curated. From the time of its creation, the Museum's disparate and multifarious collections were exhibited as ethnographic specimens, stripped of historical context. Spatially and organizationally, “The Unseen Archive of Idi Amin” turned its back on the ethnographic architecture of the Uganda Museum. The transformation of these vivid, evocative, aesthetically appealing photographs into historical evidence of atrocity was intensely discomfiting. We have been obliged to organize the exhibition around categories that did not correspond with the logic of the photographic archive, with the architecture of the Museum, or with the experiences of the people who lived through the 1970s. The exhibition has made history, but not entirely in ways that we chose.
Hill (Twin Research and Human Genetics, Vol. 21, 2018, 84–88) presented a critique of our recently published paper in Cell Reports entitled ‘Large-Scale Cognitive GWAS Meta-Analysis Reveals Tissue-Specific Neural Expression and Potential Nootropic Drug Targets’ (Lam et al., Cell Reports, Vol. 21, 2017, 2597–2613). Specifically, Hill offered several interrelated comments suggesting potential problems with our use of a new analytic method called Multi-Trait Analysis of GWAS (MTAG) (Turley et al., Nature Genetics, Vol. 50, 2018, 229–237). In this brief article, we respond to each of these concerns. Using empirical data, we conclude that our MTAG results do not suffer from ‘inflation in the FDR [false discovery rate]’, as suggested by Hill (Twin Research and Human Genetics, Vol. 21, 2018, 84–88), and are not ‘more relevant to the genetic contributions to education than they are to the genetic contributions to intelligence’.
Evolutionary economics sees the economy as always in motion, with change driven largely by continuing innovation. This approach to economics, heavily influenced by the work of Joseph Schumpeter, saw a revival as an alternative way of thinking about economic advancement as a result of Richard Nelson and Sidney Winter's seminal book, An Evolutionary Theory of Economic Change, first published in 1982. In this long-awaited follow-up, Nelson is joined by leading figures in the field of evolutionary economics, reviewing in detail how this perspective has been manifest in various areas of economic inquiry where evolutionary economists have been active. Providing the perfect overview for interested economists and social scientists, the volume shows how, in each of the diverse fields featured, evolutionary economics has enabled an improved understanding of how and why economic progress occurs.
Whether monozygotic (MZ) and dizygotic (DZ) twins differ from each other in a variety of phenotypes is important for genetic twin modeling and for inferences made from twin studies in general. We analyzed whether there were differences in individual, maternal, and paternal education between MZ and DZ twins in a large pooled dataset. Information was gathered on individual education for 218,362 adult twins from 27 twin cohorts (53% females; 39% MZ twins), and on maternal and paternal education for 147,315 and 143,056 twins, respectively, from 28 twin cohorts (52% females; 38% MZ twins). Together, we had information on individual or parental education from 42 twin cohorts representing 19 countries. The original education classifications were transformed to years of education and analyzed using linear regression models. Overall, MZ males had 0.26 (95% CI [0.21, 0.31]) years and MZ females 0.17 (95% CI [0.12, 0.21]) years more education than DZ twins. The zygosity difference became smaller in more recent birth cohorts for both males and females. Paternal education was somewhat higher for fathers of DZ twins in cohorts born in 1990–1999 (0.16 years, 95% CI [0.08, 0.25]) and 2000 or later (0.11 years, 95% CI [0.00, 0.22]), compared with fathers of MZ twins. The results show that years of both individual and parental education are largely similar in MZ and DZ twins. We suggest that the socio-economic differences between MZ and DZ twins are so small that inferences based upon genetic modeling of twin data are not affected.
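As a rough illustration of the comparison described in this abstract, the sketch below fits a linear regression of years of education on zygosity within one sex, adjusting for birth cohort. The column names (education_years, zygosity, sex, birth_cohort) and the use of statsmodels are assumptions made for the example; this is not the authors' code or data layout.

```python
# A minimal sketch of the MZ-DZ education comparison, assuming a pooled
# individual-level table with hypothetical column names; not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

def mz_dz_difference(df: pd.DataFrame, sex: str) -> None:
    """Regress years of education on zygosity within one sex, adjusting for
    birth cohort, and report the MZ-DZ difference with its 95% CI."""
    sub = df[df["sex"] == sex]
    # DZ is the reference category, so the MZ coefficient estimates the
    # difference in mean years of education between MZ and DZ twins.
    model = smf.ols(
        "education_years ~ C(zygosity, Treatment(reference='DZ')) + C(birth_cohort)",
        data=sub,
    ).fit()
    term = "C(zygosity, Treatment(reference='DZ'))[T.MZ]"
    est = model.params[term]
    lo, hi = model.conf_int().loc[term]
    print(f"{sex}: MZ-DZ difference = {est:.2f} years (95% CI [{lo:.2f}, {hi:.2f}])")
```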
The formal representation of the history-friendly models posed some notable issues, first of all because of the large number of variables and parameters defining the models: some of these elements were common, or at least analogous, across the models, while others referred to completely different domains. In order to reduce the number of main symbols to a manageable size, we adapted from computer programming languages the idea of overloading notation: a main symbol can have slightly different meanings according to the presence or absence of further details, such as superscripts and subscripts. For example, the symbol T indicates the total number of periods of a simulation, T_k indicates the period of introduction of technology k, and T^I the minimum number of periods a firm has to stay integrated after its decision to switch to internal production of components. In general, we use as subscripts the indices for elements (products, firms, markets, technologies) that take different values, without changing the meaning of the main symbol. Instead, we use as superscripts further identifiers of the main symbols that are not instances of a general category: for example, PT is the symbol for patents and E is the symbol for exit. In a very limited number of cases an identifier can be used both as a subscript (TR and MP are in most cases used as instances of component technology k) and as a superscript (TR and MP are used as superscripts of the main symbol α, as they refer to different parts of the same equation).
Upper- and lowercase letters are considered different, although whenever possible they take related meanings: for example, i indicates the propensity to integrate and I the corresponding probability. The symbols used for specific variables and parameters are not reused across models, unless these variables and parameters have the same or a very similar meaning and role in the different models. The values that parameters take, and the ranges of values that heterogeneous parameters and variables can take, are indicated in the tables in the Appendices.
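The short sketch below illustrates the overloading idea in code rather than in the models' own notation: the same main symbol resolves to a different quantity depending on which subscript or superscript detail accompanies it. The class, the example values, and the way the TR, MP, and I labels are attached here are purely illustrative assumptions.

```python
# A minimal sketch of "overloading notation": one main symbol whose meaning
# is refined by optional subscript/superscript details. Values are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Symbol:
    """One main symbol whose meaning depends on optional indices."""
    name: str
    base: float                                          # e.g. T, total periods of a simulation
    by_subscript: dict = field(default_factory=dict)     # e.g. T_k, per technology k
    by_superscript: dict = field(default_factory=dict)   # e.g. T^I, a further identifier

    def __call__(self, sub: Optional[str] = None, sup: Optional[str] = None) -> float:
        if sub is not None:
            return self.by_subscript[sub]    # instance of a general category (e.g. a technology)
        if sup is not None:
            return self.by_superscript[sup]  # identifier that is not an instance of a category
        return self.base                     # plain main symbol

# Hypothetical example values, not taken from the models.
T = Symbol("T", base=200,
           by_subscript={"TR": 30, "MP": 80},  # period of introduction of technology k
           by_superscript={"I": 10})           # minimum periods a firm stays integrated

print(T())           # T    -> 200
print(T(sub="MP"))   # T_MP -> 80
print(T(sup="I"))    # T^I  -> 10
```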
This chapter presents a “history-friendly” model of the evolution of the pharmaceutical industry, and in particular of the so-called golden age. This industry is an ideal subject for such an analysis, especially because it has characteristics and problems that provide both contrasts and similarities with the computer industry. Like computers, the pharmaceutical industry has traditionally been a highly R&D- and marketing-intensive sector and it has undergone a series of radical technological and institutional “shocks.” However, despite these shocks, the core of the leading innovative firms and countries has remained stable for a very long period of time. Entry by new firms was a rather rare occurrence until the advent of biotechnology. However, while the evolution of computers coincides largely with the history of very few firms, that of pharmaceuticals involves at least a couple of dozen companies. Further, the degree of concentration has been consistently low at the aggregate level and the industry has never experienced a shakeout of producers.
We argue that the observed patterns of the evolutionary dynamics were shaped by three main factors, related both to the nature of the relevant technological regime and to the structure of demand:
(1) The nature of the process of drug discovery, in terms of the properties of the space of technological opportunities and of the search procedures by which firms explore it. Specifically, innovation processes were characterized for a long time by “quasi-random” search procedures (random screening), with little positive spillover from one discovery to the next (low cumulativeness); a simple sketch of such a screening process appears after this list.
(2) The type of competition and the role of patents and imitation in shaping gains from innovation. Patents gave temporary monopoly power to the innovator, but competition remained strong nevertheless, sustained by processes of “inventing around” and – after a patent expired – by imitation.
(3) The fragmented nature of the relevant markets. The industry comprises many independent submarkets, which correspond broadly to different therapeutic classes. For example, cardiovascular products do not compete with antidepressants. And, given the quasi-random nature of the innovative process, innovation in one therapeutic class typically does little to enhance innovation opportunities in other markets.
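To make the first and third points concrete, here is a minimal, self-contained sketch of quasi-random screening across independent submarkets. The therapeutic-class names, the hit probability, and the screening budget are illustrative assumptions and are not part of the model's actual specification.

```python
# A minimal sketch of "quasi-random" search (random screening) across
# fragmented submarkets. All names and parameter values are illustrative.
import random

THERAPEUTIC_CLASSES = ["cardiovascular", "antidepressant", "antibiotic"]
HIT_PROBABILITY = 0.02   # chance that any one screened compound is a useful drug

def random_screening(budget_per_class: int, rng: random.Random) -> dict:
    """Screen compounds independently in each therapeutic class.

    Low cumulativeness: a hit in one class does not raise the probability
    of a hit in any other class, or even in later draws within the same class.
    """
    discoveries = {tc: 0 for tc in THERAPEUTIC_CLASSES}
    for tc in THERAPEUTIC_CLASSES:
        for _ in range(budget_per_class):
            if rng.random() < HIT_PROBABILITY:
                discoveries[tc] += 1
    return discoveries

rng = random.Random(42)
print(random_screening(budget_per_class=500, rng=rng))
```

Because each draw is independent, successful discovery in one therapeutic class leaves the odds in every other class unchanged, which is the sense in which innovation in one submarket does little to enhance opportunities elsewhere.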
Having sketched the domain of our inquiry, we turn now to the task of bringing new light to that domain. While the empirical phenomena discussed in the first chapter are complex and variegated, the broad subject matter of innovation and industrial evolution is familiar to a great many economists. Most of our readers know what it is about. On the other hand, the promise of “history-friendly modeling” as an approach to that subject matter is less widely appreciated. A principal purpose of this book is to lay out this new methodological approach and to demonstrate its usefulness. In this chapter, we develop the basic argument for this approach, which we then apply in the remainder of the book. We explain our reasons for believing that history-friendly modeling is a valuable addition to the economists’ analytic tool kit. In doing so, we discuss considerations ranging from the extremely general – “what forms of economic theory help us to understand the world” – to the very specific – “how does one construct a history-friendly model?” We follow the indicated order, from the general to the specific.
COMPLEMENTARY APPROACHES TO MODELING: “FORMAL” vs. “APPRECIATIVE” THEORY
Our development of the concept and technique of history-friendly modeling reflects our belief that present notions of what constitutes “good theory” in economics are inadequate to the task of analyzing and illuminating the kind of phenomena this book addresses. Theorizing about complex phenomena involves, first, discriminating between the most salient facts that need to be explained and the phenomena that can be ignored as irrelevant or uninteresting, and, second, identifying the most important determining variables and the mechanisms or relationships between these and the salient phenomena to be explained. This requires screening out or “holding constant” variables and mechanisms that seem relatively unimportant, at least given the questions being explored. There is probably little disagreement about these general points. Regarding the principles for assessing theories, there is more controversy.
The illumination provided by a theory can be assessed on at least two grounds. The criterion most recognized by economists today is its power to suggest hypotheses that will survive sophisticated empirical tests, where confirmation is found in the statistical significance levels achieved in the tests and in measures of “goodness of fit.”