Antenna-pattern measurements obtained from a double-metal supra-terahertz-frequency (supra-THz) quantum cascade laser (QCL) are presented. The QCL is mounted within a mechanically micro-machined waveguide cavity containing dual diagonal feedhorns. With the QCL operating in continuous-wave mode at 3.5 THz and at an ambient temperature of ~60 K, emission has been directed via the feedhorns to a supra-THz detector mounted on a multi-axis linear scanner. Comparison of simulated and measured far-field antenna patterns shows excellent agreement in both beamwidth (full width at half maximum) and sidelobe content, and a very substantial improvement over unmounted devices. Additionally, a single output has been used to successfully illuminate and demonstrate an optical breadboard arrangement associated with a future supra-THz Earth observation space-borne payload. Our novel device has therefore provided a valuable demonstration of the effectiveness of supra-THz diagonal feedhorns and QCL devices for future space-borne ultra-high-frequency Earth-observing heterodyne radiometers.
Analyzing audiovisual communication is challenging because its content is highly symbolic and less rule-governed than verbal material. But audiovisual messages are important to understand: they amplify, enrich, and complicate the meaning of textual information. We describe a fully reproducible approach to analyzing video content using minimally—but systematically—trained online workers. By aggregating the work of multiple coders, we achieve reliability, validity, and costs that equal those of traditional, intensively trained research assistants, with much greater speed, transparency, and replicability. We argue that measurement strategies relying on the “wisdom of the crowd” provide unique advantages for researchers analyzing complex and intricate audiovisual political content.
Evolutionary economics sees the economy as always in motion, with change driven largely by continuing innovation. This approach to economics, heavily influenced by the work of Joseph Schumpeter, saw a revival as an alternative way of thinking about economic advancement as a result of Richard Nelson and Sidney Winter's seminal book, An Evolutionary Theory of Economic Change, first published in 1982. In this long-awaited follow-up, Nelson is joined by leading figures in the field of evolutionary economics, reviewing in detail how this perspective has been manifest in various areas of economic inquiry where evolutionary economists have been active. Providing the perfect overview for interested economists and social scientists, the book shows how, in each of the diverse fields featured, evolutionary economics has enabled an improved understanding of how and why economic progress occurs.
Postoperative cognitive impairment is among the most common medical complications associated with surgical interventions – particularly in elderly patients. In our aging society, there is an urgent medical need for preoperative individual risk prediction to allow more accurate cost–benefit decisions prior to elective surgeries. So far, risk prediction has been based mainly on clinical parameters. However, these parameters give only a rough estimate of the individual risk. At present, there are no molecular or neuroimaging biomarkers available to improve risk prediction, and little is known about the etiology and pathophysiology of this clinical condition. In this short review, we summarize the current state of knowledge and briefly present the recently started BioCog project (Biomarker Development for Postoperative Cognitive Impairment in the Elderly), which is funded by the European Union. The goal of this research and development (R&D) project, which involves academic and industry partners throughout Europe, is to deliver a multivariate algorithm based on clinical assessments as well as molecular and neuroimaging biomarkers to overcome the currently unsatisfactory situation.
There is now a clear focus on incorporating, and integrating, multiple levels of analysis in developmental science. The current study adds to research in this area by including markers of the immune and neuroendocrine systems in a longitudinal study of temperament in infants. Observational and parent-reported ratings of infant temperament, serum markers of the innate immune system, and cortisol reactivity from repeated salivary collections were examined in a sample of 123 infants who were assessed at 6 months and again when they were, on average, 17 months old. Blood from venipuncture was collected for analyses of nine select innate immune cytokines; salivary cortisol collected prior to and 15 min and 30 min following a physical exam including blood draw was used as an index of neuroendocrine functioning. Analyses indicated fairly minimal significant associations between biological markers and temperament at 6 months. However, by 17 months of age, we found reliable and nonoverlapping associations between observed fearful temperament and biological markers of the immune and neuroendocrine systems. The findings provide some of the earliest evidence of robust biological correlates of fear behavior with the immune system, and identify possible immune and neuroendocrine mechanisms for understanding the origins of behavioral development.
Research organizations face challenges in creating infrastructures that cultivate and sustain interdisciplinary team science. The objective of this paper is to identify structural elements of organizations and training that promote team science.
We qualitatively analyzed the National Institutes of Health’s Building Interdisciplinary Research Careers in Women’s Health K12 program, using organizational psychology and team science theories, to identify organizational design factors for successful team science and training.
Seven key design elements support team science: (1) semiformal meta-organizational structure, (2) shared context and goals, (3) formal evaluation processes, (4) meetings to promote communication, (5) role clarity in mentoring, (6) building interpersonal competencies among faculty and trainees, and (7) designing promotion and tenure and other organizational processes to support interdisciplinary team science.
This application of theory to a long-standing and successful program provides important foundational elements for programs and institutions to consider in promoting team science.
The formal representation of the history-friendly models posed some notable challenges, above all because of the large number of variables and parameters defining the models: some of these elements were common, or at least analogous, across the models, while others referred to completely different domains. To reduce the number of main symbols to a manageable size, we adapted from computer programming languages the idea of overloading notation: a main symbol can have slightly different meanings according to the presence or absence of further details, such as superscripts and subscripts. For example, the symbol T indicates the total number of periods of a simulation, T_k indicates the period of introduction of technology k, and T^I the minimum number of periods a firm has to stay integrated after its decision to switch to internal production of components. In general, we use as subscripts the indices for elements (products, firms, markets, technologies) that take different values without changing the meaning of the main symbol. Instead, we use as superscripts further identifiers of the main symbols that are not instances of a general category: for example, PT is the symbol for patents and E is the symbol for exit. In a very limited number of cases an identifier can be used both as a subscript identifier (TR and MP are in most cases used as instances of component technology k) and as a superscript identifier (TR and MP are used as superscripts of the main symbol α, as they refer to different parts of the same equation).
Uppercase and lowercase letters are considered distinct, although wherever possible they take related meanings: for example, i indicates the propensity to integrate and I the corresponding probability. The symbols used for specific variables and parameters are not reused across models, unless these variables and parameters have the same or a very similar meaning and role in the different models. The values that parameters take, and the ranges of values that heterogeneous parameters and variables can take, are indicated in the tables in the Appendices.
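The borrowed programming idea can be made concrete with a small sketch. In the snippet below, one function name plays the role of the main symbol T, and its optional arguments play the role of the subscript index k and the superscript identifier I; all numeric values are hypothetical placeholders for illustration, not parameters from the models.

```python
# One name with several related meanings, selected by qualifiers — the
# programming analogue of the overloaded symbol T described above.
# All values are hypothetical, purely for illustration.

SIMULATION_PERIODS = 200       # bare T: total periods of a simulation
INTRO_PERIOD = {1: 0, 2: 80}   # T with subscript k: introduction period of technology k
MIN_INTEGRATION_PERIODS = 20   # T with superscript I: minimum integration span

def T(k=None, integration=False):
    """Return the meaning of 'T' selected by its qualifiers."""
    if integration:            # superscript-style identifier: a distinct concept
        return MIN_INTEGRATION_PERIODS
    if k is not None:          # subscript-style index: an instance of a category
        return INTRO_PERIOD[k]
    return SIMULATION_PERIODS  # the unqualified main symbol
```

As in the notational convention, the index k selects an instance of a general category (a technology) without changing what T means, while the identifier flag names a different concept that merely shares the main symbol.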
This chapter presents a “history-friendly” model of the evolution of the pharmaceutical industry, and in particular of the so-called golden age. This industry is an ideal subject for such an analysis, especially because it has characteristics and problems that provide both contrasts and similarities with the computer industry. Like computers, the pharmaceutical industry has traditionally been a highly R&D- and marketing-intensive sector, and it has undergone a series of radical technological and institutional “shocks.” However, despite these shocks, the core of the leading innovative firms and countries has remained stable for a very long period of time. Entry by new firms was a rather rare occurrence until the advent of biotechnology. However, while the evolution of computers coincides largely with the history of very few firms, that of pharmaceuticals involves at least a couple of dozen companies. Further, the degree of concentration has been consistently low at the aggregate level, and the industry has never experienced a shakeout of producers.
We argue that the observed patterns of the evolutionary dynamics were shaped by three main factors, related both to the nature of the relevant technological regime and the structure of demand:
(1) The nature of the process of drug discovery, in terms of the properties of the space of technological opportunities and of the search procedures by which firms explore it. Specifically, innovation processes were characterized for a long time by “quasi-random” search procedures (random screening), with little positive spillovers from one discovery to the next (low cumulativeness).
(2) The type of competition and the role of patents and imitation in shaping gains from innovation. Patents gave temporary monopoly power to the innovator, but competition nevertheless remained strong, sustained by processes of “inventing around” and – after a patent expired – by imitation.
(3) The fragmented nature of the relevant markets. The industry comprises many independent submarkets, which correspond broadly to different therapeutic classes. For example, cardiovascular products do not compete with antidepressants. And, given the quasi-random nature of the innovative process, innovation in one therapeutic class typically does little to enhance innovation opportunities in other markets.
Having sketched the domain of our inquiry, we turn now to the task of bringing new light to that domain. While the empirical phenomena discussed in the first chapter are complex and variegated, the broad subject matter of innovation and industrial evolution is familiar to a great many economists. Most of our readers know what it is about. On the other hand, the promise of “history-friendly modeling” as an approach to that subject matter is less widely appreciated. A principal purpose of this book is to lay out this new methodological approach and to demonstrate its usefulness. In this chapter, we develop the basic argument for this approach, which we then apply in the remainder of the book. We explain our reasons for believing that history-friendly modeling is a valuable addition to the economists’ analytic tool kit. In doing so, we discuss considerations ranging from the extremely general – “what forms of economic theory help us to understand the world” – to the very specific – “how does one construct a history-friendly model?” We follow the indicated order, from the general to the specific.
COMPLEMENTARY APPROACHES TO MODELING: “FORMAL” vs. “APPRECIATIVE” THEORY
Our development of the concept and technique of history-friendly modeling reflects our belief that present notions of what constitutes “good theory” in economics are inadequate to the task of analyzing and illuminating the kind of phenomena this book addresses. Theorizing about complex phenomena involves, first of all, discriminating between the most salient facts that need to be explained, and phenomena that can be ignored as not relevant or interesting, and second, identifying the most important determining variables, and the mechanisms or relationships between these and the salient phenomena to be explained. This requires screening out or “holding constant” variables and mechanisms that seem relatively unimportant, at least given the questions being explored. There is probably little disagreement about these general points. Regarding the principles for assessing theories, there is more controversy.
The illumination provided by a theory can be assessed on at least two grounds. The criterion most recognized by economists today is its power to suggest hypotheses that will survive sophisticated empirical tests, where confirmation is found in the statistical significance levels achieved in the tests and in measures of “goodness of fit.”
This book is about innovation and the evolution of industries. It is the result of more than a decade of exciting collaboration and intense interaction among the four of us. Although we have been publishing articles on this topic over the years, the book represents an original contribution, in that the chapters are new or revised significantly from previously published articles. This book is also novel in that for the first time it provides the reader with a consistent, integrated and complete view of the nature and value of what we call “history-friendly” models, which aim at a deeper and more articulated theoretical analysis as well as empirical understanding of the dynamics of technologies, market structure and industries.
It all started during the nineties, as the four of us met at conferences in Europe and the United States. While listening to presentations and discussing papers, we were always impressed by the richness of industry and firm case studies, which told complex dynamic stories and highlighted the key role played by technological and organizational capabilities and learning in innovation and the evolution of industries. Often, powerful qualitative theories lay behind these cases. In the late nineties, Malerba and Orsenigo developed detailed studies of the evolution of, respectively, the computer industry and pharmaceuticals for the book Dick was putting together with David Mowery, The Sources of Industrial Leadership (Cambridge University Press, 1999). During this time in our meetings with Sid, we often discussed the industry histories that were being put together for the book.
Thus it was natural for the four of us to start talking about the relationship between the rich qualitative theories that were associated with the industry histories and the then prevalent terse and compact modeling of industry dynamics. We started to discuss how formal models could complement appreciative theory and histories. So, the idea was launched to start a research project that would try to capture and represent in a formal way the gist and richness of the different patterns of industrial evolution as described in the histories that we were familiar with, and the theories that went with these histories, by developing models that would highlight the specific dynamics of those sectors.
In this chapter we develop a history-friendly model of the development of the US computer industry in the latter half of the twentieth century. The history is marked by a pattern of emergent concentration and subsequent de-concentration, from the industry's birth based on mainframe computers through the advent of personal computers (PC). Our principal analytical purpose in the chapter is to illuminate the explanation for this pattern. We begin in Section 3.2 by laying out the relevant features of that history, the appreciative theorizing about that history and the challenges for history-friendly modeling. In Section 3.3 we develop the model. In Sections 3.4 and 3.5 we display some history-replicating and history-divergent simulations and discuss the major factors affecting the specific evolution of the computer industry and competition among firms. In Section 3.6 we draw our conclusions.
THE EVOLUTION OF COMPUTER TECHNOLOGY AND THE COMPUTER INDUSTRY
A stylized history
A detailed recounting of the industry's history is beyond the scope and purpose of this chapter. We offer only a stylized history of computer technology and the industry, drawing from Flamm (1988), Langlois (1990), Bresnahan and Greenstein (1999) and especially Bresnahan and Malerba (1999).
The computer industry's history shows continuous improvements in machines that serve particular groups of users, punctuated from time to time by the introduction of significant new component technologies that permit the needs of existing users to be better addressed, but also open up the possibility of serving new market segments. In the United States these punctuations were associated with the entry of new firms, which almost always were the first to venture into the new market. However, this happened to a significantly lesser degree in Europe, and hardly at all in Japan.
The evolution of the industry divides naturally into four periods. The first began with the early experimentation, which culminated in designs sufficiently attractive to induce large firms with massive computation tasks, as well as scientific laboratories, to purchase computers. This opened the era of the mainframes. The second period began with the introduction of integrated circuits and the development of minicomputers. The third era is that of the PC, made possible by the invention of the microprocessor.
This book is about technological progress and its relationships with competition and the evolution of industry structures. It presents a new approach to the analysis of these issues, which we have labeled “history-friendly” modeling. This research stream began more than a decade ago and various papers have been published over the years. Here, we build on those initial efforts to develop a comprehensive and integrated framework for a systematic analysis of innovation and industry evolution.
The relationships among technological change, competition and industry evolution are old and central questions in industrial economics and the economics of innovation, a subject matter that dates back to Marshall and of course to Schumpeter. We authors are indeed Schumpeterians in that we believe the hallmark feature of modern capitalism is that it induces, even compels, firms to be innovative in industries where technological opportunities exist and customers are responsive to new or improved products. The evolution of these industries – like computers or semiconductors – is often characterized by the emergence of a monopolist or of a few dominant firms. The speed at which concentration develops varies drastically, however, across sectors and over time, and, often, monopoly power is not durable. In other significant industries – e.g. pharmaceuticals – no firm actually succeeded in achieving such undisputed leadership. In some cases, the characteristic drift toward concentration is interrupted by significant exogenous change, such as new technologies appearing from outside the sector.
Long ago, Schumpeter proposed that the turning-over of industrial leadership was a common feature in industries where technological innovation was an important vehicle of competition. In recent years economists studying technological change have come to recognize a number of other important connections between the evolution of technologies and the dynamics of industry structure. Progress in this area has come from different sources. The availability of large longitudinal databases at a very high level of disaggregation has allowed researchers to unveil robust stylized facts in industrial dynamics and to conduct thorough statistical analyses, which show strong inter-industry regularities, but also deep and persistent heterogeneity across and within industries. Sophisticated new models have been created that attempt to explain these regularities.