One of the most popular approaches in the theoretical measurement and empirical estimation of the efficiency of various economic systems is known as Data Envelopment Analysis, abbreviated as DEA. This approach is rooted in and cohesive with theoretical economic modeling via the so-called Activity Analysis Models and is estimated via the powerful linear programming approach.
In this chapter, we consider a variety of models that can be used to estimate particular types of technologies: constant, nonincreasing and variable returns to scale, convex and non-convex technologies. This chapter does not exhaust everything that has been suggested in the literature – fulfilling such a task would be practically impossible in one chapter. The goal is more modest, yet practically valuable: we focus on the most popular methods and consider their “step-by-step construction,” intuition, some of their most important properties, and some interesting variations and modifications. We pay attention to aspects that we consider very useful for readers to advance their own research and, possibly, the frontier of the field.
INTRODUCTION TO ACTIVITY ANALYSIS MODELING
An economist's approach to thinking about nonparametric efficiency measurement can be viewed through the so-called Activity Analysis Models – a way of mathematically modeling production relationships. An activity analysis model (AAM) can be defined as a set of mathematical formulations designed to mimic a technology set from the observed data of some real-world production process of interest. The best way to understand such modeling is to actually build a few AAMs.
There are two fundamental assumptions behind most AAMs. The first fundamental assumption we will always make for AAMs in this book is that all decision-making units (DMUs) have access to the same technology (which can be characterized by the technology set T that satisfies the main regularity axioms; see Chapter 1). This assumption is important to justify the estimation of one frontier from the full sample – often called the (observed) best-practice frontier for the population represented by that sample. Note that this assumption does not imply that all firms make the same use of this technology, nor does it imply that all firms use it to full capacity. On the contrary, it is allowed that, for various reasons, each particular firm may not be on the frontier. The reasons for “deviations” from the technology frontier are well explained by asymmetric-information and behavioral-economics theories and are documented in many empirical studies.
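To make the linear programming machinery concrete before the formal development, the following is a minimal sketch of an output-oriented, variable-returns-to-scale DEA program, solved here with SciPy's `linprog`. The function name `dea_output_vrs` and the toy data are our own illustrative choices, not a formal model from this chapter; for each DMU o, the program maximizes the output expansion factor θ over convex combinations (λ) of observed DMUs.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_vrs(X, Y):
    """Output-oriented, variable-returns-to-scale DEA efficiency scores.

    X: (n, m) observed inputs; Y: (n, s) observed outputs for n DMUs.
    Returns theta >= 1 per DMU; theta = 1 means the DMU is on the
    (observed) best-practice frontier.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision variables: z = [theta, lambda_1, ..., lambda_n]
        c = np.r_[-1.0, np.zeros(n)]            # maximize theta
        # input constraints: sum_j lambda_j * x_ij <= x_io
        A_in = np.hstack([np.zeros((m, 1)), X.T])
        # output constraints: theta * y_ro - sum_j lambda_j * y_rj <= 0
        A_out = np.hstack([Y[o][:, None], -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[X[o], np.zeros(s)]
        # VRS convexity constraint: sum_j lambda_j = 1
        A_eq = np.r_[0.0, np.ones(n)][None, :]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)
```

For example, with one input, one output and four DMUs `X = [[2],[4],[8],[4]]`, `Y = [[2],[4],[6],[3]]`, the first three DMUs span the VRS frontier (score 1), while the fourth could expand its output by a factor of 4/3.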
Cambridge University Press has published a number of successful books that focus on topics related to ours: Chambers (1988), Färe et al. (1994b), Chambers and Quiggin (2000), Kumbhakar and Lovell (2000), Ray (2004), Balk (2008), and Grifell-Tatjé and Lovell (2015). These books – and an increasing number of articles related to production analysis, published in top international journals in economics, econometrics, and operations research – suggest growing interest in the subject among academic and business audiences.
Our book is meant to complement and expand selected topics covered in the above-mentioned books, as well as the volume edited by Fried et al. (2008) and the edited volume by Grifell-Tatjé et al. (2018), and addresses issues germane to productivity analysis that should interest a broad audience. Our book provides something genuinely unique to the literature: a comprehensive textbook on the measurement of productivity and efficiency, with deep coverage of its theoretical underpinnings, its empirical implementation, and recent developments in the area. A distinctive feature of our book is that it presents a wide array of theoretical and empirical methods utilized by researchers and practitioners who study productivity issues. Our book is intended as a relatively self-contained textbook that can be used in any graduate course devoted to econometrics and production analysis. It should also be of use to upper-level undergraduate students in economics and production analysis, and to analysts in government and private business whose research or business decisions require reasoned analytical foundations and reliable, feasible empirical approaches to assessing the productivity and efficiency of their organizations and enterprises. We provide an integrated and synthesized treatment of the topics we cover: some topics are covered in greater depth, some at a broader scope, but always with the same theme of motivating the material with an applied orientation.
Our book is structured in such a way that it can be used as a textbook for (instructed or self-oriented) academics and business consultants in the area of quantitative analysis of productivity of economic systems (firms, industries, regions, countries, etc.). In addition, some parts of this book can be used for short, intensive courses or supplements to longer courses on productivity and other topics, such as empirical industrial organization.
The economic performance of firms and the economic growth of countries have never been more important than in this historical epoch, both in terms of how labor services are being replaced with capital services and artificial intelligence algorithms and how the divergence in compensation to labor and capital services appears to be increasing. The outcome of such a dynamic has not only economic implications but also serious political repercussions, especially in our increasingly globalized and interconnected world.
If effective public policies can be formulated to address the many aspects of economic growth that impact individual welfare, they will be based on the methods and analyses we have detailed in this textbook as well as in the further developments of these methods and analyses as the science of productivity and efficiency measurement continues to advance. Public policies that address income growth and income inequality will need to be economically as well as politically sustainable. This means that they will need to take advantage of market mechanisms and the compatible incentives that drive the functioning and operation of competitive markets. Decision-makers will need to recognize that, for a variety of reasons, firms may not be forced by market mechanisms to make decisions on economic allocations that are optimal relative to standard economic definitions of optimizing behaviors. Whether this is due to long-standing market failures, protected market niches, institutional or other external constraints is not as important as the need for optimizing behaviors to be tested as the alternative hypothesis, not stated as the null hypothesis when conducting economic research. Researchers and scholars will need to understand how the economic well-being of individuals and the wealth of nations evolve and devolve. Practitioners will need to be able to implement methods that allow them to construct the economic measures that tell them if there is an improvement in economic well-being. As with any meaningful empirical measurement of an important public phenomenon such as growth in per capita income levels or growth in a country's income and the distribution of this growth among economic agents, public resources will need to be brought to bear to make accurate measurement possible and transparent.
In this chapter we briefly discuss some of the issues that arise when using standard index numbers as input quantity or price measures, as well as particular data sets that can be used in productivity research. In regard to the latter, we focus first on the World KLEMS project data and recent studies using it. These studies are based on modern approaches to productivity measurement, largely neoclassical approaches that assume perfectly competitive markets and frontier behaviors by firms, industries, and countries. We discuss in our summary of these papers how concepts we have put forth in our book speak to the topics and approaches used in these studies and how, in many ways, their frameworks and methods are closely aligned with modeling approaches and scenarios we have discussed in our earlier chapters. We then provide a short description of many other public-use datasets and information on how to access them. Of course, it is important to have accessible and easy-to-use software to analyze such data using the methods we have discussed in this book. The software is detailed in the last section of this, our concluding chapter.
DATA MEASUREMENT ISSUES
The accurate modeling and measurement of the productivity growth determinants and their contributions in an aggregate economy, in its component industries, and in particular firms has advanced considerably since the Jorgenson and Griliches (1967) seminal treatise on the measurement problems inherent in assessing productivity growth. However, the problems that Jorgenson and Griliches pointed out over 50 years ago are still with us, as noted in Chapter 4. Although major improvements in data collection and methodology have been incorporated in government and private-sector data collection protocols through the efforts of Jorgenson and Griliches and their many collaborators and colleagues, variations in the quality of data still affect the measurement and analysis of productivity growth. Such issues tend not to be discussed in applied work. Griliches (1994) summarized the potential measurement issues pertaining to productivity analysis, listing the following general problems and questions:
1. Coverage issues, definition of the borders of a sector, and the relevant concept of “output” for it. For example, is illegal activity included? Are pollution damages counted against the “output” of an industry?
2. The difficulty in measuring “real” output over time as prices and the quality of output change;
3.
So far, we have focused on measuring the efficiency of an individual production or decision-making unit (firm, country, etc.) relative to a frontier consistent with the behavior of this unit. In practice, researchers are often also interested in measuring the efficiency of a group of similar units (an entire industry of firms, a region of countries) or of particular types of these units (e.g., public firms vs. private firms) within such groups. Even when the focus is on the efficiency of individual units, at the end of the day, researchers might want just one or a few aggregate numbers that summarize the results. This is especially important when the number of individual units is large and the individual results cannot all be published or easily comprehended. But how can we aggregate? Can we just take an average? Which one: arithmetic, geometric, harmonic? Should it be a weighted or an unweighted average? The goal of this chapter is to outline recently obtained and practically useful results from the literature that answer these imperative questions.
THE AGGREGATION PROBLEM
The problem of constructing a group measure or a group score from individual analogues is an aggregation question, which has recently been studied in a number of works. The most important question here is the choice of aggregation weights. To illustrate the point, consider a hypothetical example (adapted from Simar and Zelenyuk, 2007) of an industry consisting of four firms, two firms in each of two types, whose efficiency and “economic weight” (whatever that might be) are summarized in Table 5.1. Here, if a researcher were to use the simple (equally weighted) arithmetic average, then group A and group Z are, on average, equally efficient. Note, however, that the efficiency scores are “standardized” so that they lie between 0 and 1, and so they disregard the relative weights of the firms that attained these scores. If another researcher wanted to use a weighted arithmetic average, then a dramatically different conclusion might be reached – depending on the weighting scheme. For the example in Table 5.1, group A has a higher weighted-average efficiency than group Z, yet the industry average could still be closer to the score of group Z if its group weight dominates that of group A (e.g., if group Z's weight in the industry is 90 percent, as in the table).
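The reversal described above can be reproduced with purely hypothetical numbers (our own illustrative values, not those of Table 5.1): two groups whose simple averages tie, yet whose weighted averages, and the industry aggregate, tell a different story.

```python
import numpy as np

# Hypothetical efficiency scores and "economic weights" (illustrative only).
eff_A = np.array([0.9, 0.5]); w_A = np.array([0.8, 0.2])  # within-group weights
eff_Z = np.array([0.5, 0.9]); w_Z = np.array([0.8, 0.2])

simple_A, simple_Z = eff_A.mean(), eff_Z.mean()  # both 0.70: the groups look tied
wavg_A = w_A @ eff_A                             # 0.82: weight on the efficient firm
wavg_Z = w_Z @ eff_Z                             # 0.58: weight on the inefficient firm

# Industry aggregate when group Z carries 90 percent of the industry weight:
industry = 0.1 * wavg_A + 0.9 * wavg_Z           # 0.604, pulled toward group Z
```

The simple average hides both the within-group reversal (0.82 vs. 0.58) and the fact that the industry score is dominated by the heavier group.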
As the book has pointed out in its earliest chapters, economic theory provides the most reasoned, and often the most powerful and leveraged, guidance for econometric modeling of productivity. The primal and dual relationships that are specified and estimated by functional representations in the form of the production, cost, revenue, and profit functions derive their interpretability from the regularity conditions that were utilized in specifying the production sets, distance functions, and in deriving cost, revenue, and profit functions. These regularity conditions are often difficult to impose with many of the flexible parametric functional forms we discussed in Chapter 6 and may be even more difficult to impose when the functional relationships are specified nonparametrically using kernel smoothers or other classical nonparametric methods. In the production setting, monotonicity is often required, analogous to its requirement in models with rational preferences. Concavity of production functions has analogs in convex preferences and risk aversion in utility theory. Demand theory results in downward-sloping demand curves for normal goods (Matzkin, 1991; Lewbel, 2010; Blundell et al., 2012), while production theory and duality provide us with implications of profit-maximizing behavior that require profit functions to be concave in output prices. Cost minimization yields cost functions that are monotonically increasing and concave in input prices. Auction theory and optimal bidding strategies that vary across auction formats and bidders’ preferences are based on monotonicity in bidders’ valuations. Derivative pricing models are highly leveraged on convex function estimation (Broadie et al., 2000; Aït-Sahalia and Duarte, 2003; Yatchew and Härdle, 2006). Such considerations are ubiquitous in economics, and it is essential that we address them in the context of the topic that our book in part endeavors to address: empirical productivity analysis.
In this chapter, we discuss several methods to deal with estimation of the primal production function utilizing semi- and nonparametric econometric specifications under monotonicity and curvature constraints. General reviews of this material can be found in Matzkin (1994) and Yatchew (2003, Chapter 6). Work that speaks to relatively recent extensions can be found in Hall and Huang (2001), Groeneboom et al. (2001), Horowitz et al. (2004), Carroll et al. (2011), Shively et al. (2011), Blundell et al. (2012), and Pya and Wood (2015), among others.
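As a minimal illustration of estimation under a monotonicity constraint, the following sketch implements the classical pool-adjacent-violators algorithm (PAVA) for isotonic regression, one standard building block in this literature. The function name `pava` and the toy data are our own; in a production-function application, observations would first be sorted by the input level, and the fitted values then form a nondecreasing estimate of f(x).

```python
import numpy as np

def pava(y, w=None):
    """Pool-Adjacent-Violators: weighted least-squares fit of y that is
    nondecreasing in the given order.  Returns the fitted values."""
    y = np.asarray(y, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)
    vals, wts, cnts = [], [], []      # current blocks: value, weight, size
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        # pool adjacent blocks while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            merged_w = wts[-2] + wts[-1]
            merged_v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / merged_w
            merged_c = cnts[-2] + cnts[-1]
            vals[-2:] = [merged_v]; wts[-2:] = [merged_w]; cnts[-2:] = [merged_c]
    # expand each pooled block back to its original observations
    return np.concatenate([np.full(c, v) for v, c in zip(vals, cnts)])
```

For example, `pava([1.0, 3.0, 2.0, 4.0])` pools the violating pair (3, 2) into their average, yielding the monotone fit `[1.0, 2.5, 2.5, 4.0]`.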
In previous chapters we focused on measuring production efficiency in various ways. We now know that one should use the technical efficiency measure if one is concerned with how well the technology potential is used (yet, recall that one still needs to choose an appropriate orientation of measurement – input or output or a mix of these). Furthermore, we learned that one should use cost or revenue (or profit) efficiency if, in addition, one is interested in how well different inputs or outputs (or both) are chosen or allocated with respect to the corresponding prices. The goal of this chapter is to discuss a closely related and, in fact, more general concept – the concept of productivity.
A roadmap for this chapter is useful. We will start by clarifying the differences and relationships between the two main themes of our book: efficiency, which we explored in detail in previous chapters, and productivity, which we will focus on in this chapter. We then consider different approaches to productivity measurement. We will start with the classical growth accounting approach and then move on to the economic approach using index numbers, where we will first consider price indexes, then quantity indexes and then productivity indexes. We also examine some of their decompositions and the relationships among them. After considering a wide range of approaches within the economic approach to index numbers, we will then show that the growth accounting approach can be considered as a restrictive special case. We will finish the chapter with a discussion of transitivity (or circularity) of indexes, what it means in general and for indexes in particular, and how desirable or critical and restrictive this particular property is for an economic index number. We then discuss the sacrifices one must make in order to preserve transitivity and how to mitigate problems with the index number approach when transitivity is not imposed. We conclude with brief remarks on the literature, which will be further discussed in Chapter 7.
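As a small preview of the index-number machinery mentioned in the roadmap, here is a sketch of the standard Laspeyres, Paasche, and Fisher quantity indexes; the function name and toy data are our own illustrative choices. A Fisher-type productivity index can then be formed as the ratio of an output quantity index to an input quantity index.

```python
import numpy as np

def fisher_quantity_index(p0, q0, p1, q1):
    """Fisher ideal quantity index between period 0 and period 1.

    p0, q0: base-period prices and quantities; p1, q1: current-period.
    The Fisher index is the geometric mean of the Laspeyres index
    (base-period prices) and the Paasche index (current-period prices).
    """
    p0, q0, p1, q1 = (np.asarray(a, float) for a in (p0, q0, p1, q1))
    laspeyres = (p0 @ q1) / (p0 @ q0)   # quantities valued at base prices
    paasche = (p1 @ q1) / (p1 @ q0)     # quantities valued at current prices
    return np.sqrt(laspeyres * paasche)
```

For instance, with two outputs, base prices `[1, 2]` and quantities `[10, 5]`, and current prices `[1.5, 2]` and quantities `[12, 6]`, both component indexes equal 1.2, so the Fisher index also indicates 20 percent output growth.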
PRODUCTIVITY VS. EFFICIENCY
While much has been done on efficiency measurement in production, it remains a relatively modern area of economics; long before its academic origins, people already used, and still use, the notion of productivity.