Protecting frontline healthcare workers with personal protective equipment (PPE) is critical during the COVID-19 pandemic. Through an online survey, we demonstrated variable adherence to the Centers for Disease Control and Prevention’s (CDC) PPE guidelines among health care personnel (HCP).
CDC guidelines for optimal and acceptable PPE usage in common situations faced by frontline healthcare workers were referenced to create a short online survey. The survey was distributed to national, statewide, and local professional organizations across the United States and to HCP using a snowball sampling technique. Responses were collected between June 15 and July 17, 2020.
In total, 2,245 responses were received from doctors, nurses, midwives, paramedics, and medical technicians in 44 states. The quantitative analysis includes the eight states with more than 20 responses each (Arizona, California, Colorado, Louisiana, Oregon, South Carolina, Texas, and Washington), for a total of 436 responses. Adherence to CDC guidelines was highest in the scenario of patient contact when COVID-19 was not suspected (86.47%) and lowest when carrying out aerosol-generating procedures (AGPs) (42.47%).
Further research is urgently needed to identify the reasons underlying variability between professions and regions to pinpoint strategies for maximizing adherence and improving the safety of HCPs.
The updated Common Rule for human subjects research requires that consents “begin with a ‘concise and focused’ presentation of the key information that will most likely help someone make a decision about whether to participate in a study” (Menikoff, Kaneshiro, Pritchard. The New England Journal of Medicine. 2017; 376(7): 613–615). We utilized a community-engaged technology development approach to inform feature options within the REDCap software platform centered on the collection and storage of electronic consent (eConsent), addressing issues of transparency, clinical trial efficiency, and regulatory compliance for informed consent (Harris, et al. Journal of Biomedical Informatics. 2009; 42(2): 377–381). eConsent may also improve recruitment and retention in clinical research studies by addressing: (1) barriers to accessing rural populations, by facilitating remote consent; and (2) cultural and literacy barriers, by including optional explanatory material (e.g., defining terms by hovering over them with the cursor) or the choice of displaying different videos/images based on a participant’s race, ethnicity, or educational level (Phillippi, et al. Journal of Obstetric, Gynecologic, & Neonatal Nursing. 2018; 47(4): 529–534).
We developed and pilot tested our eConsent framework to provide a personalized consent experience whereby users are guided through a consent document that utilizes avatars, contextual glossary information supplements, and videos to facilitate communication of information.
The eConsent framework includes a portfolio of eight features, reviewed by community stakeholders, and tested at two academic medical centers.
Early adoption and utilization of this eConsent framework have demonstrated acceptability. Next steps will emphasize testing efficacy of features to improve participant engagement with the consent process.
Sharing information between different countries is key for developing sustainable solutions to environmental change. Coastal wetlands in the Gulf of Mexico are suffering significant environmental and human-related threats. Working across national boundaries, this research project brings together scientists, specialists, and local communities from Cuba and the USA. While important advances have been made in strengthening collaborations, notable obstacles remain in terms of international policy constraints, differing institutional and academic cultures, and technology. Overcoming these limitations is essential to formulating a comprehensive understanding of the challenges that coastal socioecological systems are facing now and into the future.
Between 2010 and 2019 the international health care organization Partners In Health (PIH) and its sister organization Zanmi Lasante (ZL) mounted a long-term response to the 2010 Haiti earthquake, focused on mental health. Over that time, implementing a Theory of Change developed in 2012, the organization successfully developed a comprehensive, sustained community mental health system in Haiti's Central Plateau and Artibonite departments, directly serving a catchment area of 1.5 million people through multiple diagnosis-specific care pathways. The resulting ZL mental health system delivered 28,184 patient visits and served 6,305 discrete patients at ZL facilities between January 2016 and September 2019. The experience of developing a system of mental health services in Haiti that currently provides ongoing care to thousands of people serves as a case study in major challenges involved in global mental health delivery. The essential components of the effort to develop and sustain this community mental health system are summarized.
One of the most popular approaches to the theoretical measurement and empirical estimation of the efficiency of various economic systems is known as Data Envelopment Analysis, abbreviated as DEA. This approach is rooted in and cohesive with theoretical economic modeling via the so-called Activity Analysis Models and is implemented via the powerful linear programming approach.
In this chapter, we consider a variety of models that can be used to estimate particular types of technologies: constant, nonincreasing, and variable returns to scale, as well as convex and non-convex technologies. This chapter does not exhaust everything that has been suggested in the literature – fulfilling such a task would be practically impossible in one chapter. The goal is more modest, yet practically valuable: we focus on the most popular methods and consider their “step-by-step construction,” intuition, some of their most important properties, and some interesting variations and modifications. We pay attention to aspects that we consider very useful for readers to advance in their own research and, possibly, advance the frontier of the research.
INTRODUCTION TO ACTIVITY ANALYSIS MODELING
An economist's approach to thinking about nonparametric efficiency measurement can be viewed through the so-called Activity Analysis Models – a way of mathematically modeling production relationships. An activity analysis model (AAM) can be defined as a set of mathematical formulations designed to mimic a technology set from the observed data of some real-world production process of interest. The best way to understand such modeling is to actually build a few AAMs.
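By way of illustration (the notation here is ours and may differ from the chapter's), the canonical AAM is the variable returns to scale (VRS) estimator of the technology set:

```latex
\[
\widehat{T}_{\mathrm{VRS}}
  = \Bigl\{ (x, y) \in \mathbb{R}^{m}_{+} \times \mathbb{R}^{s}_{+} :
      x \ge \sum_{k=1}^{K} \lambda_k x^k,\;
      y \le \sum_{k=1}^{K} \lambda_k y^k,\;
      \sum_{k=1}^{K} \lambda_k = 1,\;
      \lambda_k \ge 0,\; k = 1, \dots, K \Bigr\},
\]
```

where (x^k, y^k), k = 1, ..., K, are the observed input–output bundles of the K DMUs. Dropping the constraint that the intensity variables sum to one yields the constant returns to scale (CRS) version, while relaxing it to a sum of at most one yields the nonincreasing returns to scale (NIRS) version.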
There are two fundamental assumptions behind most AAMs. The first fundamental assumption we will always make for AAMs in this book is that all decision-making units (DMUs) have access to the same technology (which can be characterized by the technology set T that satisfies the main regularity axioms; see Chapter 1). This assumption is important for justifying the estimation of one frontier from the full sample – often called the (observed) best-practice frontier for the population represented by that sample. Note that, although all firms are assumed to have access to the same technology, this does not imply that all firms exploit it equally well, nor that every firm uses it to full capacity. On the contrary, it is allowed that, for various reasons, each particular firm may not be on the frontier. The reasons for “deviations” from the technology frontier are well explained by asymmetric information and behavioral economics theories and are documented in many empirical studies.
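To make the “step-by-step construction” concrete, here is a minimal sketch, in Python, of the input-oriented Farrell efficiency measure computed against the AAM frontier above via off-the-shelf linear programming. This is our own illustration, not code from the book; the function name `dea_input_efficiency` and the data layout (one row per DMU) are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, vrs=False):
    """Input-oriented DEA efficiency scores.

    X : (K, m) array of inputs, Y : (K, s) array of outputs,
    one row per decision-making unit (DMU).
    Returns theta in (0, 1], where 1 indicates a frontier DMU.
    """
    K, m = X.shape
    s = Y.shape[1]
    scores = np.empty(K)
    for o in range(K):
        # Decision vector z = [theta, lambda_1, ..., lambda_K];
        # minimize theta.
        c = np.r_[1.0, np.zeros(K)]
        # Inputs:  sum_k lambda_k * x_k <= theta * x_o
        A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
        # Outputs: sum_k lambda_k * y_k >= y_o
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        A_eq, b_eq = None, None
        if vrs:  # VRS: intensity variables sum to one
            A_eq = np.r_[0.0, np.ones(K)].reshape(1, -1)
            b_eq = [1.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(None, None)] + [(0.0, None)] * K)
        scores[o] = res.fun
    return scores

# Example (hypothetical data): three DMUs, one input, one output.
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[1.0], [3.0], [4.0]])
print(dea_input_efficiency(X, Y))  # approx. [0.667, 1.0, 0.667] under CRS
```

Each DMU's score is obtained from its own linear program, so the computation scales linearly in the number of DMUs; setting `vrs=True` adds the convexity constraint that distinguishes the VRS model from the CRS one.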
Cambridge University Press has published a number of successful books on topics related to ours: Chambers (1988), Färe et al. (1994b), Chambers and Quiggin (2000), Kumbhakar and Lovell (2000), Ray (2004), Balk (2008), and Grifell-Tatjé and Lovell (2015). These books – and an increasing number of articles related to production analysis published in top international journals in economics, econometrics, and operations research – suggest growing interest in the subject among academic and business audiences.
Our book is meant to complement and expand selected topics covered in the above-mentioned books, as well as the volumes edited by Fried et al. (2008) and Grifell-Tatjé et al. (2018), and addresses issues germane to productivity analysis that will interest a broad audience. Our book provides something genuinely unique to the literature: a comprehensive textbook on the measurement of productivity and efficiency, with deep coverage of both its theoretical underpinnings and its empirical implementation, together with coverage of recent developments in the area. A distinctive feature of our book is that it presents a wide array of theoretical and empirical methods utilized by researchers and practitioners who study productivity issues. Our book is intended to be a relatively self-contained textbook suitable for any graduate course devoted to econometrics and production analysis; for upper-level undergraduate students in economics and production analysis; and for analysts in government and private business whose research or business decisions require reasoned analytical foundations and reliable, feasible empirical approaches to assessing the productivity and efficiency of their organizations and enterprises. We provide an integrated and synthesized treatment of the topics we cover. We have covered some topics in greater depth and some at a broader scope, but at all times with the same theme of motivating the material with an applied orientation.
Our book is structured so that it can be used as a textbook, for either instructor-led or self-directed study, by academics and business consultants in the area of quantitative analysis of the productivity of economic systems (firms, industries, regions, countries, etc.). In addition, some parts of this book can be used for short, intensive courses or as supplements to longer courses on productivity and other topics, such as empirical industrial organization.
The economic performance of firms and the economic growth of countries have never been more important than in this historical epoch, both in terms of how labor services are being replaced with capital services and artificial intelligence algorithms and how the divergence in compensation to labor and capital services appears to be increasing. The outcome of such a dynamic has not only economic implications but also serious political repercussions, especially in our increasingly globalized and interconnected world.
If effective public policies can be formulated to address the many aspects of economic growth that impact individual welfare, they will be based on the methods and analyses we have detailed in this textbook, as well as on further developments of these methods and analyses as the science of productivity and efficiency measurement continues to advance. Public policies that address income growth and income inequality will need to be economically as well as politically sustainable. This means that they will need to take advantage of market mechanisms and the compatible incentives that drive the functioning and operation of competitive markets. Decision-makers will need to recognize that, for a variety of reasons, firms may not be forced by market mechanisms to make decisions on economic allocations that are optimal relative to standard economic definitions of optimizing behaviors. Whether this is due to long-standing market failures, protected market niches, or institutional or other external constraints is not as important as the need for optimizing behaviors to be tested as the alternative hypothesis, not stated as the null hypothesis, when conducting economic research. Researchers and scholars will need to understand how the economic well-being of individuals and the wealth of nations evolve and devolve. Practitioners will need to be able to implement methods that allow them to construct the economic measures that tell them whether there is an improvement in economic well-being. As with any meaningful empirical measurement of an important public phenomenon, such as growth in per capita income levels or growth in a country's income and the distribution of this growth among economic agents, public resources will need to be brought to bear to make accurate measurement possible and transparent.
In this chapter we briefly discuss some of the issues that arise when using standard index numbers as input quantity or price measures, as well as particular data sets that can be used in productivity research. In regard to the latter, we focus first on the World KLEMS project data and recent studies using it. These studies are based on modern approaches to productivity measurement, using largely neoclassical approaches that assume perfectly competitive markets and frontier behaviors by firms, industries, and countries. We discuss in our summary of these papers how concepts we have put forth in our book speak to the topics and approaches used in these studies and how, in many ways, their frameworks and methods are closely aligned with the modeling approaches and scenarios we have discussed in earlier chapters. We then provide a short description of many other public-use datasets and information on how to access them. Of course, it is important to have accessible and easy-to-use software for analyzing such data with the methods we have discussed in this book. The software is detailed in the last section of this, our concluding chapter.
DATA MEASUREMENT ISSUES
The accurate modeling and measurement of productivity growth determinants and their contributions in an aggregate economy, in its component industries, and in particular firms has advanced considerably since the Jorgenson and Griliches (1967) seminal treatise on the measurement problems inherent in assessing productivity growth. However, the problems that Jorgenson and Griliches pointed out over 50 years ago are still with us, as noted in Chapter 4. Although major improvements in data collection and methodology have been incorporated in government and private-sector data collection protocols through the efforts of Jorgenson and Griliches and their many collaborators and colleagues, variations in the quality of data still affect the measurement and analysis of productivity growth. Such issues tend not to be discussed in applied work. Griliches (1994) summarized the potential measurement issues pertaining to productivity analysis, listing the following general problems and questions:
1. Coverage issues, the definition of the borders of a sector, and the relevant concept of “output” for it. For example, is illegal activity included? Are pollution damages counted against the “output” of an industry?
2. The difficulty in measuring “real” output over time as prices and the quality of output change.
3.
So far, we have focused on measuring the efficiency of an individual production or decision-making unit (a firm, a country, etc.) relative to a frontier consistent with the behavior of this unit. In practice, researchers are often also interested in measuring the efficiency of a group of similar units (an entire industry of firms, a region of countries) or of particular types of units (e.g., public firms vs. private firms) within such groups. Even when the focus is on the efficiency of individual units, at the end of the day researchers might want to have just one or a few aggregate numbers that summarize the results. This is especially important when the number of individual units is large and each individual result cannot be published or easily comprehended. But how can we aggregate? Can we just take an average? Which one: arithmetic, geometric, harmonic? Should it be a weighted or an unweighted average? The goal of this chapter is to outline recently obtained and practically useful results that answer these imperative questions.
THE AGGREGATION PROBLEM
The problem of constructing a group measure or a group score from individual analogues is an aggregation question, which has recently been studied in a number of works. The most important question here is the choice of aggregation weights. To illustrate the point, consider a hypothetical example (adapted from Simar and Zelenyuk, 2007) of an industry consisting of four firms, two in each of two groups, whose efficiency scores and “economic weights” (whatever those might be) are summarized in Table 5.1. Here, if a researcher were to use the simple (equally weighted) arithmetic average, then group A and group Z are, on average, equally efficient. Note, however, that the efficiency scores are “standardized” so that they lie between 0 and 1, and so they disregard the relative weights of the firms that attained them. If another researcher used a weighted arithmetic average, then a dramatically different conclusion might be reached, depending on the weighting scheme. For the example in Table 5.1, group A has a higher weighted average efficiency than group Z, yet the industry average could still be closer to the score of group Z if its group weight dominates that of group A (e.g., if its weight in the industry is 90 percent, as in the table).
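As a quick numerical illustration of why the choice of weights matters, consider the following sketch. Table 5.1 itself is not reproduced here, so the scores and weights below are hypothetical numbers constructed only to mimic the pattern described: equal simple averages across groups, with 90 percent of the industry weight on group Z.

```python
import numpy as np

# Hypothetical scores and weights in the spirit of Table 5.1:
# two groups of two firms, efficiency scores in (0, 1], and
# "economic weights" (e.g., output shares) summing to one.
eff     = np.array([0.9, 0.5, 0.6, 0.8])     # firms A1, A2, Z1, Z2
weights = np.array([0.08, 0.02, 0.45, 0.45]) # group A holds 10% of weight
group   = np.array(["A", "A", "Z", "Z"])

for g in ["A", "Z"]:
    m = group == g
    simple   = eff[m].mean()                       # equally weighted
    weighted = np.average(eff[m], weights=weights[m])
    print(g, round(simple, 3), round(weighted, 3))
# A: simple 0.7, weighted 0.82   Z: simple 0.7, weighted 0.7

# The industry-level weighted mean is pulled toward group Z,
# whose firms carry 90 percent of the industry weight.
print("industry simple:  ", eff.mean())                      # 0.7
print("industry weighted:", np.average(eff, weights=weights))  # 0.712
```

With these numbers, the two groups look identical under the simple average, group A looks clearly better under the within-group weighted average, and the industry aggregate nevertheless sits near group Z's score, exactly the kind of weight-driven reversal the example describes.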
As the book has pointed out in its earliest chapters, economic theory provides the most reasoned, and often the most powerful and leveraged, guidance for econometric modeling of productivity. The primal and dual relationships that are specified and estimated by functional representations in the form of the production, cost, revenue, and profit functions derive their interpretability from the regularity conditions that were utilized in specifying the production sets and distance functions and in deriving the cost, revenue, and profit functions. These regularity conditions are often difficult to impose with many of the flexible parametric functional forms we discussed in Chapter 6 and may be even more difficult to impose when the functional relationships are specified nonparametrically using kernel smoothers or other classical nonparametric methods. In the production setting, monotonicity is often required, analogous to its requirement in models with rational preferences. Concavity of production functions has analogs in convex preferences and risk aversion in utility theory. Demand theory results in downward-sloping demand curves for normal goods (Matzkin, 1991; Lewbel, 2010; Blundell et al., 2012), while production theory and duality provide us with implications of profit-maximizing behavior that require profit functions to be convex in output prices. Cost minimization yields cost functions that are monotonically increasing and concave in input prices. Auction theory and optimal bidding strategies that vary across auction formats and bidders’ preferences are based on monotonicity in bidders’ valuations. Derivative pricing models are highly leveraged on convex function estimation (Broadie et al., 2000; Aït-Sahalia and Duarte, 2003; Yatchew and Härdle, 2006). Such considerations are ubiquitous in economics, and it is essential that we address them in the context of the topic that our book in part endeavors to address: empirical productivity analysis.
In this chapter, we discuss several methods to deal with estimation of the primal production function utilizing semi- and nonparametric econometric specifications under monotonicity and curvature constraints. General reviews of this material can be found in Matzkin (1994) and Yatchew (2003, Chapter 6). Work that speaks to relatively recent extensions can be found in Hall and Huang (2001), Groeneboom et al. (2001), Horowitz et al. (2004), Carroll et al. (2011), Shively et al. (2011), Blundell et al. (2012), and Pya and Wood (2015), among others.
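As a small, self-contained illustration of the flavor of these methods (our own sketch, not a routine from the works just cited), the following imposes monotonicity on a single-input production relationship via isotonic regression. The simulated data and all names are assumptions, and this particular method enforces monotonicity only, not the smoothness or concavity that the semi- and nonparametric approaches discussed in this chapter can also handle.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Simulated single-input production data: a monotone "true"
# technology plus noise (purely illustrative).
x = np.sort(rng.uniform(1.0, 10.0, 200))
y = np.log(x) + rng.normal(scale=0.15, size=x.size)

# Monotonicity-constrained fit: the estimate is the closest (in least
# squares) nondecreasing function to the data, so the economic
# regularity condition holds by construction rather than by luck.
iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
y_hat = iso.fit_transform(x, y)

# Unlike an unconstrained smoother, which may violate monotonicity in
# sparse regions of the data, the fitted values here cannot decrease.
print(np.all(np.diff(y_hat) >= 0))  # True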