To determine the extent to which appropriate personal protective equipment (PPE), per CDC guidance, was used during the COVID-19 pandemic by health care personnel (HCP) in Louisiana across five clinical settings.
An online questionnaire was distributed to the LA nursing registry. Appropriate use of PPE in each of the five clinical scenarios was defined by the authors based on CDC guidelines. The scenarios ranged from communal hospital space to carrying out aerosol-generating procedures (AGPs). In total, 1,760 HCP participated between June and July 2020.
Average adherence in LA was lowest for the AGP scenario (39.5% compliance) and highest for the scenario of patient contact when COVID-19 was not suspected (82.8% compliance). Adherence varied widely among parishes. Free-text commentary strongly suggested a shortage of PPE supply and the practice of re-using PPE.
Use of appropriate PPE varied by setting. It was higher in scenarios where only face masks (or respirators) were the standard (i.e., communal hospital space or when COVID-19 was not suspected) and lower in scenarios where additional PPE (e.g., gloves, eye protection, and an isolation gown) was required.
As HCP are at the forefront of efforts to contain the coronavirus, the factors underlying variable adherence to CDC protocols in LA need to be further analyzed and addressed.
To probe the limits of attention raising through form-focused instruction, second-language research must adapt to the needs of a technologically driven learning environment. In this study, we used a randomized controlled design to investigate the effect of captioned media on the learning of vocabulary and grammar in L2 Spanish (n = 369 learners). Across four data-collection sessions, participants were presented with a grammar-lesson video and a multimodal video in one of three captioning formats: textually enhanced target vocabulary, textually enhanced target grammar, or no captioning. Results show strong immediate effects of captioning on target vocabulary, with additional effects of captioning on some, but not all, target-grammar structures. The findings demonstrate that (a) some grammatical structures are more amenable to learning through captioning than others, and (b) there is space for future investigation into the factors that may influence the effectiveness of multimodal interventions, such as prior knowledge or frequency of use.
Protecting frontline health care workers with personal protective equipment (PPE) is critical during the coronavirus disease (COVID-19) pandemic. Through an online survey, we demonstrated variable adherence to the Centers for Disease Control and Prevention (CDC) PPE guidelines among health care personnel (HCP).
CDC guidelines for optimal and acceptable PPE usage in common situations faced by frontline health care workers were referenced to create a short online survey. The survey was distributed to national, statewide, and local professional organizations across the United States and to HCP, using a snowball sampling technique. Responses were collected between June 15 and July 17, 2020.
Responses totaling 2245 were received from doctors, nurses, midwives, paramedics, and medical technicians in 44 states. Eight states with n > 20 (Arizona, California, Colorado, Louisiana, Oregon, South Carolina, Texas, and Washington), with a total of 436 responses, are included in the quantitative analysis. Adherence to CDC guidelines was highest in the scenario of patient contact when COVID-19 was not suspected (86.47%) and lowest when carrying out aerosol-generating procedures (AGPs) (42.47%).
Further research is urgently needed to identify the reasons underlying variability between professions and regions to pinpoint strategies for maximizing adherence and improving the safety of HCPs.
The updated Common Rule for human subjects research requires that consents “begin with a ‘concise and focused’ presentation of the key information that will most likely help someone make a decision about whether to participate in a study” (Menikoff, Kaneshiro, Pritchard. The New England Journal of Medicine. 2017; 376(7): 613–615). We utilized a community-engaged technology development approach to inform feature options within the REDCap software platform centered around the collection and storage of electronic consent (eConsent), to address issues of transparency, clinical trial efficiency, and regulatory compliance for informed consent (Harris, et al. Journal of Biomedical Informatics. 2009; 42(2): 377–381). eConsent may also improve recruitment and retention in clinical research studies by addressing: (1) barriers to accessing rural populations, by facilitating remote consent, and (2) cultural and literacy barriers, by including optional explanatory material (e.g., defining terms by hovering over them with the cursor) or the choice of displaying different videos/images based on a participant’s race, ethnicity, or educational level (Phillippi, et al. Journal of Obstetric, Gynecologic, & Neonatal Nursing. 2018; 47(4): 529–534).
We developed and pilot tested our eConsent framework to provide a personalized consent experience whereby users are guided through a consent document that utilizes avatars, contextual glossary information supplements, and videos, to facilitate communication of information.
The eConsent framework includes a portfolio of eight features, reviewed by community stakeholders, and tested at two academic medical centers.
Early adoption and utilization of this eConsent framework have demonstrated acceptability. Next steps will emphasize testing efficacy of features to improve participant engagement with the consent process.
Sharing information between different countries is key for developing sustainable solutions to environmental change. Coastal wetlands in the Gulf of Mexico are suffering significant environmental and human-related threats. Working across national boundaries, this research project brings together scientists, specialists and local communities from Cuba and the USA. While important advances have been made in strengthening collaborations, important obstacles remain in terms of international policy constraints, different institutional and academic cultures and technology. Overcoming these limitations is essential to formulating a comprehensive understanding of the challenges that coastal socioecological systems are facing now and into the future.
Between 2010 and 2019 the international health care organization Partners In Health (PIH) and its sister organization Zanmi Lasante (ZL) mounted a long-term response to the 2010 Haiti earthquake, focused on mental health. Over that time, implementing a Theory of Change developed in 2012, the organization successfully developed a comprehensive, sustained community mental health system in Haiti's Central Plateau and Artibonite departments, directly serving a catchment area of 1.5 million people through multiple diagnosis-specific care pathways. The resulting ZL mental health system delivered 28 184 patient visits and served 6305 discrete patients at ZL facilities between January 2016 and September 2019. The experience of developing a system of mental health services in Haiti that currently provides ongoing care to thousands of people serves as a case study in major challenges involved in global mental health delivery. The essential components of the effort to develop and sustain this community mental health system are summarized.
One of the most popular approaches in the theoretical measurement and empirical estimation of the efficiency of various economic systems is known as Data Envelopment Analysis, abbreviated as DEA. This approach is rooted in and cohesive with theoretical economic modeling via the so-called Activity Analysis Models and is estimated via the powerful linear programming approach.
In this chapter, we consider a variety of models that can be used to estimate particular types of technologies: constant, nonincreasing and variable returns to scale, convex and non-convex technologies. This chapter does not exhaust everything that has been suggested in the literature – fulfilling such a task would be practically impossible in one chapter. The goal is more modest, yet practically valuable: we focus on the most popular methods and consider their “step-by-step construction,” intuition, some of the most important properties, some interesting variations and modifications, etc. We pay attention to aspects that we consider very useful for a reader to advance in his/her own research and, possibly, advance the frontier of the research.
INTRODUCTION TO ACTIVITY ANALYSIS MODELING
An economist's approach to thinking about nonparametric efficiency measurement can be viewed through the so-called Activity Analysis Models – a way of mathematically modeling production relationships. An activity analysis model (AAM) can be defined as a set of mathematical formulations designed to mimic a technology set from the observed data of some real-world production process of interest. The best way to understand such modeling is to actually build a few AAMs.
There are two fundamental assumptions behind most AAMs. The first fundamental assumption we will always make for AAMs in this book is that all decision making units (DMUs) have access to the same technology (which can be characterized by the technology set T that satisfies the main regularity axioms; see Chapter 1). This assumption is important to justify the estimation of one frontier from the full sample – often called the (observed) best practice frontier for the population represented by that sample. Note that this assumption does not imply that all firms have the same access, nor does it imply that all firms use this technology to full capacity. On the contrary, it is allowed that, for various reasons, each particular firm may not be on the frontier. The reasons for “deviations” from the technology frontier are well-explained by asymmetric information and behavioral economics theories and are documented in many empirical studies.
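The linear-programming construction behind such activity analysis models can be sketched in code. The following is a minimal illustration of our own (not the book's notation or any accompanying software): an input-oriented DEA efficiency score computed with SciPy's `linprog`, with an optional convexity constraint that switches from constant to variable returns to scale. The function name and the toy data are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, o, vrs=False):
    """Input-oriented DEA efficiency of DMU o.

    X: (n, m) array of inputs, Y: (n, s) array of outputs; rows are DMUs.
    Solves: min theta  s.t.  X'lam <= theta * x_o,  Y'lam >= y_o,  lam >= 0,
    and additionally sum(lam) = 1 under variable returns to scale (vrs=True).
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector z = [theta, lam_1, ..., lam_n]; minimize theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    # Input constraints:  sum_j lam_j * x_{j,i} - theta * x_{o,i} <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lam_j * y_{j,r} <= -y_{o,r}
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([b_in, b_out])
    A_eq = b_eq = None
    if vrs:  # convexity constraint sum(lam) = 1 for variable returns to scale
        A_eq = np.concatenate([[0.0], np.ones(n)]).reshape(1, -1)
        b_eq = [1.0]
    bounds = [(None, None)] + [(0, None)] * n  # theta free, lam >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.fun

# Three DMUs, one input and one output each (illustrative numbers).
X = np.array([[2.0], [4.0], [6.0]])
Y = np.array([[2.0], [3.0], [6.0]])
# Under constant returns to scale, DMU 1 produces 3 units of output from
# 4 units of input, while the best ratio in the sample is 1, so its
# efficiency score is 0.75; DMUs 0 and 2 are on the frontier with score 1.
theta_crs = dea_input_efficiency(X, Y, 1)          # 0.75
theta_vrs = dea_input_efficiency(X, Y, 1, vrs=True)
```

Note how the same frontier is estimated from the full sample, consistent with the assumption above that all DMUs have access to the same technology: every unit's score is measured against envelopment by all observed units.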
Cambridge University Press has published a number of successful books that focus on topics related to ours: Chambers (1988), Färe et al. (1994b), Chambers and Quiggin (2000), Kumbhakar and Lovell (2000), Ray (2004), Balk (2008), and Grifell-Tatjé and Lovell (2015). These books – and an increasing number of articles related to production analysis, published in top international journals in economics, econometrics, and operations research – suggest a growing interest in the subject among academic and business audiences.
Our book is meant to complement and expand selected topics covered in the above-mentioned books, as well as the volume edited by Fried et al. (2008) and the edited volume by Grifell-Tatjé et al. (2018), and addresses issues germane to productivity analysis that would be of interest to a broad audience. Our book provides something genuinely unique to the literature: a comprehensive textbook on the measurement of productivity and efficiency, with deep coverage of both its theoretical underpinnings and its empirical implementation, together with coverage of recent developments in the area. A distinctive feature of our book is that it presents a wide array of theoretical and empirical methods utilized by researchers and practitioners who study productivity issues. Our book is intended to be a relatively self-contained textbook that can be used in any graduate course devoted to econometrics and production analysis. It will also be of use to upper-level undergraduate students in economics and production analysis, and to analysts in government and private business whose research or business decisions require reasoned analytical foundations and reliable, feasible empirical approaches to assessing the productivity and efficiency of their organizations and enterprises. We provide an integrated and synthesized treatment of the topics we cover. We have covered some topics in greater depth and some at a broader scope, but at all times with the same theme of motivating the material with an applied orientation.
Our book is structured in such a way that it can be used as a textbook for (instructed or self-oriented) academics and business consultants in the area of quantitative analysis of productivity of economic systems (firms, industries, regions, countries, etc.). In addition, some parts of this book can be used for short, intensive courses or supplements to longer courses on productivity and other topics, such as empirical industrial organization.
The economic performance of firms and the economic growth of countries have never been more important than in this historical epoch, both in terms of how labor services are being replaced with capital services and artificial intelligence algorithms and how the divergence in compensation to labor and capital services appears to be increasing. The outcome of such a dynamic has not only economic implications but also serious political repercussions, especially in our increasingly globalized and interconnected world.
If effective public policies can be formulated to address the many aspects of economic growth that impact individual welfare, they will be based on the methods and analyses we have detailed in this textbook as well as in the further developments of these methods and analyses as the science of productivity and efficiency measurement continues to advance. Public policies that address income growth and income inequality will need to be economically as well as politically sustainable. This means that they will need to take advantage of market mechanisms and the compatible incentives that drive the functioning and operation of competitive markets. Decision-makers will need to recognize that, for a variety of reasons, firms may not be forced by market mechanisms to make decisions on economic allocations that are optimal relative to standard economic definitions of optimizing behaviors. Whether this is due to long-standing market failures, protected market niches, institutional or other external constraints is not as important as the need for optimizing behaviors to be tested as the alternative hypothesis, not stated as the null hypothesis when conducting economic research. Researchers and scholars will need to understand how the economic well-being of individuals and the wealth of nations evolve and devolve. Practitioners will need to be able to implement methods that allow them to construct the economic measures that tell them if there is an improvement in economic well-being. As with any meaningful empirical measurement of an important public phenomenon such as growth in per capita income levels or growth in a country's income and the distribution of this growth among economic agents, public resources will need to be brought to bear to make accurate measurement possible and transparent.
In this chapter we briefly discuss some of the issues that arise when using standard index numbers as input quantity or price measures, as well as particular data sets that can be used in productivity research. In regard to the latter, we focus first on the World KLEMS project data and recent studies using it. These studies are based on modern approaches to productivity measurement, using largely neoclassical approaches that assume perfectly competitive markets and frontier behaviors by firms, industries, and countries. We discuss in our summary of these papers how concepts we have put forth in our book speak to the topics and approaches used in these studies and how, in many ways, their frameworks and methods are closely aligned with modeling approaches and scenarios we have discussed in our earlier chapters. We then provide a short description of many other public-use datasets and information on how to access them. Of course, it is important to have accessible and easy-to-use software to analyze such data using the methods we have discussed in this book. The software is detailed in the last section of this, our concluding chapter.
DATA MEASUREMENT ISSUES
The accurate modeling and measurement of the productivity growth determinants and their contributions in an aggregate economy, in its component industries, and in particular firms, has advanced considerably since the Jorgenson and Griliches (1967) seminal treatise on the measurement problems inherent in assessing productivity growth. However, the problems that Jorgenson and Griliches pointed out over 50 years ago are still with us, as noted in Chapter 4. Although major improvements in data collection and methodology have been incorporated in government and private-sector data collection protocols through the efforts of Jorgenson and Griliches and their many collaborators and colleagues, variations in the quality of data still affect the measurement and analysis of productivity growth. Such issues tend not to be discussed in applied work. Griliches (1994) summarized the potential measurement issues pertaining to productivity analysis, listing the following general problems and questions: 1. Coverage issues, definition of the borders of a sector, and the relevant concept of “output” for it. For example, is illegal activity included? Are pollution damages counted against the “output” of an industry?; 2. The difficulty in measuring “real” output over time as prices and the quality of output change; 3.
So far, we have focused on measuring the efficiency of an individual production or decision-making unit (firm, country, etc.) relative to a frontier consistent with the behavior of this unit. In practice, researchers are often also interested in measuring the efficiency of a group of similar units (an entire industry of firms, a region of countries) or of particular types of units (e.g., public firms vs. private firms) within such groups. Even when the focus is on the efficiency of individual units, at the end of the day, researchers might want just one or a few aggregate numbers that summarize the results. This is especially important when the number of individual units is so large that individual results cannot all be published or easily comprehended. But how can we aggregate? Can we just take an average? Which one: arithmetic, geometric, harmonic? Should it be a weighted or an unweighted average? The goal of this chapter is to outline recently obtained and practically useful results from previous studies that answer these imperative questions.
THE AGGREGATION PROBLEM
The problem of constructing a group measure or a group score from individual analogues is an aggregation question, which has recently been studied in a number of works. The most important question here is the choice of aggregation weights. To illustrate the point, consider a hypothetical example (adapted from Simar and Zelenyuk, 2007) of an industry consisting of four firms, two firms in each of two types, whose efficiency and “economic weight” (whatever that might be) are summarized in Table 5.1. Here, if a researcher were to use the simple (equally weighted) arithmetic average, then group A and group Z are, on average, equally efficient. Note, however, that the efficiency scores are “standardized” so that they lie between 0 and 1, and so they disregard the relative weights of the firms that attained these scores. If another researcher wanted to use a weighted arithmetic average, then a dramatically different conclusion might be reached – depending on the weighting scheme. For example, in Table 5.1, group A has a higher weighted average efficiency than that of group Z, yet the industry average could still be closer to the score of group Z if its group weight dominates the weight of group A (e.g., if its weight in the industry is 90 percent, as in the table).
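The arithmetic of this kind of example is easy to reproduce. Below is a small numerical sketch with illustrative numbers of our own (not those of Table 5.1), showing how the simple average, the within-group weighted average, and the industry-level aggregate can each tell a different story about the same two groups.

```python
import numpy as np

# Hypothetical efficiency scores for two firms in each of two groups
# (illustrative numbers, not the book's Table 5.1).
eff_A = np.array([0.9, 0.5])   # group A firms
eff_Z = np.array([0.7, 0.7])   # group Z firms

# Within-group "economic weights" (e.g., output shares), summing to 1.
w_A = np.array([0.8, 0.2])
w_Z = np.array([0.5, 0.5])

# Simple (equally weighted) averages: the groups look identical (both 0.7).
simple_A = eff_A.mean()
simple_Z = eff_Z.mean()

# Weighted averages: group A now looks more efficient (0.82 vs 0.70),
# because its large firm is also its efficient firm.
weighted_A = w_A @ eff_A
weighted_Z = w_Z @ eff_Z

# Industry aggregate: if group Z carries 90 percent of the industry weight,
# the industry score ends up much closer to group Z's score.
group_weight_A, group_weight_Z = 0.1, 0.9
industry = group_weight_A * weighted_A + group_weight_Z * weighted_Z
```

The point is exactly the one made above: the choice of aggregation weights, both within and across groups, can reverse or dominate the comparison, so it must be justified rather than taken for granted.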