Recent decades have seen large tax increases in Latin America. The conventional wisdom that Latin American tax systems generate too little revenue seems harder to sustain today than in the past. What continues to be striking about the region’s tax burdens, however, is the great disparity between them. This book sheds light on this question through a comparison of Argentina, Brazil, Chile and Mexico. It argues that tax burden variance reflects the impact of historical episodes of redistribution that threatened private property. Where they occurred, such episodes impeded future taxation by prompting economic elites and social conservatives to organize to defend their interests, thus forging strong, enduring anti-statist blocs. These blocs hindered taxation both directly, by combatting efforts to boost revenue, and indirectly, by undermining statist actors, especially labor unions. This introductory chapter consists of five sections: the first provides an overview of Latin American tax systems, the second reviews the scholarship on tax burden determinants, the third sketches the book’s argument, the fourth explains the research design and the fifth describes subsequent chapters.
Evidence is limited on how to synthesize and incorporate the views of stakeholders into a multisite pragmatic trial and how much academic teams change study design and protocol in response to stakeholder input. This qualitative study describes how stakeholders contributed to the design, conduct, and dissemination of findings of a multisite pragmatic clinical trial, the COMprehensive Post-Acute Stroke Services (COMPASS) Study. We engaged stakeholders as integral research partners by embedding them in study committees and community resource networks that supported local sites. Data stemmed from formal focus groups and continuous participation in working groups. Guided by Grounded Theory, we extracted themes from focus group and meeting notes. These were discussed as a team and with other stakeholder groups for feasibility. A consensus approach was used. Stakeholder input changed many aspects of the study including: the care model that treated stroke as a chronic condition after hospital discharge, training for hospital-based providers who often lacked awareness of the barriers to recovery that patients face, support for caregivers who were essential for stroke patients’ recovery, and for community-based health and social service providers whose services can support recovery yet often go underutilized. Stakeholders brought value to both pragmatic research and health service delivery. Future studies should test the impact of elements of study implementation informed by stakeholders vs those that are not.
Paradoxically, doing corpus linguistics is both easier and harder than it has ever been before. On the one hand, it is easier because we have access to more existing corpora, more corpus analysis software tools, and more statistical methods than ever before. On the other hand, reliance on these existing corpora and corpus linguistic methods can potentially create layers of distance between the researcher and the language in a corpus, making it a challenge to do linguistics with a corpus. The goal of this Element is to explore ways for us to improve how we approach linguistic research questions with quantitative corpus data. We introduce and illustrate the major steps in the research process, including how to: select and evaluate corpora, establish linguistically-motivated research questions, observational units and variables, select linguistically interpretable variables, understand and evaluate existing corpus software tools, adopt minimally sufficient statistical methods, and qualitatively interpret quantitative findings.
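One of the steps named above, deriving a linguistically interpretable variable from raw corpus text, can be sketched in a few lines. The following is a minimal illustration, not any particular tool's implementation; the regex tokenizer is a deliberately crude assumption, and real corpus work would use a proper tokenizer and treat each text as an observational unit.

```python
import re
from collections import Counter

def token_counts(text):
    """Crude tokenization: lowercase, then runs of letters/apostrophes."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def rate_per_1000(text, word):
    """Normalized frequency of `word` per 1,000 tokens in one text,
    so that texts of different lengths are comparable."""
    counts = token_counts(text)
    total = sum(counts.values())
    return 1000.0 * counts[word] / total if total else 0.0
```

Computing such a rate separately for each text in a corpus yields one value per observational unit, which is the shape of data the Element's statistical steps presuppose.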
This chapter discusses salient methodological considerations and challenges in undertaking empirical research with young, newly arrived migrant students. This includes questions relating to negotiating access, sampling of core participants, the role of language and use of interpreters, and the importance of giving migrant students a voice as part of an overall holistic approach which focuses on the student perspective and the relationship of this to school and parental perspectives. Approaches to assessing language development and social integration are explored. Such considerations raise questions about the relevance of conducting research with newcomer migrant students in a range of different countries and contexts. This chapter also provides an overview of the research design adopted in the studies funded by the Bell Foundation and explores how such methodological considerations were taken into account throughout the study.
The main feature of observational studies is the representation of naturalistic treatment conditions. In contrast to clinical trials, they allow the evaluation and quantification of adverse event profiles of drugs under “real life” conditions. The price for this undeniable advantage is a proneness to distorting factors, which may complicate the interpretation of study results. Analysis of observational study results therefore has to control for potentially influential factors and consider possible alternative explanations for observed associations. The most important distorting factors, which should be taken into account during analysis and interpretation, are under-reporting, event selection, bias, confounding and misuse. Authors and readers of such study results should be aware of these possible sources of error, in order to derive optimal benefit from this study approach.
Under which conditions will a public authority intervene in private governance such as certification and eco-labeling schemes for sustainably produced goods? This chapter introduces this research question by presenting the empirical puzzle the book addresses: Why has the European Union (EU) intervened in private governance that deals with organic agriculture and biofuels, but has not intervened in private governance dealing with fair trade and fisheries? The chapter distinguishes between a public authority intervening with standards regulation that involves creating a public definition of sustainable production, and with procedural regulation that addresses the way private governance schemes are organized. The argument the book develops is that whether a public authority intervenes with standards and/or procedural regulation depends on the interplay of two variables: the domestic benefits of product differentiation by a public authority and the fragmentation of the private governance market. The chapter situates the book in the current state of the literature on the interactions between public and private governance and explains the research design and research contributions.
Experimental psychopathology is the psychological science discipline that uses the methods of the experimental psychology laboratory in conjunction with quantitative analytic approaches to gain leverage on the etiology and pathogenesis of psychopathology, within a brain-based (genomic, endophenotype, neurobiological) diathesis-stressor matrix. Laboratory methods provide precision in measurement not attainable through clinical rating approaches and experimental design options allow the investigator to better identify potentially causal as well as maintaining processes in psychopathology. The chapter provides both a historical context within which experimental psychopathology can be placed and identifies conceptual and methodological features of the approach. A number of issues are addressed: (a) the value of clinical observation; (b) context of discovery; (c) counting vs. rating in data collection; (d) the falsity of the null hypothesis in statistical testing; (e) levels of analysis; (f) how predictors are conceived of in many instances; (g) the importance of embracing heterogeneity in empirical data; (h) specific etiology and genetics; (i) emergence; and (j) causality in a correlational framework. This overview is intended to convey defining features of the experimental psychopathology approach.
Health psychology and behavioral medicine are founded on the biopsychosocial model, in which health and disease reflect reciprocal influences among biological, psychosocial, and sociocultural processes. As a result, research methods in these fields draw on concepts and methods from several disciplines and often require their integration. Health psychology and behavioral medicine include three major topics: health behavior and risk reduction; psychosocial aspects of medical illness and medical care; and psychosocial and psychobiological influences on disease. This chapter emphasizes methodological challenges in the third topic, although the issues discussed are broadly relevant to the others. Conceptualization and measurement of health endpoints presents evolving challenges in which measured outcomes must capture specific and well-defined aspects of health and disease. In the identification of psychosocial predictors of health outcomes, psychosocial epidemiology research must address a variety of challenges, including the conceptualization, measurement, and analysis of overlapping risk factors. In research on the psychobiological mechanisms linking risk and resilience factors with health outcomes, theory-driven research should consider a broad range of interrelated physiological processes and multiple sources of pathogenic physiological activation. Across the various research topics, clear ties to conceptual models, consideration of developmental issues across the lifespan, the need to examine both between- and within-person associations in many research questions, and the importance of health disparities and related aspects of ethnic and cultural diversity are important in measurement, design, and analysis of biopsychosocial research.
This chapter outlines critical design decisions for longitudinal research and provides practical tips for managing such studies. It emphasizes that generative longitudinal studies are driven by conceptual and theoretical insights and describes four foundational design issues including questions about time lags and sample sizes. It then provides advice about how to manage a longitudinal study and reduce attrition. The chapter concludes by considering how the advice offered comports with recent discussions about ways to improve psychological science and providing recommended further reading.
Dr Nick Martin has made enormous contributions to the field of behavior genetics over the past 50 years. Of his many seminal papers that have had a profound impact, we focus on his early work on the power of twin studies. He was among the first to recognize the importance of sample size calculation before conducting a study to ensure sufficient power to detect the effects of interest. The elegant approach he developed, based on the noncentral chi-squared distribution, has been adopted by subsequent researchers for other genetic study designs, and today remains a standard tool for power calculations in structural equation modeling and other areas of statistical analysis. The present brief article discusses the main aspects of his seminal paper, and how it led to subsequent developments, by him and others, as the field of behavior genetics evolved into the present era.
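The mechanism the abstract describes, power computed as a tail probability of the noncentral chi-squared distribution, with a noncentrality parameter that grows linearly with sample size, can be sketched briefly. The following is an illustrative stdlib-only Python sketch for a 1-df likelihood-ratio test, not Martin's actual twin-model derivation; the per-pair noncentrality value used in the example is hypothetical.

```python
import math

def _norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# The 5% critical value of chi-squared(1) is 3.8415; its square root
# is the standard normal 97.5% quantile (about 1.96).
Z_CRIT = math.sqrt(3.8415)

def power_1df(ncp):
    """Power of a 1-df chi-squared test with noncentrality ncp.

    For df = 1 the statistic is Z**2 with Z ~ N(sqrt(ncp), 1) under
    the alternative, so the noncentral chi-squared tail probability
    reduces to two normal tails.
    """
    mu = math.sqrt(ncp)
    return _norm_cdf(mu - Z_CRIT) + _norm_cdf(-mu - Z_CRIT)

def pairs_needed(ncp_per_pair, target=0.80):
    """Smallest number of twin pairs reaching `target` power, using
    the fact that the noncentrality parameter scales linearly with
    sample size."""
    n = 1
    while power_1df(ncp_per_pair * n) < target:
        n += 1
    return n
```

The same logic, computed against the noncentral chi-squared distribution with arbitrary degrees of freedom, underlies the standard power calculations in structural equation modeling that the article mentions.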
Exploratory research is an attempt to discover something new and interesting by working through a research topic and is the soul of good research. Exploratory studies, a type of exploratory research, tend to fall into two categories: those that make a tentative first analysis of a new topic and those that propose new ideas or generate new hypotheses on an old topic. This chapter examines the history of exploratory studies, offers a typology of exploratory studies, and proposes a new type of exploratory study that is especially helpful for theorizing empirical material at an early stage. It argues that exploratory studies are an important part of a social scientist’s toolkit.
Research in the natural sciences follows a cycle from exploratory research with tentative findings (descriptive and correlational work) to more definitive causal claims. This chapter argues that the social sciences (particularly political science) would benefit from a wider application of this research cycle model. Creating space in top journals for tentative (novel) conclusions instead of precise estimation of causal effects could lead to greater causal explanations in the long term. This division of labor within the research cycle would require changing evaluations of a work’s contribution to research based on its location within the cycle. Causal explanations are still the goal of research in this model, but are facilitated by an openness to preliminary and tentative findings.
Whilst a great deal of progress has been made in recent decades, concerns persist about the course of the social sciences. Progress in these disciplines is hard to assess and core scientific goals such as discovery, transparency, reproducibility, and cumulation remain frustratingly out of reach. Despite having technical acumen and an array of tools at their disposal, today's social scientists may be only slightly better equipped to vanquish error and construct an edifice of truth than their forebears – who conducted analyses with slide rules and wrote up results with typewriters. This volume considers the challenges facing the social sciences, as well as possible solutions. In doing so, we adopt a systemic view of the subject matter. What are the rules and norms governing behavior in the social sciences? What kinds of research, and which sorts of researcher, succeed and fail under the current system? In what ways does this incentive structure serve, or subvert, the goal of scientific progress?
David Skarbek argues that qualitative research methods can analyze institutions by exploiting complex evidence not accessible through quantitative methods. He suggests that well-done case studies and process tracing can meet some of the same tests of inference as statistical methods. Although Skarbek's critique and proposals mirror those of many other authors, including Ronald Coase, he nonetheless makes an important contribution. The brief, cogent, and instructive way he presents his advice and his defense of qualitative methods as a complement to mainstream methods rather than a confrontation, may be more persuasive than more confrontational arguments. As ‘datafication’ is quickly turning qualitative observations into quantitative data analyzed through machine learning, Skarbek's excellent advice on how to understand what is happening under different institutional settings could not be timelier.
Research Electronic Data Capture (REDCap) is a secure, web-based electronic data capture application for building and managing surveys and databases. It can also be used for study management, data transfer, and data export into a variety of statistical programs. REDCap was developed and supported by the National Center for Advancing Translational Sciences Program and is used in over 3700 institutions worldwide. It can also be used to track and measure stakeholder engagement, an integral element of research funded by the Patient-Centered Outcomes Research Institute (PCORI). Continuously and accurately tracking and reporting on stakeholder engagement activities throughout the life of a PCORI-funded trial can be challenging, particularly in complex trials with multiple types of engagement.
In this paper, we show our approach for collecting and capturing stakeholder engagement activities using a shareable REDCap tool in one of the PCORI’s first large pragmatic clinical trials (the Comprehensive Post-Acute Stroke Services) to inform other investigators planning cluster-randomized pragmatic trials. Benefits and challenges are highlighted for researchers seeking to consistently monitor and measure stakeholder engagement.
We describe how REDCap can provide a time-saving approach to capturing how stakeholders engage in a PCORI-funded study and reporting how stakeholders influenced the study in progress reports back to PCORI.
How can economists use qualitative evidence – such as archival materials, interviews, and ethnography – to study institutions? While applied economists typically rely on quantitative evidence and statistical estimation, many important aspects of institutions and institutional change appear in the form of qualitative evidence. This raises the question of whether, and how, social scientists trained in quantitative methods can exploit and analyze this evidence. This paper discusses two qualitative research methods that are both commonly used outside of economics: comparative case studies and process tracing. Drawing on existing research about crime and political revolutions, it discusses these two methods and how to implement them to improve institutional analysis.
This accessible guide provides clear, practical explanations of key research methods in business studies, presenting a step-by-step approach to data collection, analysis and problem solving. Readers will learn how to formulate a research question, choose an appropriate research method, argue and motivate, collect and analyse data, and present findings in a logical and convincing manner. The authors evaluate various qualitative and quantitative methods and their consequences, guiding readers to the most appropriate research design for particular questions. Furthermore, the authors provide instructions on how to write reports and dissertations in a clearly structured and concise style. Now in its fifth edition, this popular textbook includes new and dedicated chapters on data collection for qualitative research, qualitative data analysis, data collection for quantitative research, multiple regression, and additional methods of quantitative analysis. Cases and examples have been updated throughout, increasing the applicability of these research methods across various situations.
Conservation researchers are increasingly drawing on a wide range of philosophies, methods and values to examine conservation problems. Here we adopt methods from social psychology to develop a questionnaire with the dual purpose of illuminating diversity within conservation research communities and providing a tool for use in cross-disciplinary dialogue workshops. The questionnaire probes the preferences that different researchers have with regards to conservation science. It elicits insight into their motivations for carrying out research, the scales at which they tackle problems, the subjects they focus on, their beliefs about the connections between nature and society, their sense of reality as absolute or socially constituted, and their propensity for collaboration. Testing the questionnaire with a group of 204 conservation scientists at a student conference on conservation science, we illustrate the latent and multidimensional diversity in the research preferences held by conservation scientists. We suggest that creating opportunities to further explore these differences and similarities using facilitated dialogue could enrich the mutual understanding of the diverse research community in the conservation field.
Chapter 2 develops a theoretical framework for explaining how ideational and material forces shape states’ threat perceptions as well as the conditions for their interplay. This chapter develops the conception of security as both physical and ontological, in which the interaction of ideational and material forces can be analysed. The chapter shows that in some cases, ideational sources of threat are perceived as predominant, and, in other cases, material factors shape threat perception. To explain this variation, the chapter outlines the conditions of the interplay between ideational and material forces, based on the fluidity of the regime identity – assessed through mutability of the regime identity narrative – and the clarity of the relative power distribution – assessed through the multiplicity of available policy options to ensure physical security. The chapter also includes a research design section that discusses methods and case selection criteria. The book then explores the plausibility of this framework empirically by examining a number of cases that have been at the heart of historical and theoretical work on the international relations of the Middle East.
Good research design includes careful consideration of the number of independent observations (replicates) we need to test our predictions – the sample size. Some sampling decisions are beyond our control. For example, we may be limited by the number of specimens available, the animals we can observe, or the data we have at our disposal. Knowing in advance what we can and can’t test with our data will save wasted effort. This chapter covers how we use samples to study populations, the importance of statistical power, how to determine whether you have the power to test for an effect, and statistical precision.
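The kind of calculation the chapter describes, asking how many replicates are needed to detect an effect with adequate power, can be illustrated with a short sketch. This is a generic normal-approximation power calculation for a two-group comparison, not the chapter's own procedure; the standardized effect size and the 80% power target are hypothetical inputs chosen for the example.

```python
import math

def _norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z_ALPHA = 1.959964  # two-sided 5% significance level

def power_two_sample(d, n_per_group):
    """Approximate power to detect a standardized mean difference d
    (Cohen's d) with n observations per group, normal approximation."""
    ncp = d * math.sqrt(n_per_group / 2.0)
    return _norm_cdf(ncp - Z_ALPHA) + _norm_cdf(-ncp - Z_ALPHA)

def n_per_group(d, target=0.80):
    """Smallest per-group sample size reaching the target power."""
    n = 2
    while power_two_sample(d, n) < target:
        n += 1
    return n
```

For a medium effect (d = 0.5) this yields roughly 63 to 64 observations per group, which matches the textbook figure; running such a calculation before collecting data is exactly the "knowing in advance what we can and can't test" the chapter recommends.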