Hydrogen lithography has been used to template phosphine-based surface chemistry to fabricate atomic-scale devices, a process referred to as atomic precision advanced manufacturing (APAM). Here, we use mid-infrared variable angle spectroscopic ellipsometry (IR-VASE) to characterize single-nanometer-thick phosphorus dopant layers (δ-layers) in silicon made using APAM-compatible processes. A large Drude response is directly attributable to the δ-layer and can be used for nondestructive monitoring of the condition of the APAM layer when integrating additional processing steps. The carrier density and mobility extracted from our room-temperature IR-VASE measurements are consistent with cryogenic magneto-transport measurements, showing that APAM δ-layers function at room temperature. Finally, the permittivity extracted from these measurements shows that the doping in the APAM δ-layers is so large that their low-frequency in-plane response is reminiscent of a silicide. However, there is no indication of a plasma resonance, likely due to reduced dimensionality and/or a short scattering lifetime.
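To illustrate the relation underlying this extraction, the sketch below (ours, not the authors' fitting code) evaluates a Drude dielectric function and converts a plasma frequency and scattering rate into a carrier density and mobility; the silicon conductivity effective mass of 0.26 m_e and the example parameter values are assumptions.

```python
# Illustrative sketch: Drude model for the delta-layer's in-plane response and
# conversion of fitted Drude parameters to carrier density and mobility.
# The 0.26*m_e effective mass and the example fit values are assumptions.
import numpy as np

E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
M_E = 9.1093837015e-31   # electron mass, kg

def drude_eps(omega, eps_inf, omega_p, gamma):
    """Drude permittivity: eps_inf - omega_p^2 / (omega^2 + i*gamma*omega)."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

def carriers_from_drude(omega_p, gamma, m_eff=0.26 * M_E):
    """Convert plasma frequency and scattering rate (both rad/s) to a
    volume carrier density (m^-3) and mobility (m^2/Vs)."""
    n = omega_p**2 * EPS0 * m_eff / E**2
    mu = E / (m_eff * gamma)
    return n, mu

# Placeholder fit values: a metallic (negative) real permittivity in the mid-IR
omega_mid_ir = 1.9e14  # rad/s, roughly a 10 um wavelength
print(drude_eps(omega_mid_ir, eps_inf=11.7, omega_p=2.0e15, gamma=1.0e14))
n, mu = carriers_from_drude(omega_p=2.0e15, gamma=1.0e14)
print(f"n = {n:.2e} m^-3, mu = {mu * 1e4:.0f} cm^2/Vs")
```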
Scholars and policy makers need systematic assessments of the validity of the measures produced by V-Dem. In Chapter 6, we present our approach to comparative data validation – the set of steps we take to evaluate the precision, accuracy, and reliability of our measures, both in isolation and compared to extant measures of the same concepts. Our approach assesses the degree to which measures align with shared concepts (content validation), shared rules of translation (data generation assessment), and shared realities (convergent validation). Within convergent validation, we execute two tests. First, we examine convergent validity as it is typically conceived, assessing convergence between V-Dem measures and extant measures. Second, we evaluate the level of convergence across coders, considering the individual coder and country traits that predict coder convergence. Throughout the chapter, we focus on three indices included in the V-Dem data set: polyarchy, corruption, and core civil society. These three concepts collectively provide a “hard test” for the validity of our data, representing a range of existing measurement approaches, challenges, and solutions.
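As a purely illustrative sketch of the first convergent validity test, correlating a V-Dem index with an extant measure of the same concept on matched country-years might look as follows; the column names and merge keys are assumptions, not the chapter's actual procedure.

```python
# Hedged sketch of the conventional convergent validity check: correlate a
# V-Dem index with an extant measure of the same concept on matched
# country-years. Column names and keys are illustrative assumptions.
import pandas as pd

def convergent_validity(vdem: pd.DataFrame, extant: pd.DataFrame,
                        vdem_col: str = "v2x_polyarchy",
                        extant_col: str = "extant_democracy_score") -> float:
    merged = vdem.merge(extant, on=["country", "year"], how="inner")
    return merged[vdem_col].corr(merged[extant_col])  # Pearson r
```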
This chapter sets forth the conceptual scheme for the V-Dem project. We begin by discussing the concept of democracy. Next, we lay out seven principles by which this key concept may be understood – electoral, liberal, majoritarian, consensual, participatory, deliberative, and egalitarian. Each defines a “variety” of democracy, and together they offer a fairly comprehensive accounting of the concept as used in the world today. Next, we show how this seven-part framework fits into our overall thinking about democracy, including multiple levels of disaggregation – to components, subcomponents, and indicators. The final section of the chapter discusses several important caveats and clarifications pertaining to this ambitious taxonomic exercise.
This chapter recounts how a project of this scale came together and why it has succeeded. Five main factors were responsible for V-Dem's success: timing, inclusion, deliberation, administrative centralization, and fund-raising. First, planning for V-Dem began at a time when both social scientists and practitioners were realizing that they needed better democracy measures. This made it possible to recruit collaborators and find funding. Second, the leaders of the project were always eager to expand the team to acquire whatever expertise they lacked and share credit with everyone who contributed. Third, the project leaders practiced an intensely deliberative decision-making style to ensure that all points of view were consulted and only decisions that won wide acceptance were adopted. Fourth, centralizing the execution of the agreed-upon tasks helped tremendously by streamlining processes and promoting standardization, documentation, professionalization, and coordination of a large number of intricate steps. Finally, successful fund-raising from a mix of research foundations and bilateral and multilateral organizations has been critical.
In this chapter we focus on the measurement of five key principles of democracy – electoral, liberal, participatory, deliberative, and egalitarian. For each principle, we discuss (1) the theoretical rationale for the selected indicators, (2) whether these indicators are correlated strongly enough to warrant being collapsed into an index, and (3) the justification of aggregation rules for moving from indicators to components and from components to higher-level indices. In each section we also (4) highlight the top- and bottom-five countries on each principle of democracy in early (1812 or 1912) and late (2012) years of our sample period, as well as the aggregate trend over the whole time period 1789–2017 (where applicable). Finally, we (5) look at how the different principles are intercorrelated in order to assess the trade-offs involved between the conceptual parsimony achieved by aggregating to a few general concepts and the retention of useful variation permitted by aggregating less.
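To make the aggregation question concrete, the sketch below contrasts two generic aggregation rules (a compensatory average and a non-compensatory product) and an equal blend of the two; the weights and the 50/50 blend are illustrative assumptions, not the exact V-Dem formulas.

```python
# Illustrative contrast of aggregation rules for moving from components to an
# index. These are generic rules, not the V-Dem formulas themselves.
import numpy as np

def additive_index(components, weights=None) -> float:
    """Weighted average: weakness on one component can be offset by others."""
    return float(np.average(components, weights=weights))

def multiplicative_index(components) -> float:
    """Product: the index is pulled down by its weakest component."""
    return float(np.prod(components))

def blended_index(components) -> float:
    """An equal blend of the two rules (illustrative weighting)."""
    return 0.5 * additive_index(components) + 0.5 * multiplicative_index(components)

# Example: five component scores on a 0-1 scale with one weak component
scores = np.array([0.9, 0.8, 0.85, 0.4, 0.95])
print(additive_index(scores), multiplicative_index(scores), blended_index(scores))
```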
Four characteristics of V-Dem data present distinct opportunities and challenges for explanatory analysis: (1) the large number of democracy indicators (i.e., variables), (2) the measurement of concepts by multiple coders filtered through the V-Dem measurement model, (3) the large number of years in the data set, and (4) the ex ante potential for dependence across countries (generically referred to as spatial dependence). This chapter discusses three challenges and ten opportunities that are implied by these characteristics. At the end of this chapter, we also discuss three assumptions that are implicit in most analyses of observational indicators of macro-features at the national level, which aim to draw conclusions about causal relationships.
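One generic way to respect the measurement model's uncertainty in downstream regressions (a sketch, not the chapter's prescription) is to repeat the analysis over many plausible values of an indicator and summarize the resulting coefficients; `draw_indicator` below is a hypothetical function returning one simulated indicator series per call.

```python
# Hedged sketch: propagate measurement uncertainty by re-estimating a simple
# regression over repeated draws of the indicator and summarizing the spread.
# `draw_indicator` is hypothetical; a full analysis would also pool the
# within-draw variances.
import numpy as np
import statsmodels.api as sm

def pooled_effect(y, draw_indicator, controls, n_draws=100):
    coefs = []
    for _ in range(n_draws):
        x = draw_indicator()                        # one plausible indicator series
        X = sm.add_constant(np.column_stack([x, controls]))
        coefs.append(sm.OLS(y, X).fit().params[1])  # coefficient on the indicator
    return np.mean(coefs), np.std(coefs)            # point estimate, between-draw spread
```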
Varieties of Democracy is the essential user's guide to The Varieties of Democracy project (V-Dem), one of the most ambitious data collection efforts in comparative politics. This global research collaboration sparked a dramatic change in how we study the nature, causes, and consequences of democracy. This book is ambitious in scope: more than a reference guide, it raises standards for causal inferences in democratization research and introduces new, measurable concepts of democracy and many political institutions. Varieties of Democracy enables anyone interested in democracy – teachers, students, journalists, activists, researchers, and others – to analyze V-Dem data in new and exciting ways. This book creates opportunities for V-Dem data to be used in education, research, news analysis, advocacy, policy work, and elsewhere. V-Dem is rapidly becoming the preferred source for democracy data.
Users of V-Dem data should take care to understand how the data are generated because the data collection strategies have consequences for the validity, reliability, and proper interpretation of the values. Chapters 4 and 5 explain how we process the data after collecting the raw scores and how we aggregate the most specific indicators into more general indices. In this chapter we explain where the raw scores come from. We distinguish among the different types of data that V-Dem reports and describe the processes that produce each type and the infrastructure required to execute these processes.
V-Dem relies on country experts who code a host of ordinal variables, providing subjective ratings of latent – that is, not directly observable – regime characteristics. Sets of around five experts rate each case, and each rater works independently. Our statistical tools model patterns of disagreement between experts, who may offer divergent ratings because of differences of opinion, variation in scale conceptualization, or mistakes. These tools allow us to aggregate ratings into point estimates of latent concepts and quantify our uncertainty around these estimates. This chapter describes item response theory models that can account for and adjust for differential item functioning (i.e., differences in how experts apply ordinal scales to cases) and variation in rater reliability (i.e., random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain how we address them, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the end-user-accessible products of the V-Dem measurement model.
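A stylized version of the kind of ordinal item response model described here is sketched below (a simplification, not the V-Dem measurement model itself): rater-specific thresholds stand in for differential item functioning and a rater-specific discrimination stands in for reliability.

```python
# Stylized ordinal-probit IRT likelihood for one rating. Rater-specific
# thresholds capture differential item functioning; the discrimination term
# captures rater reliability. A simplified sketch, not the V-Dem model.
import numpy as np
from scipy.stats import norm

def rating_probs(latent, thresholds, discrimination):
    """Return P(rating = k) for one rater on one country-year.

    latent:          latent trait value for the country-year
    thresholds:      this rater's increasing cutpoints (length K-1)
    discrimination:  this rater's reliability; higher means less random error
    """
    cuts = np.concatenate(([-np.inf], np.asarray(thresholds, dtype=float), [np.inf]))
    cdf = norm.cdf(discrimination * (cuts - latent))
    return np.diff(cdf)  # length-K vector of category probabilities summing to 1

# Example: a 5-category scale, a fairly reliable rater, latent value 0.3
print(rating_probs(0.3, thresholds=[-1.5, -0.5, 0.5, 1.5], discrimination=1.2))
```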
Attention-deficit/hyperactivity disorder (ADHD) is associated with a higher risk of burn injury than that in the general population. Nevertheless, the influence of methylphenidate (MPH) on the risk of burn injury remains unclear. This retrospective cohort study analysed the effect of MPH on the risk of burn injury in children with ADHD.
Method
Data were from Taiwan's National Health Insurance Research Database (NHIRD). The sample comprised individuals younger than 18 years with a diagnosis of ADHD (n = 90 634) in Taiwan's NHIRD between January 1996 and December 2013. We examined the cumulative effect of MPH on burn injury risk using Cox proportional hazards models. We conducted a sensitivity analysis for immortal time bias using a time-dependent Cox model and within-patient comparisons using the self-controlled case series model.
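For illustration only (not the study's analysis code), a Cox proportional hazards fit of the kind described above could be set up with the `lifelines` package as follows; the data-frame column names are hypothetical placeholders for NHIRD-derived variables.

```python
# Hedged sketch of a Cox proportional hazards fit for burn-injury risk using
# the lifelines package. Column names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

def fit_burn_risk(df: pd.DataFrame) -> CoxPHFitter:
    """Assumed columns: follow_up_years, burn_injury (1 = event, 0 = censored),
    mph_lt_90_days, mph_ge_90_days (exposure indicators), age, sex, ..."""
    cph = CoxPHFitter()
    cph.fit(df, duration_col="follow_up_years", event_col="burn_injury")
    return cph

# fit_burn_risk(df).print_summary() would report hazard ratios (exp(coef))
# for the MPH exposure categories relative to non-users.
```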
Results
Children with ADHD taking MPH had a reduced risk of burn injury compared with those not taking MPH, with the effect related to cumulative treatment duration. Compared with children with ADHD not taking MPH, the adjusted hazard ratio for burn injury was 0.70 in children taking MPH for <90 days (95% confidence interval (CI) 0.64–0.77) and 0.43 in children taking MPH for ≥90 days (95% CI 0.40–0.47), with a preventable fraction of 50.8%. The negative association with MPH was replicated in age-stratified analyses using time-dependent Cox regression and self-controlled case series models.
Conclusion
This study showed that MPH treatment was associated with a lower risk of burn injury, with the strength of the association increasing with cumulative treatment duration.
Two phases of diabase-sill-forming magmatism are recorded within the Badu anticline, where magmas were emplaced into upper Palaeozoic carbonates and clastic rocks of the Youjiang fold-and-thrust belt in the SW South China Block, China. Zircons from these diabase units yield weighted mean U–Pb ages of 249.2±2.0 Ma and 187.1±3.3 Ma, and magmatic oxygen fugacity (log fO2) values from −20 to −6 (average of −12, equating to FMQ +5) and −20 to −10 (average of −15, equating to FMQ +2), respectively. These data indicate that the sills were emplaced during Early Triassic and Early Jurassic times. The c. 250 Ma mafic magmatism in this area was probably related to post-flood-basalt extension associated with the Emeishan mantle plume or rollback of the subducting Palaeo-Tethys slab. The c. 190 Ma diabase sills indicate that the southwestern South China Block records Early Jurassic mafic magmatism and lithospheric extension that was likely associated with a transition from post-collisional to within-plate tectonic regimes. The emplacement of diabase intrusions at depth may have driven hydrothermal systems, enabling the mobilization of elements from sedimentary rocks and causing the formation of a giant epigenetic metallogenic domain. The results indicate that high-oxygen-fugacity materials within basement rocks caused crustal contamination of the magmas, contributing to the wide range of oxygen fugacity conditions recorded by the Au-bearing Badu diabase. In addition, data from inherited xenocrystic zircons within the Badu diabase and detrital zircons from basement rocks suggest that the Neoproterozoic Jiangshao suture extends to the south of the Badu anticline.
This brief report presents computed tomography perfusion (CTP) thresholds that predict follow-up infarction in patients presenting <3 hours from stroke onset and achieving ultra-early reperfusion (<45 minutes from CTP). The CTP thresholds that predict follow-up infarction vary with time to reperfusion: Tmax >20 to 23 seconds and cerebral blood flow <5 to 7 ml/min/100 g or relative cerebral blood flow <0.14 to 0.20 optimally predicted the final infarct. These thresholds are stricter than previously published thresholds.
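For illustration, a voxelwise application of thresholds like those reported here might look as follows; the specific cutoffs chosen from the reported ranges, the combination with a logical OR, and the array names are assumptions, not the study's method.

```python
# Hedged sketch: flag voxels as predicted infarct core using strict CTP
# thresholds. Cutoffs, the OR combination, and voxel size are illustrative.
import numpy as np

def predicted_infarct(tmax_s: np.ndarray, rcbf: np.ndarray,
                      tmax_cut: float = 21.0, rcbf_cut: float = 0.17) -> np.ndarray:
    """Boolean mask of voxels with very prolonged Tmax or severely reduced
    relative cerebral blood flow."""
    return (tmax_s > tmax_cut) | (rcbf < rcbf_cut)

# Predicted core volume in ml, assuming 1 mm isotropic voxels (hypothetical):
# core_ml = predicted_infarct(tmax_map, rcbf_map).sum() / 1000.0
```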