The introduction of the book has two purposes. First, it explains why a normative theory of ECJ procedural and organisational law is needed. It puts forward three reasons: first, procedural and organisational design involves making important choices about the role of courts in society; second, the dominant normative approach to assessing the ECJ’s work, namely the focus on its methods of interpretation, faces a number of conceptual problems; and third, ECJ judicial reform is of great practical relevance and requires normative anchoring. Second, the introduction explains the empirical strategies the book pursues to investigate the ECJ’s inner workings. In particular, it explains how requests for access to administrative documents and statistical analysis are used in the book to gain a better understanding of how the ECJ’s procedural and organisational rules are applied in practice. Finally, the introduction summarises the core of the book’s argument.
Typhoid fever is a major cause of illness and mortality in low- and middle-income settings. We investigated the association between typhoid fever and rainfall in Blantyre, Malawi, where multi-drug-resistant typhoid has been circulating since 2011. Peak rainfall preceded the peak in typhoid fever by approximately 15 weeks [95% confidence interval (CI) 13.3, 17.7], indicating no direct biological link. A quasi-Poisson generalised linear modelling framework was used to explore the relationship between rainfall and typhoid incidence at biologically plausible lags of 1–4 weeks. We found a protective effect of rainfall anomalies on typhoid fever at a two-week lag (P = 0.006), where a 10 mm lower-than-expected rainfall anomaly was associated with up to a 16% reduction in cases (95% CI 7.6, 26.5). Extreme flooding events may cleanse the environment of S. Typhi, while unusually low rainfall may reduce exposure from sewage overflow. These results add to evidence that rainfall anomalies may play a role in the transmission of enteric pathogens and can help direct future water and sanitation intervention strategies for the control of typhoid fever.
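As an illustration of the modelling framework described above, the sketch below fits a quasi-Poisson generalised linear model of weekly case counts on a rainfall anomaly lagged by two weeks. The data frame, column names and lag handling are hypothetical stand-ins, not the study's data or code.

```python
# Minimal sketch: quasi-Poisson GLM of weekly typhoid cases on a two-week-lagged
# rainfall anomaly. All values below are invented for illustration.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "cases":     [12, 15, 9, 22, 18, 30, 25, 11, 8, 14, 19, 27],
    "rain_anom": [5.0, -3.2, 0.4, 7.1, -1.0, 9.8, 2.3, -6.5, -2.1, 0.0, 4.4, 6.7],
})
df["rain_anom_lag2"] = df["rain_anom"].shift(2)   # rainfall anomaly two weeks earlier
df = df.dropna()

model = sm.GLM(df["cases"],
               sm.add_constant(df[["rain_anom_lag2"]]),
               family=sm.families.Poisson())
fit = model.fit(scale="X2")   # Pearson chi-square scale gives quasi-Poisson dispersion
print(fit.summary())          # exp(coef) = multiplicative change in expected cases per mm
```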
The measurement and communication of the effect size of an independent variable on a dependent variable are critical to effective statistical analysis in the Social Sciences. We develop ideas about how to extend traditional methods of evaluating relationships in multivariate models to explain and illustrate the statistical power of a focal independent variable. Even with a growing acceptance of the need to report effect sizes, scholars in the management community have few well-established protocols or guidelines for reporting effect sizes. In this editorial essay, we: (1) review the necessity of reporting effect sizes; (2) discuss commonly used measures of effect size and accepted cut-offs for large, medium, and small effect sizes; (3) recommend standards for reporting effect sizes via verbal descriptions and graphical presentations; and (4) present best practice examples of reporting and discussing effect size. In summary, we provide guidance for authors on how to report and interpret effect sizes, advocating for rigor and completeness in statistical analysis.
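As a concrete illustration of reporting an effect size against the conventional cut-offs mentioned above, the sketch below computes Cohen's d for two invented groups; it is not taken from the essay.

```python
# Minimal sketch: Cohen's d with the conventional small/medium/large cut-offs
# (0.2 / 0.5 / 0.8). The two groups are invented illustrative data.
import numpy as np

group_a = np.array([3.1, 2.8, 3.6, 3.3, 2.9, 3.4])
group_b = np.array([2.5, 2.7, 2.4, 2.9, 2.6, 2.3])

n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1))
                    / (n_a + n_b - 2))
d = (group_a.mean() - group_b.mean()) / pooled_sd

abs_d = abs(d)
label = ("negligible" if abs_d < 0.2 else "small" if abs_d < 0.5
         else "medium" if abs_d < 0.8 else "large")
print(f"Cohen's d = {d:.2f} ({label} by conventional cut-offs)")
```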
This topic examines how demand relationships can be estimated from empirical data. The whole process of performing an empirical study is explained, starting from model specification, through the collection of data, to statistical analysis and the interpretation of results. The focus is on statistical analysis and the application of regression analysis using OLS. Different mathematical forms of the regression model are explained, along with the relevant transformations and interpretations. The concepts of goodness of fit and the coefficient of determination are explained, along with their application in selecting the best model. The advantages of using multiple regression are discussed, along with its implementation and interpretation. Analysis of variance (ANOVA) is explained, and how it relates to goodness of fit. The implications of empirical studies are also discussed, along with the light they shed on economic theory. More advanced aspects, related to inferential statistics and hypothesis testing, are covered in an appendix, along with the assumptions involved in the classical linear regression model (CLRM) and the consequences of violating these assumptions.
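A brief sketch of the kind of estimation this topic covers is given below: an OLS demand regression in linear and log-log (constant-elasticity) form, with R-squared and an ANOVA table. The simulated data and variable names are placeholders, not material from the chapter.

```python
# Minimal sketch: OLS demand estimation in linear and log-log form on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
price = rng.uniform(1, 10, 60)
income = rng.uniform(20, 80, 60)
quantity = 500 * price**-1.2 * income**0.8 * np.exp(rng.normal(0, 0.1, 60))
df = pd.DataFrame({"quantity": quantity, "price": price, "income": income})

linear = smf.ols("quantity ~ price + income", data=df).fit()
loglog = smf.ols("np.log(quantity) ~ np.log(price) + np.log(income)", data=df).fit()

print(f"linear R^2 = {linear.rsquared:.3f}, log-log R^2 = {loglog.rsquared:.3f}")
# Note: R^2 is not directly comparable across different transformations of the
# dependent variable; diagnostics and theory should also guide model choice.
print(loglog.params)              # log-log slope coefficients are elasticities
print(sm.stats.anova_lm(loglog))  # ANOVA decomposition linked to goodness of fit
```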
This chapter presents basic information for a wide readership on how accents differ and how those differences are analyzed, then lays out the sample of performances to be studied, the phonemes and word classes to be analyzed, and the methods of phonetic, quantitative, and statistical analysis to be followed.
The SARS-CoV-2 virus has caused the largest pandemic of the 21st century, with hundreds of millions of cases and tens of millions of fatalities. Scientists all around the world are racing to develop vaccines and new pharmaceuticals to overcome the pandemic and offer effective treatments for COVID-19 disease. Consequently, there is an essential need to better understand how the pathogenesis of SARS-CoV-2 is affected by viral mutations and to determine the conserved segments in the viral genome that can serve as stable targets for novel therapeutics. Here, we introduce a text-mining method to estimate the mutability of genomic segments directly from a reference (ancestral) whole genome sequence. The method relies on calculating the importance of genomic segments based on their spatial distribution and frequency over the whole genome. To validate our approach, we perform a large-scale analysis of the viral mutations in nearly 80,000 publicly available SARS-CoV-2 predecessor whole genome sequences and show that these results are highly correlated with the segments predicted by the statistical method used for keyword detection. Importantly, these correlations are found to hold at the codon and gene levels, as well as for gene coding regions. Using the text-mining method, we further identify codon sequences that are potential candidates for siRNA-based antiviral drugs. Significantly, one of the candidates identified in this work corresponds to the first seven codons of an epitope of the spike glycoprotein, which is the only SARS-CoV-2 immunogenic peptide without a match to a human protein.
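The sketch below gives one hypothetical way to score codons by their spatial distribution and frequency along a sequence, in the spirit of keyword-detection statistics; it is not the authors' formula, and the toy sequence is invented.

```python
# Hypothetical sketch: score each codon by how unevenly its occurrences are spread,
# using the normalised standard deviation of the gaps between consecutive positions.
# Values above 1 suggest clustering; this is an illustrative proxy, not the paper's method.
from collections import defaultdict
import numpy as np

def codon_scores(genome: str) -> dict:
    codons = [genome[i:i + 3] for i in range(0, len(genome) - 2, 3)]
    positions = defaultdict(list)
    for idx, codon in enumerate(codons):
        positions[codon].append(idx)

    scores = {}
    for codon, pos in positions.items():
        if len(pos) < 3:
            continue                      # too rare to characterise spatially
        gaps = np.diff(pos)
        scores[codon] = gaps.std() / gaps.mean()
    return scores

# toy usage with a made-up sequence
print(codon_scores("ATGGCTGCTATGGCTTTTATGGCTGGG"))
```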
As mechanical simulations play an increasingly important role in engineering projects, an appropriate integration of simulations into design-oriented product development processes is essential for efficient collaboration. To identify and overcome barriers between design and simulation departments, the BRIDGES approach was elaborated for barrier reduction in design engineering and simulation. This paper presents the industrial evaluation of the approach using a multi-method study comprising an online survey and focus group workshops. The experts' assessments were statistically analysed to build a connection matrix of barriers and recommendations. 59 participants from multiple industries with practical experience in the field contributed to the online survey, while 24 experts were recruited for the focus group workshops. As a result of the workshops, both the data-based and the workshop-based parts of the BRIDGES approach were assessed as beneficial for raising the efficiency of collaboration and as practically applicable. This provides an empirically grounded connection between barriers and suitable recommendations, allowing companies to identify and overcome collaboration barriers between design and simulation.
This paper presents an efficient water cycle algorithm based on the processes of the water cycle, in which streams and rivers flow into the sea. The optimization algorithm is applied to obtain the optimal feasible path with minimum travel duration during motion planning of both single and multiple humanoid robots in static and dynamic cluttered environments. The technique discards the rainfall process, in which falling water droplets form streams during raining, and relies on the flowing process. The flowing process searches the solution space, finds more accurate solutions, and acts as the local search. Motion planning of the humanoids is carried out in the V-REP software. The performance of the proposed algorithm is tested in an experimental scenario under laboratory conditions, and the results show that it performs well in terms of optimal path length and minimum travel time. Here, navigational analysis is performed on both single and multiple humanoid robots. Statistical analysis of the results obtained from both simulation and experimental environments is carried out for single and multiple humanoids, along with a comparison against another existing optimization technique, which indicates the strength and effectiveness of the proposed water cycle algorithm.
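A generic, heavily simplified sketch of a water-cycle-style search is given below: the best candidate acts as the sea, a few good candidates act as rivers, and the remaining streams flow towards them, with the raining step omitted. The quadratic objective is a placeholder for the path cost; this is not the paper's implementation.

```python
# Generic sketch of a water-cycle-style metaheuristic (not the paper's implementation).
import numpy as np

def objective(x):
    return np.sum(x**2)          # placeholder for a path-length / travel-time cost

rng = np.random.default_rng(1)
dim, n_pop, n_rivers, iters = 2, 30, 4, 200
pop = rng.uniform(-10, 10, (n_pop, dim))

for _ in range(iters):
    costs = np.array([objective(p) for p in pop])
    pop = pop[np.argsort(costs)]                 # pop[0] = sea, pop[1:n_rivers+1] = rivers
    for i in range(n_rivers + 1, n_pop):         # streams flow towards their river
        river = pop[1 + (i % n_rivers)]
        pop[i] += rng.uniform(0, 2) * (river - pop[i])
    for i in range(1, n_rivers + 1):             # rivers flow towards the sea
        pop[i] += rng.uniform(0, 2) * (pop[0] - pop[i])

best = min(pop, key=objective)
print("best cost found:", objective(best))
```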
Measuring Behaviour is the established go-to text for anyone interested in scientific methods for studying the behaviour of animals or humans. It is widely used by students, teachers and researchers in a variety of fields, including biology, psychology, the social sciences and medicine. This new fourth edition has been completely rewritten and reorganised to reflect major developments in how behavioural studies are conducted. It includes new sections on the replication crisis, covering Open Science initiatives such as preregistration, as well as fully up-to-date information on the use of remote sensors, big data and artificial intelligence in capturing and analysing behaviour. The sections on the analysis and interpretation of data have been rewritten to align with current practices, with advice on avoiding common pitfalls. Although fully revised and revamped, this new edition retains the simplicity, clarity and conciseness that have made Measuring Behaviour a classic since the first edition appeared more than 30 years ago.
A method is presented to determine the feature resolution of physically relevant metrics of data obtained from segmented image sets. The presented method determines the best-fit distribution curve of a dataset by analyzing a truncated portion of the data. An effective resolvable size for the metric of interest is established when including parts of the truncated dataset results in exceeding a specified error tolerance. As such, this method allows for the determination of the feature resolution regardless of the processing parameters or imaging instrumentation. Additionally, the number of missing objects that exist below the resolution of the instrumentation may be estimated. The application of the developed method was demonstrated on data obtained via 2D scanning electron microscopy of a pressed explosive material and from 3D micro X-ray computed tomography of a polymer-bonded explosive material. It was shown that the minimum number of pixels/voxels required for the accurate determination of a physically relevant metric depends on the metric of interest. This proposed method, utilizing prior knowledge of the distribution of the metrics of interest, was found to be well suited to determining the feature resolution in applications where large datasets are available.
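The sketch below illustrates the general idea under stated assumptions (lognormally distributed feature sizes, a trusted upper portion of the data, and a Kolmogorov-Smirnov distance as the error measure); it is not the authors' implementation.

```python
# Hypothetical sketch: fit a lognormal to the trusted upper part of a feature-size
# dataset via a truncated likelihood, then lower the size cut-off until the empirical
# distribution departs from the fitted model by more than a tolerance; that cut-off
# is reported as the effective resolution. All data and settings are invented.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=1.0, sigma=0.5, size=20000)
keep = rng.random(sizes.size) < np.clip((sizes - 1.0) / 1.5, 0.0, 1.0)  # features fade out below ~2.5
observed = np.sort(sizes[keep])

trust_cut = np.quantile(observed, 0.6)            # assume the top 40% is fully resolved
upper = observed[observed >= trust_cut]

def neg_loglik(params):
    mu, sig = params
    if sig <= 0:
        return np.inf
    # lognormal likelihood conditioned on being above trust_cut (truncated fit)
    return -(stats.lognorm.logpdf(upper, sig, scale=np.exp(mu))
             - stats.lognorm.logsf(trust_cut, sig, scale=np.exp(mu))).sum()

mu_hat, sig_hat = optimize.minimize(neg_loglik, x0=[1.0, 0.5], method="Nelder-Mead").x

tolerance, resolution = 0.05, None
for cut in np.quantile(observed, np.linspace(0.55, 0.0, 12)):
    subset = observed[observed >= cut]
    model_cdf = lambda x: ((stats.lognorm.cdf(x, sig_hat, scale=np.exp(mu_hat))
                            - stats.lognorm.cdf(cut, sig_hat, scale=np.exp(mu_hat)))
                           / stats.lognorm.sf(cut, sig_hat, scale=np.exp(mu_hat)))
    if stats.kstest(subset, model_cdf).statistic > tolerance:
        resolution = cut
        break
print("estimated effective resolution:", resolution)
```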
This chapter ties together the findings from the previous substantive chapters, uses statistical techniques to unearth common patterns, and explores what the origins and governance of public services in the nineteenth century tell us about today’s welfare state.
The last decade has seen the development of a range of new statistical and computational techniques for analysing large collections of radiocarbon (14C) dates, often but not exclusively to make inferences about human population change in the past. Here we introduce rcarbon, an open-source software package for the R statistical computing language which implements many of these techniques and aims to foster transparent future study of their strengths and weaknesses. In this paper, we review the key assumptions, limitations and potentials behind statistical analyses of summed probability distributions of 14C dates, including Monte Carlo simulation-based tests, permutation tests and spatial analyses. Supplementary material provides a fully reproducible analysis with further details not covered in the main paper.
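rcarbon itself is an R package; purely as a conceptual illustration of what a summed probability distribution is, the Python sketch below sums normalised densities for a handful of invented, already-calibrated dates, deliberately omitting the calibration step that a real analysis requires.

```python
# Conceptual sketch of a summed probability distribution (SPD). Each date is treated
# as a hypothetical, already-calibrated normal density on a calendar-year grid;
# real 14C dates must first be calibrated against a calibration curve.
import numpy as np

dates_bp = np.array([4500, 4420, 4380, 4100, 3950])   # made-up calibrated medians (cal BP)
errors   = np.array([  40,   35,   50,   30,   45])

grid = np.arange(5000, 3500, -1)                       # calendar years BP
densities = np.exp(-0.5 * ((grid[None, :] - dates_bp[:, None]) / errors[:, None]) ** 2)
densities /= densities.sum(axis=1, keepdims=True)      # each date integrates to 1
spd = densities.sum(axis=0)

print("calendar year (BP) of SPD peak:", grid[spd.argmax()])
```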
This accessible guide provides clear, practical explanations of key research methods in business studies, presenting a step-by-step approach to data collection, analysis and problem solving. Readers will learn how to formulate a research question, choose an appropriate research method, argue and motivate, collect and analyse data, and present findings in a logical and convincing manner. The authors evaluate various qualitative and quantitative methods and their consequences, guiding readers to the most appropriate research design for particular questions. Furthermore, the authors provide instructions on how to write reports and dissertations in a clearly structured and concise style. Now in its fifth edition, this popular textbook includes new and dedicated chapters on data collection for qualitative research, qualitative data analysis, data collection for quantitative research, multiple regression, and additional methods of quantitative analysis. Cases and examples have been updated throughout, increasing the applicability of these research methods across various situations.
We use statistical analyses to test our predictions using the measures we collect for our sample. As with all aspects of study design, we need to think carefully about our choice of analytical approach. Planning our data analysis in detail, before we collect our data, helps to determine what data we need to collect. It is very common to rush past the analysis plan and dive straight into collecting data. This is partly because statistics are not intuitive and can be intimidating. However, statistical analysis is an integral part of study design. We must understand statistics to understand the strengths, limitations, and potential biases of any research. This may seem daunting, but our understanding of statistics determines the quality of a study. The more we think about this now, the better our study will be. I begin this chapter with how to determine what sort of analyses we need and the need to consult a statistician when we design a study. Next, I cover problems associated with multiple testing and assessing multiple predictor variables. I explain how to prepare an analysis plan and suggest pre-registration.
In this article we describe a series of computer algorithms that generate prose rhythm data for any digitised corpus of Latin texts. Using these algorithms, we present prose rhythm data for most major extant Latin prose authors from Cato the Elder through the second century a.d. Next we offer a new approach to determining the statistical significance of such data. We show that, while only some Latin authors adhere to the Ciceronian rhythmic canon, every Latin author is ‘rhythmical’ — they just choose different rhythms. Then we give answers to some particular questions based on our data and statistical approach, focusing on Cicero, Sallust, Tacitus and Pliny the Younger. In addition to providing comprehensive new data on Latin prose rhythm, presenting new results based on that data and confirming certain long-standing beliefs, we hope to make a contribution to a discussion of digital and statistical methodology in the study of Latin prose rhythm and in Classics more generally. The Supplementary Material available online (https://doi.org/10.1017/S0075435819000881) contains an appendix with tables, data and code. This appendix constitutes a static ‘version of record’ for the data presented in this article, but we expect to continue to update our code and data; updates can be found in the repository of the Classical Language Toolkit (https://github.com/cltk/cltk).
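The sketch below is a hypothetical miniature of the two steps involved: counting sentence endings that match a set of favoured rhythmic patterns, and judging significance against a shuffled baseline. The encoding, pattern set and endings are invented stand-ins, not the article's algorithms or data.

```python
# Hypothetical miniature (not the article's algorithms): '-' = long, 'u' = short,
# final syllable treated as anceps; the pattern set and endings are invented.
import random

PATTERNS = {"-u--", "-u--u-", "---u-"}   # illustrative stand-ins for favoured clausulae

def matches(ending: str) -> bool:
    # compare the end of the string against each pattern, ignoring the final syllable
    return any(len(ending) >= len(p) and ending[-len(p):-1] == p[:-1] for p in PATTERNS)

endings = ["-u--", "uu-u-", "-u--u-", "--uu-", "-u-u-", "uuu--", "---u-", "-uu--"]
observed = sum(matches(e) for e in endings) / len(endings)

# baseline: shuffle the syllable quantities within each ending and recount
random.seed(0)
baseline = []
for _ in range(10_000):
    shuffled = ["".join(random.sample(e, len(e))) for e in endings]
    baseline.append(sum(matches(s) for s in shuffled) / len(endings))
p_value = sum(b >= observed for b in baseline) / len(baseline)

print(f"observed match rate {observed:.2f}, shuffled-baseline p = {p_value:.3f}")
```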
This paper proposes a framework to assess the influence of Offshore Wind Farms (OWFs) on maritime traffic flow based on raw Automatic Identification System (AIS) data collected before and after the installation of the offshore wind turbines. The framework includes modules for data acquisition, data filtering and statistical analysis. The statistical analysis characterises the influence of an OWF on maritime traffic in terms of minimum passing distances and lateral distribution of the ship trajectories near the OWF. The framework is applied to a specific route for which AIS data is available before and after an OWF installation. The impacts of the OWF on marine traffic are diverse and depend on the ship type categories. This paper quantitatively characterises an OWF's influence on a specific route that is probabilistically modelled, which is important for further studies on OWF site selection and maritime traffic risk assessment and management.
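The sketch below is a minimal, hypothetical illustration of the two statistics the framework reports for a route: the minimum passing distance of each track to the wind farm (reduced here to a single reference point) and the lateral distribution of tracks as signed offsets from a nominal route line. Coordinates are assumed to be in a local metric system, and the tracks are invented.

```python
# Hypothetical sketch: minimum passing distances and lateral offsets of ship tracks
# relative to an offshore wind farm reference point and a nominal route line.
import numpy as np

owf = np.array([2000.0, 1500.0])                                    # wind farm reference point (m)
route_p0, route_dir = np.array([0.0, 0.0]), np.array([1.0, 0.0])    # nominal route line
normal = np.array([-route_dir[1], route_dir[0]])                    # left-hand normal

tracks = [
    np.array([[0, 200], [1000, 250], [2000, 300], [3000, 280]], float),
    np.array([[0, -150], [1000, -120], [2000, -90], [3000, -60]], float),
]

min_pass = [np.min(np.linalg.norm(t - owf, axis=1)) for t in tracks]
lateral = [np.mean((t - route_p0) @ normal) for t in tracks]        # mean signed offset per track

print("minimum passing distances (m):", np.round(min_pass, 1))
print("mean lateral offsets (m):     ", np.round(lateral, 1))
print("lateral distribution: mean = %.1f m, std = %.1f m"
      % (np.mean(lateral), np.std(lateral)))
```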
The numerous kaolin deposits located in Patagonia, Argentina, have been formed by hypogene or supergene processes. The primary origin has been established from the 18O and D isotopic compositions of the main minerals, kaolinite and/or dickite, and from the behaviour of certain elements during alteration. The aim of this paper was to determine whether there is a tool, other than oxygen-deuterium data, to establish the origin of the Patagonian kaolin deposits. To handle the large number of variables per sample, a multivariate statistical study was used. The principal component method defines, on the one hand, the variables that best characterize each deposit and, on the other hand, the correlations between them. Fifty-seven elements were considered, and those that were not explained by the first two components (which represent 75% of the total variance of the model) were discarded. As a result, the contents of Fe2O3, P2O5, LOI, Sr, Y, Zr, V, Pb, Hf, Rb, S and REE were used, and the results show that the two components separate the deposits into two fields that are consistent with the process of formation. The first component indicates that Fe2O3, Y, Rb, U and HREE are more abundant in the supergene deposits, whereas Sr, Pb, S and V are more abundant in the hypogene deposits. The second component shows that S, P2O5 and the LREE are enriched in the hydrothermal deposits, whereas Zr is more abundant in those formed under weathering conditions.
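A minimal sketch of a principal component analysis of this kind is shown below, using standardised element concentrations; the tiny sample matrix and values are invented placeholders, not the paper's dataset.

```python
# Minimal sketch: PCA of standardised element concentrations for a few samples.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

samples = pd.DataFrame(
    {"Fe2O3": [0.8, 1.5, 0.4, 2.1], "P2O5": [0.10, 0.05, 0.20, 0.04],
     "Sr": [120, 300, 90, 400], "Zr": [210, 90, 260, 70],
     "Y": [25, 8, 30, 6], "Pb": [12, 45, 10, 60]},
    index=["deposit1", "deposit2", "deposit3", "deposit4"],
)

X = StandardScaler().fit_transform(samples)
pca = PCA(n_components=2)
scores = pca.fit_transform(X)                 # sample coordinates on PC1/PC2
loadings = pd.DataFrame(pca.components_.T, index=samples.columns, columns=["PC1", "PC2"])

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
print(loadings.round(2))   # which elements load on each component
print(np.round(scores, 2)) # how the samples separate in PC space
```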
More than 19,000 analytical data, mainly from the literature, were used to study statistically the distribution patterns of F and the oxides of minor and trace elements (Ti, Sn, Sc, V, Cr, Ga, Mn, Co, Ni, Zn, Sr, Ba, Rb, Cs) in trioctahedral micas of the system phlogopite-annite/siderophyllite-polylithionite (PASP), which is divided here into seven varieties whose compositional ranges are defined by the parameter mgli (= octahedral Mg minus Li). Plots of trace-element contents vs. mgli reveal that the elements form distinct groups according to the configuration of their distribution patterns. Substitution of most of these elements was established as a function of mgli. Micas incorporate these elements at abundances that span up to four orders of magnitude between the concentration highs and lows in micas of ‘normal’ composition. Only Zn, Sr and Sc are poorly correlated with mgli. In compositional extremes, some elements (Zn, Mn, Ba, Sr, Cs, Rb) may be enriched by up to 2–3 orders of magnitude relative to their mean abundance in the respective mica variety. Mica/melt partition coefficients calculated for Variscan granites of the German Erzgebirge demonstrate that trace-element partitioning is strongly dependent on the position of the mica in the PASP system, which has to be considered in petrogenetic modelling.
This review indicates that for a number of trace elements, the concentration ranges are poorly known for some of the mica varieties, as they are for particular host rocks (i.e. igneous rocks of A-type affiliation). The study should help to develop optimal analytical strategies and to provide a tool to distinguish between micas of ‘normal’ and ‘abnormal’ trace-element composition.
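As a worked illustration of the mica/melt partition coefficients mentioned above, the short sketch below applies the defining ratio D = C(mica) / C(melt) to invented concentrations; the values are not from the Erzgebirge granites.

```python
# Worked example of a partition coefficient: D = concentration in mica / concentration in melt.
rb_in_mica_ppm = 1200.0   # hypothetical Rb content of a lithian mica
rb_in_melt_ppm = 400.0    # hypothetical Rb content of the coexisting melt

d_rb = rb_in_mica_ppm / rb_in_melt_ppm
print(f"D(Rb, mica/melt) = {d_rb:.1f}")   # D > 1 means Rb partitions into the mica
```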
This research investigates the long-forgotten relationship between diphtheria and tuberculosis. Historical medical reports from the late 19th century are reviewed, followed by a statistical regression analysis of the relationship between the two diseases in the early 20th century. Historical medical reports show a consistent association between diphtheria and tuberculosis that can increase the likelihood and severity of either disease in a co-infection. The statistical analysis uses historical weekly public health data on reported cases in five American cities over a period of several years, finding a modest but statistically significant relationship between the two diseases. No current medical theory explains the association between diphtheria and tuberculosis. Alternative explanations are explored with a focus on how the diseases assimilate iron. In a co-infection, the effectiveness of tuberculosis at assimilating extracellular iron may lead to increased production of diphtheria toxin, worsening that disease, which may, in turn, exacerbate tuberculosis. Iron-dependent repressor genes connect both diseases.
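The sketch below shows the general shape of such a regression: weekly diphtheria counts regressed on weekly tuberculosis counts with city fixed effects. The data frame is invented, and the article's exact specification may differ.

```python
# Hypothetical sketch: weekly diphtheria cases regressed on weekly tuberculosis cases
# with city fixed effects. The tiny data frame is invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

weekly = pd.DataFrame({
    "city":         ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "diphtheria":   [14, 18, 11, 30, 27, 35, 8, 6, 10],
    "tuberculosis": [22, 25, 19, 41, 38, 45, 12, 10, 15],
})

model = smf.ols("diphtheria ~ tuberculosis + C(city)", data=weekly).fit()
print(model.summary())
# The coefficient on tuberculosis estimates the within-city change in weekly diphtheria
# cases associated with one additional reported tuberculosis case.
```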
Whether the latitudinal distribution of climate-sensitive lithologies is stable through greenhouse and icehouse regimes remains unclear. Previous studies suggest that the palaeolatitudinal distribution of palaeoclimate indicators, including coals, evaporites, reefs and carbonates, has remained broadly similar since the Permian period, leading to the conclusion that atmospheric and oceanic circulation, rather than the latitudinal temperature gradient, control their distribution. Here we revisit a global-scale compilation of lithologic indicators of climate, including coals, evaporites and glacial deposits, back to the Devonian period. We test the sensitivity of their latitudinal distributions to the uneven distribution of continental areas through time and to the choice of global tectonic model, correct the latitudinal distributions of lithologies for sampling and continental-area bias, and use statistical methods to fit these distributions with probability density functions and estimate their high-density latitudinal ranges with 50% and 95% confidence intervals. The results suggest that the palaeolatitudinal distributions of lithologies have changed through deep geological time, notably with a pronounced poleward shift in the distribution of coals at the beginning of the Permian. The distribution of evaporites is clearly bimodal over the past ~400 Ma, except for Early Devonian, Early Carboniferous, the earliest Permian and Middle and Late Jurassic times. We discuss how the patterns indicated by these lithologies change through time in response to plate motion, orography, evolution and greenhouse/icehouse conditions. This study highlights that combining tectonic reconstructions with a comprehensive lithologic database and novel data analysis approaches provides insights into the nature and causes of shifting climatic zones through deep time.
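As an illustration of the final statistical step, the sketch below fits a kernel density to invented palaeolatitudes and reports the latitudinal span of the cells containing 50% and 95% of the probability mass; the study's actual density functions and bias corrections are not reproduced here.

```python
# Hypothetical sketch: kernel density of palaeolatitudes and the span of the
# highest-density grid cells containing 50% and 95% of the probability mass.
import numpy as np
from scipy.stats import gaussian_kde

latitudes = np.array([-38, -35, -33, -30, -28, -27, 25, 28, 30, 31, 33, 36, 40])
grid = np.linspace(-90, 90, 721)
density = gaussian_kde(latitudes)(grid)
density /= density.sum()

def highest_density_range(grid, density, mass):
    order = np.argsort(density)[::-1]                 # grid cells, densest first
    keep = order[np.cumsum(density[order]) <= mass]
    lats = np.sort(grid[keep])
    # note: for a bimodal density this reports the overall span of the high-density
    # cells, not each mode separately
    return lats.min(), lats.max()

for mass in (0.50, 0.95):
    lo, hi = highest_density_range(grid, density, mass)
    print(f"{int(mass*100)}% highest-density latitudes: {lo:.1f} to {hi:.1f} degrees")
```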