Most farm assurance schemes in the UK aim, at least in part, to provide assurances to consumers and retailers of compliance with welfare standards. Including welfare outcome assessments in the relevant inspection procedures provides a mechanism to improve animal welfare within assurance schemes. In this study, taking laying hens as an example, we describe a process for dealing with the practical difficulties of achieving this in two UK schemes: Freedom Food and the Soil Association. The key challenges lie in selecting the most appropriate measures, defining sampling strategies that are feasible and robust, ensuring assessors can deliver a consistent evaluation, and establishing a mechanism to achieve positive change. After a consultation exercise and pilot study, five measures (feather cover, cleanliness, aggressive behaviour, management of sick or injured birds, and beak trimming) were included within the inspection procedures of the schemes. The chosen sampling strategy of assessing 50 birds without handling provided reasonable certainty at a scheme level but less certainty at an individual farm level. Despite the inherent limitations of a time- and cost-sensitive certification assessment, the approach adopted provides a foundation for welfare improvement: it highlights areas of concern requiring attention, enables schemes to promote the use of outcome scoring as a management tool, promotes the timely dissemination of relevant technical information, and increases the scrutiny of standards important for the welfare of the birds.
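The trade-off between scheme-level and farm-level certainty noted above can be illustrated with a quick prevalence calculation. This is our own sketch, not part of the schemes' protocol: the 20% prevalence figure is hypothetical, and a simple normal-approximation (Wald) interval is used for clarity.

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a prevalence estimate."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

# Suppose 10 of 50 assessed birds (20%) show poor feather cover on one farm.
lo, hi = wald_ci(0.20, 50)
print(f"single farm (n=50):    {lo:.2f}-{hi:.2f}")   # roughly 0.09-0.31: wide

# Pooling 40 such farms (n=2000) gives much tighter scheme-level certainty.
lo, hi = wald_ci(0.20, 2000)
print(f"scheme level (n=2000): {lo:.2f}-{hi:.2f}")   # roughly 0.18-0.22
```

The interval for a single 50-bird sample spans over twenty percentage points, which is why the strategy is more informative in aggregate than for certifying an individual farm.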
The present research addresses advice taking from a holistic perspective covering both advice seeking and weighting. We build on previous theorizing which assumes that underweighting of advice results from biased samples of information: decision makers have more knowledge supporting their own judgment than another person’s and thus weight the former more strongly than the latter. In the present approach, we assume that participants reduce this informational asymmetry by sampling advice and that sampling frequency depends on the information ecology. Advice that is distant from the decision maker’s initial estimate should lead to more frequent advice sampling than close advice. Moreover, we assume that advice distant from the decision maker’s initial estimate, and advice supported by larger samples of advisory estimates, is weighted more strongly in the final judgment. We expand the classical research paradigm with a sampling phase that allows participants to sample any number of advisory estimates before revising their judgments. Three experiments strongly support these hypotheses, thereby advancing our understanding of advice taking as an adaptive process.
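The weighting of advice in this paradigm is commonly quantified with the weight-of-advice (WOA) ratio, the judge's shift toward the advice divided by the distance to the advice. A minimal sketch (the numbers are hypothetical, and this is the standard measure from the literature, not necessarily the exact computation used in these experiments):

```python
def weight_of_advice(initial, advice, final):
    """WOA = (final - initial) / (advice - initial).
    0 = advice ignored, 0.5 = equal weighting, 1 = full adoption."""
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# A judge first estimates 100, receives advice of 140, and revises to 110:
print(weight_of_advice(100, 140, 110))  # 0.25 -> advice underweighted
```

Values well below 0.5, as in this example, are the "underweighting of advice" the abstract refers to.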
We introduce a variant of Shepp’s classical urn problem in which the optimal stopper does not know whether sampling from the urn is done with or without replacement. By considering the problem’s continuous-time analog, we provide bounds on the value function and, in the case of a balanced urn (with an equal number of each ball type), an explicit solution is found. Surprisingly, the optimal strategy for the balanced urn is the same as in the classical urn problem. However, the expected value upon stopping is lower due to the additional uncertainty present.
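The setting can be made concrete with a small simulation. The stopping rule below ("stop at the first positive running sum") is a naive illustrative policy chosen for simplicity, not the optimal strategy from the paper; the point is only to show how with- versus without-replacement sampling from the same balanced urn changes the stopped payoff.

```python
import random

def play(n_plus, n_minus, with_replacement, rng):
    """Draw +1/-1 balls from an urn, stopping at the first positive running
    sum (a naive illustrative rule, not the optimal stopping policy).
    Returns the sum at the moment of stopping."""
    balls = [1] * n_plus + [-1] * n_minus
    max_draws = len(balls)
    total = 0
    for _ in range(max_draws):
        if with_replacement:
            total += rng.choice(balls)          # composition never changes
        else:
            total += balls.pop(rng.randrange(len(balls)))
        if total > 0:
            return total
    return total

rng = random.Random(0)
for mode in (False, True):
    vals = [play(5, 5, mode, rng) for _ in range(20000)]
    label = "with replacement   " if mode else "without replacement"
    print(label, sum(vals) / len(vals))
```

Without replacement, a balanced 5+5 urn ends at sum 0 if the rule never fires, and a ballot-style argument gives stopping probability n/(n+1) = 5/6, so the simulated mean is about 0.83; the with-replacement runs show a different (lower) payoff under the same rule, reflecting the extra uncertainty the abstract mentions.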
Replication is an important tool used to test and develop scientific theories. Areas of biomedical and psychological research have experienced a replication crisis, in which many published findings failed to replicate. Following this, many other scientific disciplines have been interested in the robustness of their own findings. This chapter examines replication in primate cognitive studies. First, it discusses the frequency and success of replication studies in primate cognition and explores the challenges researchers face when designing and interpreting replication studies across the wide range of research designs used across the field. Next, it discusses the type of research that can probe the robustness of published findings, especially when replication studies are difficult to perform. The chapter concludes with a discussion of different roles that replication can have in primate cognition research.
Our exploration of the 4D Framework uses an eclectic set of methodological techniques; Chapter 3 is an overview of the methodological core of our inquiry. We explain the key operationalizations of the 4D Framework and provide context and details for the studies that appear in multiple chapters throughout the book. We specifically describe how we measure contextual features of a discussion, such as disagreement, as well as the scales we use to measure individual dispositions, such as social anxiety. We explain the utility of psychophysiological data for our purposes and describe the research design details for the studies that use psychophysiological data. We provide details on our survey samples from which several analyses throughout the book are derived.
Chapter 5 describes the process of constructing the MI 1984–2014 Corpus, from compiling the sampling frame (e.g. search terms, dates covered) to the compilation procedure for the illness and year subcorpora. In particular, a detailed discussion of the interpretative status of search terms is provided. Practical issues related to compiling corpora, such as cleaning the data, are discussed. Furthermore, the problems that the interdisciplinary nature of corpus construction poses for the researcher are outlined.
Two methods are applied to detect differences between corpus (sub)registers, exemplified by the press editorials sections in the British, Canadian and Jamaican components of the International Corpus of English. By design, these methods are apt to target differences between varieties that are represented by putatively comparable corpus material, but it turns out that many of the observed differences can in fact be laid at the door of the different sampling strategies applied by corpus compilers. In the example at hand, contrasts can be traced back to the division into institutional and personal editorials. This finding gives rise to a call for higher granularity of sampling schemes, richer metadata (e.g. on the situational characteristics of the language samples included), and better documentation. As for the methods chosen, the author demonstrates that corpus-driven profiling based either on POS monograms or on higher-level multi-dimensional analysis performs reasonably well, with smaller differences in robustness and computational expense.
Chapter 2 starts by placing experiments in the scientific process – experiments are only useful in the context of well-motivated questions, thoughtful theories, and falsifiable hypotheses. The author then turns to sampling and measurement, since careful attention to these topics, though often neglected by experimentalists, is imperative. The remainder of Chapter 2 offers a detailed discussion of causal inference that is used to motivate an inclusive definition of “experiments.” The author views this as more than a pedantic exercise, as careful consideration of approaches to causal inference reveals the often implicit assumptions that underlie all experiments. The chapter concludes by touching on the different goals experiments may have and the basics of analysis. The chapter serves as a reminder of the underlying logic of experimentation and the type of mindset one should have when designing experiments. A central point concerns the importance of counterfactual thinking, which pushes experimentalists to think carefully about the precise comparisons needed to test a causal claim.
We show that attending to domain considerations in corpus design involves three steps: (1) describing the domain as fully as possible; (2) operationalizing the domain; (3) sampling the texts. Describing the domain requires defining its boundaries: which texts belong within the domain and which do not? It also requires identifying important internal categories of texts that reflect qualitative variation within the domain. Domain description should be carried out systematically using a range of sources that can be evaluated for quality and triangulated. Operationalizing the domain refers to specifying the set of texts that are available for sampling; operational domains are always precisely bounded and specified. A sampling frame is an itemized list of all texts (from the operational domain) that are available for sampling. A sampling unit is the individual “object” (usually a text) that will be included in the corpus. Stratification is the process of collecting texts according to identified categories within the domain, and is usually desirable in corpus design. Proportionality refers to the relative sizes of strata within the sample; strata can be proportional or equal-sized. Sampling methods can be broadly categorized as random and nonrandom.
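The distinction between proportional and equal-sized strata can be sketched in code. The sampling frame below is entirely hypothetical (two genre strata of unequal size); the function is a generic illustration of stratified sampling, not a tool described in the chapter.

```python
import random

def stratified_sample(frame, strata_key, n_total, proportional=True, seed=0):
    """Draw a stratified sample from a sampling frame.
    proportional=True sizes each stratum by its share of the frame;
    proportional=False uses equal-sized strata."""
    rng = random.Random(seed)
    strata = {}
    for text in frame:
        strata.setdefault(strata_key(text), []).append(text)
    sample = []
    for key, texts in strata.items():
        k = (round(n_total * len(texts) / len(frame)) if proportional
             else n_total // len(strata))
        sample.extend(rng.sample(texts, min(k, len(texts))))
    return sample

# Hypothetical operational domain: 300 news texts and 100 fiction texts.
frame = ([{"id": i, "genre": "news"} for i in range(300)]
         + [{"id": i, "genre": "fiction"} for i in range(100)])
prop = stratified_sample(frame, lambda t: t["genre"], 40, proportional=True)
eq = stratified_sample(frame, lambda t: t["genre"], 40, proportional=False)
print(sum(1 for t in prop if t["genre"] == "news"))  # 30 (proportional strata)
print(sum(1 for t in eq if t["genre"] == "news"))    # 20 (equal-sized strata)
```

Proportional strata mirror the domain's composition; equal-sized strata guarantee enough material for within-category comparison even for rare categories.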
The chapter describes the assumptions and goals of large-n statistical studies and evaluates the extent to which they generate knowledge about international relations.
The goal of the paper is to obtain analogs of the sampling theorems and of the Riesz–Boas interpolation formulas which are relevant to the discrete Hilbert and Kak–Hilbert transforms in $l^{2}$.
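For orientation, the discrete Hilbert transform of a sequence $x = (x_m) \in l^{2}$ is commonly defined as (this standard definition is supplied for context and is not quoted from the paper):

$$(\mathcal{H}x)_n \;=\; \frac{1}{\pi} \sum_{\substack{m \in \mathbb{Z} \\ m \neq n}} \frac{x_m}{\,n - m\,}, \qquad n \in \mathbb{Z},$$

a bounded operator on $l^{2}$ that serves as the discrete analog of the continuous Hilbert transform.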
Since the objective of our research is to examine the influence of the creative potential of language speakers on their creative performance in the formation and interpretation of new/potential complex words, several fundamental methodological principles have to be taken into consideration. The first is the method of measuring the creative potential of language speakers and the methods of testing their creative performance. Therefore, this chapter introduces the Torrance Test of Creative Thinking, accounts for the basic characteristics of its creativity indicators and subscores, and justifies its relevance to our research. Furthermore, it presents a word-formation test and a word-interpretation test and accounts for their objectives and principles of evaluation. A sample of respondents, comprising a group of secondary school students and a group of university students, is introduced. A method of evaluating the data, based on the comparison of two cohorts with opposite scores, is explained. Finally, this chapter presents the hypotheses underlying our research.
Chapter 14 develops methods for reliability-based design optimization (RBDO). Three classes of RBDO problems are considered: minimizing the cost of design subject to reliability constraints, maximizing the reliability subject to a cost constraint, and minimizing the cost of design plus the expected cost of failure subject to reliability and other constraints. The solution of these problems requires the coupling of reliability methods with optimization algorithms. Among many solution methods available in the literature, the main focus in this chapter is on a decoupling approach using FORM, which under certain conditions has proven convergence properties. The approach requires the solution of a sequence of decoupled reliability and optimization problems that are shown to gradually approach a near-optimal solution. Both structural component and system problems are considered. An alternative approach employs sampling to compute the failure probability with the number of samples increasing as the optimal solution point is approached. Also described are approaches that make use of surrogate models constructed in the augmented space of random variables and design parameters. Finally, the concept of buffered failure probability is introduced as a measure closely related to the failure probability, which provides a convenient alternative in solving the optimization subproblem.
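The sampling-based approach mentioned above can be sketched with crude Monte Carlo. This is a deliberately simple illustration of estimating a failure probability with an increasing number of samples; it is not the FORM-based decoupling method the chapter focuses on, and the linear limit-state function is a hypothetical example chosen so the exact answer is known.

```python
import math
import random

def failure_probability(g, dim, n, seed=0):
    """Crude Monte Carlo estimate of p_f = P(g(U) <= 0) for a limit-state
    function g of standard-normal variables U."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if g([rng.gauss(0, 1) for _ in range(dim)]) <= 0)
    return fails / n

# Hypothetical linear limit state g(u) = beta - u1, with exact p_f = Phi(-beta).
beta = 2.0
g = lambda u: beta - u[0]

# Increase the sample count as the design iterates toward the optimum,
# refining the estimate.
for n in (1_000, 10_000, 100_000):
    print(n, failure_probability(g, 1, n))

print("exact:", 0.5 * math.erfc(beta / math.sqrt(2)))  # about 0.0228
```

For a linear limit state in standard-normal space, the reliability index beta corresponds exactly to p_f = Phi(-beta), which is the same quantity FORM approximates for nonlinear limit states.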
This chapter presents basic information for a wide readership on how accents differ and how those differences are analyzed, then lays out the sample of performances to be studied, the phonemes and word classes to be analyzed, and the methods of phonetic, quantitative, and statistical analysis to be followed.
This chapter examines skills developed by, and brought into play in, fieldwork. Progressing from generic skills used and refined through fieldwork, the discussion moves from the geographical nature of skills used across all fieldwork activities to the key geographical skills and tools that can be drawn upon to construct authentic fieldwork experiences for students. Fieldwork has always been an important facet of geography, helping to inform, validate, and consolidate the study of people and place. Fieldwork remains, to this day, rather simple and straightforward: it involves the gathering of primary data in the field. The ‘process’ of fieldwork occurs through the use and application of a wide variety of geographic and generic skills. The following discussion of fieldwork skills will examine the place of:
• fieldwork skills in students’ wider learning,
• fieldwork skills for thinking geographically,
• specific geographic fieldwork skills,
• geographic fieldwork tools and technology.
Chapter 4 focuses on lay people tackling small-scale reasoning problems in the lab. I identify five main principles that summarize a broad range of empirical findings. I argue that these principles – especially the use of causal thinking and simulation – are basic to human thought and not an optional add-on or rule of thumb. I also discuss the difficulties people have in evaluating their own reasoning processes.
Given a combinatorial search problem, it may be highly useful to enumerate all of its solutions, beyond just finding one solution or showing that none exists. The same can be said about optimal solutions if an objective function is provided. This work goes beyond the bare enumeration of optimal solutions and addresses the computational task of solution enumeration by optimality (SEO). This task is studied in the context of answer set programming (ASP), where (optimal) solutions of a problem are captured by the answer sets of a logic program encoding the problem. Existing answer set solvers already support the enumeration of all (optimal) answer sets. In this work, however, we generalize the enumeration of optimal answer sets beyond strictly optimal ones, giving rise to answer set enumeration in the order of optimality (ASEO). This approach is applicable to the best k answer sets or, in an unlimited setting, amounts to a process of sorting answer sets based on the objective function. As the main contribution of this work, we present the first general algorithms for the aforementioned tasks of answer set enumeration. Moreover, we illustrate the potential use cases of ASEO. First, we study how efficiently access to the next-best solutions can be achieved in a number of optimization problems that have been formalized and solved in ASP. Second, we show that ASEO provides an effective sampling technique for Bayesian networks.
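The general idea of enumeration in the order of optimality can be illustrated outside ASP. The toy sketch below, which is our own generic example and not one of the ASEO algorithms from this work, lazily yields the k cheapest subsets of a cost set in nondecreasing order of objective value, without materializing all 2^n subsets, using a best-first search over a heap.

```python
import heapq

def subsets_by_cost(costs, k):
    """Return the total costs of the k cheapest subsets of `costs`, in
    nondecreasing order, via lazy best-first search (no full enumeration)."""
    costs = sorted(costs)
    out = [0]                              # the empty subset is always cheapest
    heap = [(costs[0], 0)] if costs else []
    while heap and len(out) < k:
        s, i = heapq.heappop(heap)         # next-best solution
        out.append(s)
        if i + 1 < len(costs):
            # extend the subset with the next item, or swap its last item
            heapq.heappush(heap, (s + costs[i + 1], i + 1))
            heapq.heappush(heap, (s - costs[i] + costs[i + 1], i + 1))
    return out[:k]

print(subsets_by_cost([4, 1, 3], 5))  # [0, 1, 3, 4, 4]
```

As with ASEO, the enumeration can stop after the best k solutions or run to completion, in which case it amounts to sorting all solutions by the objective function.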
Chapter 1 provides introductory information on empiricism, the stages and elements of the research process; quality criteria; basic types of research questions and methodological approaches, data collection, documentation and analysis, interpretation, reflection and presentation of the findings; and research ethics. Chapter 1 also contains detailed instructions for keeping research diaries and making poster presentations as forms of documentation and presentation which we recommend for student projects, as well as short exercises and recommendations for further reading.
Roller dung beetles play a pivotal role in nutrient distribution in soil and the secondary dispersal of seeds. Dung beetles are sampled using either a dung-baited pitfall trap or an exposed dung pile on the ground. While the former method is useful for a rapid survey of dung beetles, it loses information on their ecology and behaviour that the latter method provides; the latter, however, underestimates species diversity due to its inefficiency in trapping rollers. The efficiency of a new method for sampling rollers—installing guarding pitfall traps around dung piles—is assessed in three habitats—contiguous tropical rainforests, fragmented forests, and disturbed home gardens—and two diel periods—day and night. Five guarding pitfall traps were installed at a 50 cm radius around dung piles. About 98% of the total rollers were sampled in the pitfall traps. The habitats appeared similar when only the roller catches of the dung piles—the conventional approach—were analyzed, but differed when the rollers of the guarding pitfall traps were considered. Roller abundance was negatively affected by forest fragmentation and land-use change, and about 98% of the rollers were collected during the daytime. Using guarding pitfall traps around dung piles is highly recommended for dung beetle diversity studies.