This book opens with some introductory notes on the two ecumenical synods, marking the discrepancy between their importance in the festival world of the Principate and the obscurity into which they have fallen in present-day scholarship. This is mainly due to the extremely fragmentary source material on their history and organisation. The ecumenical synods are known chiefly from inscriptions, often heavily damaged, and from papyri from Egypt. These diverse sources present us with a complex and often contradictory picture. The most important documents for this study are decrees drawn up by the synods, their correspondence with emperors, and membership certificates. A great variety of names and titles further complicates our understanding of the synods. Nevertheless, a number of basic elements recur in the documents promulgated by the synods themselves, and these are discussed briefly. The final part of the introduction sets out the structure of the book as well as the basic principles that form the core of the argumentation.
To be able to assess animal welfare, the researcher must presuppose a number of background assumptions that cannot be tested by means of ordinary empirical data collection. To substantiate these assumptions, two sorts of inferences have to be relied upon, which the authors designate by the terms ‘analogies’ and ‘homologies’. Analogies are evaluative, philosophical reflections by means of which it is made clear what provisions or states constitute the welfare of humans and other animals. By means of analogies it may, for example, be argued that animal welfare consists of subjective experiences such as pain, boredom, pleasure and expectation. Analogies also allow the relative ‘weight’ of these states to be decided. Homologies are part of theoretical science; they serve to clarify how the relevant experiences are linked to measurable anatomical, physiological and behavioural parameters.
An account is given of the steps that have to be taken to answer fully a question concerning the welfare of animals. Only farm animals are mentioned in the account, but the same steps, of course, also have to be taken to answer questions concerning the welfare of other kinds of animals, be they companion, laboratory, zoo or wild. Eight steps are described, and it is argued that both analogies and homologies are needed at very fundamental levels. Therefore, if animal welfare science is to provide relevant, rational and reliable answers to questions concerning animal welfare, it must be an interdisciplinary inquiry involving philosophical reflection and theoretical biology.
Open government and open data are often presented as the Asterix and Obelix of modern government: one cannot be discussed without involving the other. Modern government, in this narrative, should open itself up, be more transparent, and allow the governed a say in their governance. The use of technologies, and especially the communication of governmental data, is then thought to be one of the crucial instruments helping governments achieve these goals. Much open government data research hence focuses on the publication of open government data, their reuse, and their re-users. Recent research trends, by contrast, depart from this focus on data and emphasize the importance of studying open government data in practice, in interaction with practitioners, while simultaneously paying attention to their political character. This commentary looks more closely at the implications of emphasizing the practical and political dimensions of open government data. It argues that researchers should explicate how and in what way open government data policies present solutions, and to what kinds of problems. Such explications should be based on detailed empirical analysis of how different actors do or do not do open data. The key question to be continuously asked and answered when studying and implementing open government data is how the solutions openness presents latch onto the problems they aim to solve.
This systematic literature review aimed to provide an overview of the characteristics and methods of studies applying the disability-adjusted life year (DALY) concept to infectious diseases within European Union (EU)/European Economic Area (EEA)/European Free Trade Association (EFTA) countries and the United Kingdom. Electronic databases and grey literature were searched for articles reporting the assessment of DALYs and their components. We considered studies in which researchers performed DALY calculations using primary epidemiological data input sources. We screened 3053 studies, of which 2948 were excluded; 105 studies met our inclusion criteria. Of these, 22 were multi-country and 83 were single-country studies, of which 46 were from the Netherlands. Food- and water-borne diseases were the most frequently studied infectious diseases. The number of burden of infectious disease studies published between 2015 and 2022 was 1.6 times that published between 2000 and 2014. Almost all studies (97%) estimated DALYs using the incidence- and pathogen-based approach and without social weighting functions; however, there was less methodological consensus with regard to the disability weights and life tables that were applied. The number of burden of infectious disease studies undertaken across Europe has increased over time. The development and use of guidelines will encourage further burden of infectious disease studies and facilitate comparability of their results.
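To make the calculation referred to above concrete, here is a minimal Python sketch of an incidence-based DALY computation without social weighting, the approach nearly all of the reviewed studies take. The function names and input values are hypothetical illustrations, not figures from the reviewed studies.

```python
# Minimal sketch of an incidence- and pathogen-based DALY calculation,
# without social weighting. All parameter names and example values are
# illustrative only, not drawn from the reviewed studies.

def yld(incident_cases: float, disability_weight: float, duration_years: float) -> float:
    """Years Lived with Disability for one health outcome."""
    return incident_cases * disability_weight * duration_years

def yll(deaths: float, residual_life_expectancy: float) -> float:
    """Years of Life Lost, using a life-table residual life expectancy."""
    return deaths * residual_life_expectancy

def daly(outcomes, deaths: float, residual_life_expectancy: float) -> float:
    """DALY = sum of YLD over health outcomes + YLL."""
    return sum(yld(*o) for o in outcomes) + yll(deaths, residual_life_expectancy)

# Hypothetical pathogen: 10,000 mild and 500 severe cases, 15 deaths at a
# mean residual life expectancy of 35 years.
outcomes = [
    (10_000, 0.05, 0.02),  # mild illness: weight 0.05, about one week
    (500, 0.25, 0.50),     # severe illness: weight 0.25, six months
]
print(f"Total DALYs: {daly(outcomes, deaths=15, residual_life_expectancy=35):,.0f}")
```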
Introduces the volume, identifying themes, methodology and goals; positions it in relation to other works; and outlines the chapters and their running order as well as those features that unite chapters or lead from one to the next.
In the domain of moral decision making, models in which emotion and deliberation constitute competing dual systems have become increasingly popular. Currently, the favored explanation of this interaction is what Evans (2008) termed a “default-interventionist” (DI) process, whereby moral decisions result from a prepotent emotional response that can be overridden with substantial deliberative effort. Although this “emotion-then-deliberation” sequence is often assumed, existing methods have lacked the process resolution needed to clearly depict the nature of this interaction. The present work utilized continuous mouse tracking, or response dynamics, to develop and test predictions of these DI models of moral decision making. Study 1 utilized previously published moral dilemmas to validate the method for use with such complex stimuli. Although the data replicated typical choice and response time (RT) patterns, the process metrics provided by the response trajectories did not demonstrate the online preference reversals predicted by DI models. Study 2 utilized more rigorously constructed stimuli and an alternative presentation format to provide the strongest possible test of DI predictions, but again failed to show the predicted reversals. In summary, neither experiment provided data in accordance with the predictions of popular DI dual-systems models, which suggests that researchers should consider models allowing for concurrent activation of deliberative and emotional systems, or reconceptualize moral decisions within the typical multiattribute decision framework.
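For intuition, here is a minimal Python sketch, not the authors' analysis code, of how an "online preference reversal" can be operationalized in a mouse trajectory: a sign change in horizontal velocity, assuming one response option sits on the left of the screen (negative x) and the other on the right (positive x). The coordinates are fabricated.

```python
# Sketch: count online preference reversals as sign changes in horizontal
# velocity along a mouse trajectory. Illustrative only; the trajectory
# below is fabricated.

def count_x_reversals(xs: list[float]) -> int:
    """Count sign changes in horizontal velocity along a trajectory."""
    velocities = [b - a for a, b in zip(xs, xs[1:])]
    signs = [v for v in velocities if v != 0]
    return sum(1 for s0, s1 in zip(signs, signs[1:]) if (s0 > 0) != (s1 > 0))

# A default-interventionist account predicts trajectories like this one:
# initial movement toward one option, then a late reversal toward the other.
xs = [0.0, 0.2, 0.4, 0.5, 0.3, -0.1, -0.6, -1.0]
print(count_x_reversals(xs))  # -> 1
```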
Unless the benefits to society of measures to protect and improve the welfare of animals are made transparent by means of their valuation, they are likely to go unrecognised and cannot easily be weighed against the costs of such measures as required, for example, by policy-makers. A simple, single-measure scoring system, based on the Welfare Quality® index, is used, together with a choice experiment economic valuation method, to estimate the value that people place on improvements to the welfare of different farm animal species measured on a continuous (0-100) scale. Results from using the method on a survey sample of some 300 people show that it is able to elicit apparently credible values. The survey found that 96% of respondents thought that we have a moral obligation to safeguard the welfare of animals and that over 72% were concerned about the way farm animals are treated. Estimated mean annual willingness to pay for meat from animals whose welfare was improved by just one point on the scale was £5.24 for beef cattle, £4.57 for pigs and £5.10 for meat chickens. Further development of the method is required to capture the total economic value of animal welfare benefits. Despite this, the method is considered a practical means of obtaining economic values that can be used in the cost-benefit appraisal of policy measures intended to improve the welfare of animals.
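In choice experiment studies of this kind, marginal willingness to pay (WTP) is commonly derived as the negative ratio of an attribute coefficient to the price coefficient from a conditional logit model. The sketch below shows that arithmetic only; the coefficients are hypothetical values chosen to reproduce the £5.24 figure, not estimates from this paper.

```python
# Illustrative only: marginal WTP from a conditional logit choice model is
# commonly computed as -(attribute coefficient / price coefficient).
# The coefficients below are hypothetical, not the paper's estimates.

def marginal_wtp(beta_attribute: float, beta_price: float) -> float:
    """WTP per one-unit attribute change = -beta_attribute / beta_price."""
    return -beta_attribute / beta_price

# Hypothetical estimates: utility rises 0.0262 per welfare point and falls
# 0.005 per pound of price, implying about 5.24 GBP per point.
print(f"WTP per welfare point: £{marginal_wtp(0.0262, -0.005):.2f}")
```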
There are competing conceptions of animal welfare in the scientific literature, and debate among proponents of these various conceptions continues. This paper examines methodologies for justifying a conception of animal welfare. It is argued that philosophical methodology relying on conceptual analysis has a central role to play in this debate. To begin, the traditional division between facts and values is refined by distinguishing different types of values, or norms. Once this distinction is made, it is argued that the common recognition that any conception of animal welfare is inherently normative is correct, but that it is not ethical normativity that is at issue. The sort of philosophical methodology appropriate for investigating the competing normative conceptions of animal welfare is explained. Finally, the threads of the paper are brought together to consider the appropriate role of recent empirical work on folk conceptions of animal welfare in determining the proper conception of animal welfare. It is argued that empirical results about folk conceptions are useful inputs into conceptual philosophical investigation of the competing conceptions of animal welfare. Further mutual inquiry by philosophers and animal welfare scientists is needed to advance our knowledge of what animal welfare is.
In a series of recent experiments (Davis, Millner and Reilly, 2005; Eckel and Grossman, 2003, 2005a-c, 2006), matching subsidies generate significantly higher charity receipts than do theoretically equivalent rebate subsidies. This paper reports a laboratory experiment conducted to examine whether the higher receipts are attributable to a relative preference for matching subsidies or to an “isolation effect” (McCaffery and Baron, 2003, 2006). Some potential policy implications of isolation effects on charitable contributions are also considered.
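The "theoretical equivalence" invoked here is simple arithmetic: for the same out-of-pocket cost c, a rebate at rate r delivers c/(1-r) to the charity, while a match at rate m delivers c(1+m), so the two coincide when m = r/(1-r). A minimal Python sketch with illustrative numbers:

```python
# Sketch of the theoretical equivalence between rebate and matching
# subsidies. Numbers are illustrative.

def charity_receipt_rebate(out_of_pocket: float, rebate_rate: float) -> float:
    """A rebate refunds part of the gift, so c out of pocket passes c/(1-r)."""
    return out_of_pocket / (1 - rebate_rate)

def charity_receipt_match(out_of_pocket: float, match_rate: float) -> float:
    """A match tops up the gift, so c out of pocket passes c*(1+m)."""
    return out_of_pocket * (1 + match_rate)

c = 10.0          # donor's out-of-pocket cost
r = 0.5           # a 50% rebate ...
m = r / (1 - r)   # ... is equivalent to a 100% (one-for-one) match
assert charity_receipt_rebate(c, r) == charity_receipt_match(c, m)  # both 20.0
```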
Scott, Inbar and Rozin (2016) presented evidence that trait disgust predicts opposition to genetically modified food (GMF). Royzman, Cusimano, and Leeman (2017) argued that these authors did not appropriately measure trait disgust (disgust qua oral inhibition, or OI) and that, once appropriately measured, the hypothesized association between disgust and GMF attitudes was not present. In their commentary, Inbar and Scott (2018) challenge our conclusions in several ways. In this response, we defend our conclusions by showing (a) that OI is psychometrically distinct from other affective categories, (b) that OI is widely held to be the criterial feature of disgust, and (c) that we were well justified in pairing OI with the pathogen-linked vignettes that we used. Furthermore, we extend our critique to the new findings presented by Inbar and Scott (2018); we show that worry and suspicion (not disgust) are the dominant affective states one is likely to experience while thinking about GMF, and that the true prevalence of disgust is about zero. We conclude by underscoring that the present argument and findings are part of a larger body of evidence challenging any causal effect of disgust on morality.
In this introduction to the special issue on methodology, we provide background on its original motivation and a systematic overview of the contributions. The latter are discussed according to the phase of the scientific process to which they (most strongly) relate: theory construction, design, data analysis, and the cumulative development of scientific knowledge. Several contributions propose novel measurement techniques and paradigms that will allow for new insights and can thus benefit researchers in judgment and decision making (JDM) and beyond. Another set of contributions centers on how models can best be tested and/or compared. Especially when viewed in combination, the papers on this topic spell out essential requirements for model comparison and provide approaches that solve noteworthy problems faced by prior work.
The history of judgment and decision making is defined by a trend toward increasingly nuanced explanations of the decision making process. Recently, process models have become highly sophisticated, yet the tools available to test these models directly have not kept pace. These increasingly complex process models require correspondingly rich process data by which they can be adequately tested. We propose a new class of data collection methods that will facilitate the evaluation of sophisticated process models. Tracking mouse paths during a continuous response provides an implicit measure of the growth of preference that produces a choice, rather than the current practice of recording just the button press that indicates the choice itself. Recent research in cognitive science (Spivey & Dale, 2006) has shown that cognitive processing can be revealed in these dynamic motor responses. Unlike current process methodologies, these response dynamics studies can demonstrate continuous competition between choice options and even online preference reversals. Here, in order to demonstrate the mechanics and utility of the methodology, we present an example response dynamics experiment utilizing a common multi-alternative decision task.
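One widely used trajectory summary in response dynamics work is the maximum deviation of the mouse path from the straight line between its start and end points, taken as an index of attraction toward unchosen options. The Python sketch below is a minimal illustration with a fabricated trajectory; nothing in it is taken from the experiment described above.

```python
# Sketch: maximum deviation of a mouse trajectory from the straight line
# connecting its start and end points. The trajectory is fabricated.

import math

def max_deviation(path: list[tuple[float, float]]) -> float:
    """Largest perpendicular distance of any point from the start-end line."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    length = math.hypot(x1 - x0, y1 - y0)  # assumes start != end
    def dist(p: tuple[float, float]) -> float:
        # Perpendicular distance from point p to the line through start/end.
        return abs((x1 - x0) * (y0 - p[1]) - (x0 - p[0]) * (y1 - y0)) / length
    return max(dist(p) for p in path)

# A trajectory that drifts toward a competing option before committing:
path = [(0, 0), (0.1, 0.3), (0.35, 0.55), (0.2, 0.8), (-0.6, 1.0)]
print(f"Maximum deviation: {max_deviation(path):.3f}")
```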
Welfare issues relevant to equids working in developing countries may differ greatly from those of sport and companion equids in developed countries. In this study, we test the observer reliability of a working equine welfare assessment, demonstrating how the prevalence of certain observations reduces reliability ratings. The assessment included behaviour, general health, wounds, and limb and foot pathologies. In Study 1, agreement between five observers and their trainer (the ‘gold standard’) was assessed using 80 horses and 80 donkeys in India. Intra-observer agreement was later tested on 40 of each species. Study 2 took place in Egypt, using nine observers, their trainer, 30 horses and 30 donkeys, adjusting some scoring systems and providing observers with more detailed guidelines than in Study 1. Percentage agreements, Fleiss kappa (with a weighted version for ordinal scores) and prevalence indices were calculated for each variable. Reliability was similar across both studies, but was significantly poorer for donkeys than for horses. Age, sex, certain wounds and (for horses alone) body condition consistently attained clinically useful reliability. Hoof-horn quality, point-of-hock lesions, mucous membrane abnormalities, limb-tether lesions, and skin tenting showed poor reliability. Reporting the prevalence index alongside the percentage agreement showed that, for many variables, the populations were too homogeneous for conclusive reliability ratings. Suggestions are made for improving scoring systems showing poor reliability, but future testing will require deliberate selection of a more diverse equine population. This could prove challenging given that, in both populations of horses and donkeys studied here, many pathologies apparently showed 90-100% prevalence.
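To illustrate the prevalence problem flagged above: for two raters and a binary score, the prevalence index is |a - d| / n, where a is the number of animals both raters score positive, d the number both score negative, and n the total. The Python sketch below assumes this two-rater binary form and uses fabricated ratings; it shows how high raw agreement can coexist with an extreme prevalence index, which depresses the attainable kappa.

```python
# Minimal sketch (not the study's code) of percentage agreement and the
# prevalence index for two raters with a binary score. Ratings fabricated.

def percent_agreement(r1: list[int], r2: list[int]) -> float:
    """Proportion of animals on which the two raters agree."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def prevalence_index(r1: list[int], r2: list[int]) -> float:
    """|both-positive - both-negative| / n; near 1 in homogeneous samples."""
    both_pos = sum(a == b == 1 for a, b in zip(r1, r2))
    both_neg = sum(a == b == 0 for a, b in zip(r1, r2))
    return abs(both_pos - both_neg) / len(r1)

# Fabricated ratings for 20 animals: a lesion present in nearly all of them.
rater1 = [1] * 19 + [0]
rater2 = [1] * 18 + [0, 1]
print(percent_agreement(rater1, rater2))  # 0.90 -- high raw agreement
print(prevalence_index(rater1, rater2))   # 0.90 -- extreme prevalence
```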
A number of studies have investigated anticipatory behaviour in animals as a measure of sensitivity to reward or as an expression of emotional state. A common feature of many studies is that they base inferences on seemingly arbitrary measures, for example, the frequency of behavioural transitions (ie the number of times an animal switches between different behaviours). This paper critically reviews the literature and discusses various hypotheses for why specific behavioural responses occur in the anticipatory period between the signal and reward in conditioned animals. We argue that the specific behaviours shown may be the result of superstitious learning and thus highly variable, leaving behavioural transitions as the only response that can be scored consistently, and that sometimes these responses may relate more to frustration than to a positive emotional state. Finally, we propose new research approaches to avoid potential confounds and improve future studies on this topic.
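For concreteness, the transition measure mentioned above is just a count of switches between successive scored behaviours. A minimal Python sketch with a fabricated behaviour sequence:

```python
# Sketch of the behavioural-transitions measure: count how often the
# scored behaviour changes between successive observations. Fabricated data.

def count_transitions(sequence: list[str]) -> int:
    """Number of times consecutive scored behaviours differ."""
    return sum(1 for a, b in zip(sequence, sequence[1:]) if a != b)

print(count_transitions(["walk", "walk", "sniff", "rear", "rear", "walk"]))  # -> 3
```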
Researchers frequently argue that within-subjects designs should be avoided because they result in research hypotheses that are transparent to the subjects in the study. This conjecture was empirically tested by replicating several classic between-subjects experiments as within-subjects designs. In two additional experiments, psychology students were given the within-subjects versions of these studies and asked to guess what the researcher was hoping to find (i.e. the research hypothesis), and members of the Society for Judgment and Decision Making (SJDM) were asked to predict how well students would perform this task. On the whole, students were unable to identify the research hypothesis when provided with the within-subjects version of the experiments. Furthermore, SJDM members were largely inaccurate in their predictions of the transparency of a within-subjects design.
As the adage goes, “money makes the world go round” – but which direction does it spin? This analysis considers how basic decision research can help us work out how to answer this question. It suggests that the difficulty of deriving clear predictions based on existing decision research is at least partly rooted in two restrictive conventions. The first is the focus on deviations from rational choice, and the effort to capture observed deviations by assuming subjective value functions. While it is difficult to reject the hypothesis that choice behavior reflects the weighting of subjective values, it is not clear that it advances the derivation of useful predictions. A second restrictive convention is the focus on objective hypothesis testing, which favors analyses that evaluate small refinements of the popular models. The potential benefits of relaxing these conventions are considered, with reference to recent choice prediction competitions that facilitate the exploration of distinct assumptions and model development techniques. The winners in these competitions assume very different decision processes than those assumed by the popular “subjective functions” models. The relationship of the results to the big data revolution is discussed.
According to Karl Popper, we can tell good theories from poor ones by assessing their empirical content (empirischer Gehalt), which basically reflects how much information they convey concerning the world. “The empirical content of a statement increases with its degree of falsifiability: the more a statement forbids, the more it says about the world of experience.” Two criteria for evaluating the empirical content of a theory are its level of universality (Allgemeinheit) and its degree of precision (Bestimmtheit). The former specifies how many situations it can be applied to. The latter refers to the specificity of prediction, that is, how many subclasses of realizations it allows. We conduct an analysis of the empirical content of theories in Judgment and Decision Making (JDM) and identify the challenges in theory formulation for different classes of models. Elaborating on classic Popperian ideas, we suggest some guidelines for the publication of theoretical work.
The recognition heuristic (RH), which predicts non-compensatory reliance on recognition in comparative judgments, has attracted much research and, at times, some disagreement. Most studies have dealt with whether, or under which conditions, the RH is truly used in paired comparisons. However, even though the RH is a precise descriptive model, less attention has been paid to the precision of the methods applied to measure RH-use. In the current work, I provide an overview of different measures of RH-use tailored to the paradigm of natural recognition, which has emerged as a preferred way of studying the RH. The measures are compared with respect to different criteria, with particular emphasis on how well they uncover true use of the RH. To this end, both simulations and a re-analysis of empirical data are presented. The results indicate that the adherence rate, which has been pervasively applied to measure RH-use, is a severely biased measure. As an alternative, a recently developed formal measurement model emerges as the recommended candidate for the assessment of RH-use.
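For reference, the adherence rate discussed above is the proportion of paired comparisons, among those where exactly one object is recognized, in which the recognized object is chosen. The Python sketch below, with fabricated trial data, computes it; the bias arises because choices driven by knowledge beyond mere recognition can still coincide with the recognized option, inflating the rate.

```python
# Sketch of the adherence rate: among pairs where exactly one object is
# recognized, the proportion of choices favoring the recognized object.
# Trial data fabricated; a high rate is consistent with, but not proof of,
# true RH-use.

def adherence_rate(trials: list[tuple[bool, bool]]) -> float:
    """trials: (exactly_one_recognized, chose_recognized) per comparison."""
    relevant = [chose for one_recognized, chose in trials if one_recognized]
    return sum(relevant) / len(relevant)

trials = [(True, True)] * 17 + [(True, False)] * 3 + [(False, True)] * 5
print(adherence_rate(trials))  # 0.85
```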
I begin the book by providing an overview of recent political economy literature on ethnicity, which largely assumes that ethnicity is fixed and unchanging despite decades of evidence to the contrary. I then introduce my argument as an attempt to explain ethnic change. I first argue that people hold multiple ethnic identities simultaneously, and that individuals emphasize the one that brings them the most benefits. I then build upon earlier theories from Marx and Gellner to claim that industrialization is the most powerful factor leading people to re-identify with larger ethnic groups, and that this process of assimilation is induced by the decline in the relative value of land. Inasmuch as the process of industrialization is inherently uneven, however, I suggest that assimilation should proceed unevenly as well. Finally, I claim that the major role played by states in my theory lies in their ability to promote or inhibit industrialization, not in assimilationist policies. I then go on to establish the scope conditions of my argument, namely my focus on ethnic change in non-violent contexts and my restriction to non-immigrant communities.
This chapter concerns two elemental aspects of parenting research: foundational theories and the establishment of research into parent-child relationships. The six essential theories that have formed the groundwork for understanding parenting are reviewed: evolution, attachment, socialization, behavioral genetics, social cognition, and systems. While the earliest theories were developing, research into parenting began to be published; empirical studies of child rearing appeared in journals with some regularity beginning in the 1930s. Around the same time, child study centers and interest in child guidance and parent education emerged. Researchers studying parent-child relationships have adopted different theoretical approaches, employed multiple and often dissimilar methods, and addressed diverse questions. Many of the studies can be classified into one of eight approaches: trait, child effects and transactions, social learning, social address, social cognition, behavioral genetics, ecological momentary, and large-sample longitudinal datasets. These approaches are described and contrasted. The chapter ends with a discussion of some current research trends.