A core normative assumption of welfare economics is that people ought to maximise utility and, as a corollary, that they should be consistent in their choices. Behavioural economists have observed that people demonstrate systematic choice inconsistencies, but rather than relaxing the normative assumption of utility maximisation they tend to attribute these behaviours to individual error. I argue in this article that this is, in itself, an error – an ‘error error’. In reality, a planner cannot hope to understand the multifarious desires that drive a person’s choices. Consequently, she is not able to discern which choice in an inconsistent set is erroneous. Moreover, those who are inconsistent may view neither of their choices as erroneous if the context interacts meaningfully with their valuation of outcomes. Others are similarly opposed to planners paternalistically intervening in the market mechanism to correct for behavioural inconsistencies, and advocate the free market as the best means by which people can settle on mutually agreeable exchanges. However, I maintain that policymakers also have a legitimate role in enhancing people’s agentic capabilities. The most important way to achieve this is to invest in aspects of human capital and to create institutions that are broadly considered foundational to a person’s agency. There is also a role for so-called boosts to help correct basic characterisation errors. I further contend that government regulation of self-interested acts of behaviourally informed manipulation by one party over another is legitimate, to protect the manipulated party from undesired inconsistency in their choices.
People live complicated lives and, unlike laboratory scientists who can control all aspects of their experiments, epidemiologists have to work with that complexity. As a result, no epidemiological study can ever be perfect. Even an apparently straightforward survey of, say, alcohol consumption in a community can be fraught with problems. Who should be included in the survey? How do you measure alcohol consumption reliably? All we can do when we conduct a study is aim to minimise error as far as possible, and then assess the practical effects of any unavoidable error. A critical aspect of epidemiology is, therefore, the ability to recognise potential sources of error and, more importantly, to assess the likely effects of any error, both in your own work and in the work of others. If we publish or use flawed or biased research, we spread misinformation that could hinder decision-making, harm patients and adversely affect health policy. Future research may also be misdirected, delaying discoveries that can enhance public health.
Failure is a fundamental part of the human condition. While archaeologists readily identify large-scale failures, such as societal collapse and site abandonment, they less frequently consider the smaller failures of everyday life: the burning of a meal or planning errors during construction. Here, the authors argue that evidence for these smaller failures is abundant in the archaeological record but often ignored or omitted in interpretations. Closer examination of such evidence permits a more nuanced understanding both of the mundane and the larger-scale failures of the human past. Excluding failure from the interpretative toolbox obscures the reconstruction of past lives and is tantamount to denying the humanity of past peoples.
The book opens with an odd fact of our time: we grow up having our writing corrected at every turn, and yet the actual writing most people do goes far beyond what is considered “correct English.” If we imagine a basic continuum of writing in English, it ranges from informal to formal, personal to impersonal, and interpersonal to informational writing. That range allows us to do all kinds of different things with writing. But only a small part of it is considered “correct,” because of what the book calls Language Regulation Mode. The introduction explains Language Regulation Mode, how it fixates on errors, and how it makes it hard to think about writing any other way. We learn to see writing only through the lens of writing myths, which tell us that only some writing counts and only some writers are smart and will succeed. Then, the introduction offers an alternative: Language Exploration Mode, which focuses on patterns instead of errors, and on learning more about the diverse language of our world today: a continuum of informal digital writing, workplace writing, formal school writing and more, each correct for its purpose.
People read and write a range of English every day, yet what counts as 'correct' English has been narrowly defined and tested for 150 years. This book is written for educators, students, employers and scholars who are seeking a more just and knowledgeable perspective on English writing. It brings together history, headlines, and research with accessible visuals and examples, to provide an engaging overview of the complex nature of written English, and to offer a new approach for our diverse and digital writing world. Each chapter addresses a particular 'myth' of 'correct' writing, such as 'students today can't write' or 'the internet is ruining academic writing', and presents the myth's context and consequences. By the end of the book, readers will know how to go from hunting errors to seeking (and finding) patterns in English writing today. This title is also available as open access on Cambridge Core.
In our laboratory, we currently perform verification of specimen identity (VSI) by having two laboratory personnel independently check and verify the identity of each patient, gamete specimen and embryo at each step that is susceptible to a mismatch, so that we can ensure accurate and complete traceability throughout the process. Since this is the system currently in place in our laboratory, this method of verification and witnessing is the main focus of discussion in this chapter. In the future, we plan to implement an electronic witnessing system, as recommended by Chapman (2019).
Chapter 1 argues that Heidegger, like many reconstructive interpreters, takes up the main question posed by the first Critique and attempts to identify Kant’s most plausible line of response to it, consulting the claims in Kant’s text alongside Heidegger’s own beliefs. Because Heidegger seeks to attribute true claims to Kant, his method of interpretation resembles that of Davidson and Gadamer. However, Heidegger improves on their method, because he recognizes the methodological role of disagreement in coming to agree with some author, thereby making room for differences in view between interpreter and text. He argues that we should expect great thinkers to struggle with their subject matter, offering competing strands of argument as they attempt to work out their view. The interpreter, therefore, must isolate the most promising strand of argument, differentiating it from less compelling arguments. Accordingly, Heidegger offers a two-strand interpretation of Kant that differentiates an insightful line of argumentation prioritizing imagination from a less promising line prioritizing understanding. Further, Heidegger offers a theory of error explaining why Kant struggles with his subject matter: Kant retreats to his less compelling argument due to the anxiety he experiences in uncovering the fundamental structure of the human being.
Radiotherapy is an ever-changing field with constant technological advances. For this reason, risk management strategies need to be updated regularly in order to remain optimal.
Methodology:
A retrospective audit of all reported incidents and near misses in the audited department between 1 November 2020 and 30 April 2021 was performed. The root cause, safety barrier (SB) and causative factor (CF) of each radiotherapy error (RTE) were defined using the Public Health England (PHE) coding system. The data were then analysed to determine whether any errors occurred frequently and whether any relationships existed between multiple errors.
Results:
A total of 670 patients were treated during the study period, and 35 reports were generated. Of these, 77·1% (n = 27) were incidents and 22·9% (n = 8) were near misses; 2·8% (n = 1) were reportable incidents. The ratio of RTEs to prescriptions was 0·052:1 (5·2%). 37% of RTEs were associated with image production. Slips and lapses were involved in 54·2%. Adherence to procedures/protocols was a factor in 48·5% (n = 17). Communication was a factor in 11·4% (n = 4).
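As a brief consistency sketch (my own illustration, not part of the published audit), the quoted proportions can be re-derived from the reported counts:

```python
# Re-derivation of the headline proportions from the quoted counts
# (35 reports in total; 670 patients treated). Last-digit differences from the
# quoted figures reflect rounding/truncation in the published results.
total_reports = 35
patients_treated = 670

counts = {
    "incidents": 27,                          # quoted as 77.1%
    "near misses": 8,                         # quoted as 22.9%
    "reportable incidents": 1,                # quoted as 2.8%
    "adherence to procedures/protocols": 17,  # quoted as 48.5%
    "communication": 4,                       # quoted as 11.4%
}

for label, n in counts.items():
    print(f"{label}: {n}/{total_reports} = {100 * n / total_reports:.1f}%")

# Ratio of radiotherapy errors (RTEs) to prescriptions, quoted as 0.052:1 (5.2%)
print(f"RTE:prescription ratio = {total_reports / patients_treated:.3f}")
```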
Discussion:
The proportion of Level 1 incidents was higher in this department (2·8%) than in the PHE report (0·9%). Almost one-third of errors (31·4%, n = 11) stemmed from a single technical fault in image production. SB breaches were prevalent at the pre-treatment planning stage of the pathway. A relationship between slips/lapses and non-conformance to protocols was identified.
Conclusion:
The rate of reported radiotherapy incidents in this department is higher than that reported across the UK; this could be improved by implementing the quality improvement plan outlined above.
Although the Vienna Convention on the Law of Treaties devotes nine articles to invalidity of treaties, cases rarely arise in practice. Circumstances covered by the Convention include violation of internal law, error, fraud, corruption, coercion and violation of a peremptory norm of international law (jus cogens). Article 46 of the Convention covers the first of these, providing that a state may not invoke the fact that its consent to be bound has been expressed in violation of its internal law unless that violation was manifest and concerned a rule of fundamental importance. The chapter examines the meaning of the key terms of this provision and possible cases in which this might arise. In the context of coercion, the chapter looks at treaties which might be concluded by the threat or use of force, peace treaties and unequal treaties. The scope of peremptory norms (jus cogens) is also discussed, together with the consequences of invalidity.
The two major sources of error are chance (random error) and confounding bias (systematic error). After correcting for these two kinds of error, one can then assess or assert causation. These are the “Three Cs.”
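A minimal sketch (my own construction, not from the text) of why these two kinds of error behave differently: in a simple simulated study, random error from chance washes out as the sample grows, while systematic error from confounding does not.

```python
import random

# Toy illustration of the two error types: chance (random error) shrinks as the
# sample grows, whereas a confounder keeps biasing the estimate at any sample size.
def observed_risk_difference(n, seed=1):
    rng = random.Random(seed)
    exposed, unexposed = [], []
    for _ in range(n):
        confounder = rng.random() < 0.5                          # e.g., older age group
        exposure = rng.random() < (0.7 if confounder else 0.3)   # confounder drives exposure
        outcome = rng.random() < (0.6 if confounder else 0.2)    # ...and drives the outcome
        (exposed if exposure else unexposed).append(outcome)
    return sum(exposed) / len(exposed) - sum(unexposed) / len(unexposed)

for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9,}: risk difference = {observed_risk_difference(n):+.3f}")

# The estimate stops fluctuating as n grows (chance fades), yet it settles on a
# non-zero value (~0.16) even though exposure has no causal effect here: the
# residual association is confounding, a systematic error that sample size
# cannot remove.
```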
Taking a falsificationist perspective, the present paper identifies two major shortcomings of existing approaches to comparative model evaluations in general and strategy classifications in particular. These are (1) failure to consider systematic error and (2) neglect of global model fit. Using adherence measures to evaluate competing models implicitly makes the unrealistic assumption that the error associated with the model predictions is entirely random. By means of simple schematic examples, we show that failure to discriminate between systematic and random error seriously undermines this approach to model evaluation. Second, approaches that treat random versus systematic error appropriately usually rely on relative model fit to infer which model or strategy most likely generated the data. However, the model yielding the comparatively best fit may still be invalid. We demonstrate that taking for granted the vital requirement that a model by itself should adequately describe the data can easily lead to flawed conclusions. Thus, prior to considering the relative discrepancy of competing models, it is necessary to assess their absolute fit and thus, again, attempt falsification. Finally, the scientific value of model fit is discussed from a broader perspective.
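To make the second point concrete, here is a minimal sketch (my own toy example, with hypothetical cue-following "strategies A and B" that are not taken from the paper) in which the relatively better-fitting strategy would still be an invalid description of the data:

```python
import random

# Toy sketch: two candidate strategies are compared on choices that neither
# strategy actually generated.
rng = random.Random(42)
n_trials = 200

# Hypothetical items with two cues; strategy A predicts the choice from cue 1,
# strategy B predicts it from cue 2. The simulated decision maker ignores both
# cues and simply picks option 1 on 60% of trials.
trials = [(rng.choice([0, 1]), rng.choice([0, 1])) for _ in range(n_trials)]
choices = [1 if rng.random() < 0.6 else 0 for _ in trials]

adherence_A = sum(c == cue1 for (cue1, _), c in zip(trials, choices)) / n_trials
adherence_B = sum(c == cue2 for (_, cue2), c in zip(trials, choices)) / n_trials
print(f"adherence to A: {adherence_A:.2f}, adherence to B: {adherence_B:.2f}")

# A purely relative comparison still declares whichever adherence rate is higher
# the "winning" strategy, yet both rates sit near the 0.5 chance level. Checking
# absolute fit first (is adherence credibly above guessing?) guards against
# classifying the data under a model that describes it poorly.
```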
Expertise is a reliable cue for accuracy – experts are often correct in their judgments and opinions. However, the opposite is not necessarily the case – ignorant judges are not guaranteed to err. Specifically, in a question with a dichotomous response option, an ignorant responder has a 50% chance of being correct. In five studies, we show that people fail to understand this, and that they overgeneralize a sound heuristic (expertise signals accuracy) to cases where it does not apply (lack of expertise does not imply error). These studies show that people (1) tend to think that the responses of an ignorant person to dichotomous-response questions are more likely to be incorrect than correct, and (2) tend to respond the opposite of what the ignorant person responded. This research also shows that this bias is at least partially intuitive in nature, as it manifests more clearly in quick gut responses than in slow, careful responses. Still, it is not completely corrected upon careful deliberation. Implications are discussed for rationality and epistemic vigilance.
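The 50% baseline these studies rely on can be seen in a minimal sketch (my own illustration, assuming purely random guessing on dichotomous items):

```python
import random

# On a two-option question, an entirely ignorant responder who guesses is correct
# about half the time, so responding the opposite of their answer confers no advantage.
rng = random.Random(0)
n_questions = 100_000
matches = sum(
    rng.choice(["A", "B"]) == rng.choice(["A", "B"])  # guess vs. true answer
    for _ in range(n_questions)
)
print(f"ignorant guesser correct on {matches / n_questions:.1%} of questions")
# Betting against the guesser is therefore also right only ~50% of the time.
```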
This chapter explores human factors, also known as ergonomics, an established scientific discipline that has become integral to healthcare in recent years. The catalyst for this in the UK was the Clinical Human Factors Group, led by Martin Bromiley. Martin’s wife Elaine died following errors made during a routine operation, when the theatre team failed to respond appropriately to an unanticipated anaesthetic emergency, in part because of a variety of human factors. There is still confusion around the term ‘human factors’. This is partly because human factors cannot be explored in isolation but need to be understood in the context of human activity, error, and the culture around error.
This chapter identifies and explains the fundamental role and responsibilities of the perioperative practitioner essential to the surgical scrub role; these include surgical counts, sharps safety, specimen management, and waste disposal. The scrub practitioner is a recognised member of the perioperative team, performing a crucial role in preparing the operating theatre environment for surgical procedures. They must ensure it is clean, ready, and safe to receive the surgical patient. The scrub practitioner should possess the requisite technical and non-technical skills, and the theoretical underpinning knowledge of anatomy and physiology, to perform their role optimally.
It is a struggle to hold society together. Historically, that task has fallen to both law and religion. Sovereignty, the source of law’s binding power, like the miracle in Carl Schmitt’s political theology, lies outside law itself. That origin coincides with the kenotic excess of the sacred.
This chapter explores that strange excess through a visual genealogy of shifting sovereign imaginaries. They range from early modern legal emblems picturing the transcendental body of the King, to modern and late modern paintings and films depicting a metaphysical shift to the sacred body of the People. The question this visual history confronts is not whether the sacred binds the nomos of law, but how? The corporeal image goes beyond conceptual abstraction. It is a site from which desire (what Freud calls the cathexis of libido) binds us to values, rituals, and institutions. Libidinal investment ties us to a shared symbolic identity; disinvestment, by contrast, invites psychic and political-legal collapse.
As the contemporary crisis in liberal democracy deepens, we ask: what sovereign imaginary will break the pall of collective anxiety and unrest, and will it come in the service of human flourishing?
The study of one type of error—the conviction of innocent people—has gained enormous importance, attracting increasing academic research and indeed giving birth to an activism geared towards obtaining the exoneration of innocent victims of unjust court convictions. One of the issues that has produced the greatest number of studies has been identifying factors that increase the probability that convictions of innocent people will occur. Among its results is the consensus that a group of "evidentiary practices" exists that may explain the errors. The present work sets out to describe, from the evidence available, the most problematic evidentiary practices in relation to the use of expert evidence. According to the empirical data available, this is one of the most relevant factors in the system’s production of wrong decisions. Based on a more refined diagnosis of which practices are most problematic in the use of this evidence, I hope to make it possible to gauge the system’s weaknesses. This will allow me to develop proposals and strategies for risk prevention and minimization. Diminishing and anticipating errors seems not only a realistic goal and a reasonable aspiration, but also an imperative for the system.
Here I explicate two methodological burdens for the kind of eliminativist views about free will and moral responsibility that might threaten a prescriptive preservationist view of reactive blame. The first burden is that eliminativists must fix the skeptical spotlight: they must offer at least some comparative support for their claim that the blame-threatening error they identify for free will and moral responsibility cannot be resolved by abandoning some other assumption, belief, or feature of our concept. Second, eliminativists must explicitly motivate elimination over some variety of revisionist preservation. I call this second burden the motivational challenge, and examine two possible eliminativist strategies for meeting it. The first involves appeals to gains and losses intended to directly motivate elimination, and the second involves explicit appeal to some claim about the essence of free will and moral responsibility. What both of these strategies reveal is that their success ultimately depends on thorny issues about reference and essence.
I argue that Aristotle thinks of perception as veridical, and that phantasia – as a secondary motion consequent on perception – is responsible for all sensory error. I neutralize passages where Aristotle seems to countenance misperception by defending what I call an “object-oriented reading,” which holds that though Aristotle says we can make errors about the objects of perception, he is not committed to thinking that we can perceive them erroneously, as there are faculties besides perception (including phantasia) that engage with the objects of perception. According to the object-oriented reading, apparent misperception results when a false phantasia is mistaken for a perception, something that is possible due to the similarities between perception and phantasia. Nonetheless, since the faculties are distinct, perception remains veridical. I also address how this conception of phantasia can explain Aristotle’s appeals to phantasia in contexts like memory, thought, and animal motion.
This chapter proposes a contextual approach to the history of early modern logic and method from the perspective of the late seventeenth-century debate over the relative worth of ancient and modern learning. Reflections on a modern “art of thinking” by authors on both sides of the debate contributed to the construction of an image of philosophical modernity at the high point of the Scientific Revolution, which would subsequently help fashion the self-understanding of the Enlightenment. This image was inherently polemical and pluralistic, as shown by analyzing the variety of canons of authors and by identifying the diversity of positions on the ingredients of the modern “art of thinking” in their writings. And yet the points of disagreement also reveal an underlying consensus about the core features required of a logic: that an art of thinking would have to express the natural operations of the human mind, would ground the pursuit of the sciences, and would be capable of leading the mind to the discovery of genuinely novel and valuable truths about the natural world. This chapter also provides a reevaluation of the traditional canon of important thinkers who are thought to have spearheaded modernity.