In Chapter 1, I discuss scientism, or the belief that scientific knowledge is the most (or only) legitimate form of knowledge. My argument is that scientism mischaracterizes science as an epistemically privileged and unified method. Actual scientific practice, however, has demonstrated tremendous variability in theory and method across time, place, and disciplinary context, and can best be understood as a historically contingent human activity. Drawing on historical and philosophical critiques, I characterize scientism as a kind of science fundamentalism that insulates scientists from social and moral critique and so contributes to the institutionalization of exceptionalism and privilege. I discuss elements of scientism in professional psychology, using historical and contemporary examples to show that it is a common (perhaps even a majority) position.
Good Science is an account of psychological research emphasizing the moral foundations of inquiry. This volume brings together existing disciplinary critiques of scientism, objectivism, and instrumentalism, and then discusses how these contribute to institutionalized privilege and to less morally responsive research practices. The author draws on historical, critical, feminist, and science studies traditions to provide an alternative account of psychological science and to highlight the irreducibly moral foundations of everyday scientific practice. This work outlines a theoretical framework for thinking about and practicing psychology in ways that center moral responsibility, collective commitment, and justice. The book then applies this framework, describing psychological research practices in terms of their moral dilemmas. Also included are materials meant to aid in methods instruction and mentoring.
Thomas Kuhn's The Structure of Scientific Revolutions provides a way to think of the development of the natural sciences in the Muslim world as differing from the path taken in Europe while also reflecting broad engagement with the sciences. This excursus argues that a weak version of Kuhn's notion of a paradigm helps us better understand the divergent fate of the natural sciences in the Muslim world.
This Introduction provides a rationale for a collection of new papers on Thomas Kuhn. Scholarship on Kuhn has changed dramatically in the last 20 years for several reasons. First, scholars studying Kuhn no longer focus narrowly on The Structure of Scientific Revolutions; they have been giving careful consideration to Kuhn’s later work. Second, many scholars have been drawing on the vast unpublished resources of the Thomas S. Kuhn Archive at MIT. Third, around the 50th anniversary of the publication of Structure in 2012, numerous conferences were held, leading to the publication of several volumes reflecting on Kuhn’s impact in the philosophy and history of science. These three developments have contributed significantly to our collective understanding of Kuhn and his theories of scientific change and scientific knowledge. Though Kuhn’s position in the philosophy of science can be difficult to gauge, by objective measures his impact is undeniable. Kuhn’s influence outside the philosophy of science is also astounding, especially in the social sciences. It is thus beyond dispute that Kuhn has had a profound and wide-ranging impact on scholarship.
Objectivity is a key concept both in how we talk about science in everyday life and in the philosophy of science. This Element explores various ways in which recent philosophers of science have thought about the nature, value and achievability of objectivity. Section 1 explains the general trend in recent philosophy of science away from a notion of objectivity as a 'view from nowhere' to a focus on the relationship between objectivity and trust. Section 2 discusses the relationship between objectivity and recent arguments attacking the viability or desirability of 'value free' science. Section 3 outlines Longino's influential 'social' account of objectivity, suggesting some worries about drawing too strong a link between epistemic and ethical virtues. Section 4 turns to the value of objectivity, exploring concerns that notions of objectivity are politically problematic, and cautiously advocating in response a view of objectivity in terms of invariance.
Much social science still fails to employ a now standard philosophical and scientific conceptualization of levels of analysis. In so doing, it largely avoids the multilevel reasoning appropriate for social science, instead maintaining dogmatic attachments to one or another level of analysis, especially the individual level.
This chapter examines Cassirer's view of contemporary science. It revisits Cassirer's lesser-known work Determinism and Indeterminism in Modern Physics and argues that it harbors a significantly new stage of his philosophy of physical science. On the one hand, this work presents the quantum formalism as a limiting pole of the Bedeutungsfunktion, the highest mode of symbolic formation according to Cassirer’s “phenomenology of cognition.” Inspired by Paul Dirac, Cassirer understands quantum mechanics as a symbolic calculus for deriving probabilistic predictions of measurement outcomes without regard to underlying wave or particle “images” – that is, as an exemplar of abstract symbolic thought. On the other hand, Cassirer recognizes the philosophical significance of the use of group theory in quantum mechanics as advancing a purely structural concept of object in physics. Hence, Ryckman reveals that Cassirer drew epistemological consequences from the symbolic character of contemporary physical theory that retain relevance for philosophy of science today.
Bernard Williams argues that philosophy is in some deep way akin to history. This article is a novel exploration and defense of the Williams thesis (as I call it)—though in a way anathema to Williams himself. The key idea is to apply a central moral from what is sometimes called the analytic philosophy of history of the 1960s to the philosophy of philosophy of today, namely, the separation of explanation and laws. I suggest that an account of causal explanation offered by David Lewis may be modified to bring out the way in which this moral applies to philosophy, and so to defend the Williams thesis. I discuss in detail the consequences of the thesis for the issue of philosophical progress and note also several further implications: for the larger context of contemporary metaphilosophy, for the relation of philosophy to other subjects, and for explaining, or explaining away, the belief that success in philosophy requires a field-specific ability or brilliance.
Bayesian confirmation theory is our best formal framework for describing inductive reasoning. The problem of old evidence is a particularly difficult one for confirmation theory, because it suggests that this framework fails to account for central and important cases of inductive reasoning and scientific inference. I show that we can appeal to the fragmentation of doxastic states to solve this problem for confirmation theory. This fragmentation solution is independently well-motivated because of the success of fragmentation in solving other problems. I also argue that the fragmentation solution is preferable to other solutions to the problem of old evidence. These other solutions are already committed to something like fragmentation, but suffer from difficulties due to their additional commitments. If these arguments are successful, Bayesian confirmation theory is saved from the problem of old evidence, and the argument for fragmentation is bolstered by its ability to solve yet another problem.
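For readers unfamiliar with the difficulty, the standard Bayesian formulation of the old evidence problem (due to Glymour) can be stated in a few lines; this is the textbook version, not a rendering of the paper's own fragmentation solution:

    % Confirmation: evidence E confirms hypothesis H iff P(H|E) > P(H).
    % If E is old evidence, it is already known, so P(E) = 1 and P(E|H) = 1.
    % Bayes' theorem then gives
    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} = \frac{1 \cdot P(H)}{1} = P(H),
    % so known evidence can never confirm any hypothesis. This clashes with
    % scientific practice: Mercury's perihelion advance, long known, is
    % standardly taken to confirm general relativity.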
Unity of science was once a very popular idea among both philosophers and scientists. But it has fallen out of fashion, largely because of its association with reductionism and the challenge from multiple realisation. Pluralism and the disunity of science are the new norm, and higher-level natural kinds and special science laws are considered to have an important role in scientific practice. What kind of reductionism does multiple realisability challenge? What does it take to reduce one phenomenon to another? How do we determine which kinds are natural? What is the ontological basis of unity? In this Element, Tuomas Tahko examines these questions from a contemporary perspective, after a historical overview. The upshot is that there is still value in the idea of a unity of science. We can combine a modest sense of unity with pluralism and give an ontological analysis of unity in terms of natural kind monism.
In an era of corporate surveillance, artificial intelligence, deep fakes, genetic modification, automation, and more, law often seems to take a back seat to rampant technological change. To listen to Silicon Valley barons, there's nothing any of us can do about it. In this riveting work, Joshua A. T. Fairfield calls their bluff. He provides a fresh look at law, at what it actually is, how it works, and how we can create the kind of laws that help humans thrive in the face of technological change. He shows that law can keep up with technology because law is a kind of technology - a social technology built by humans out of cooperative fictions like firms, nations, and money. However, to secure the benefits of changing technology for all of us, we need a new kind of law, one that reflects our evolving understanding of how humans use language to cooperate.
Most theories and hypotheses in psychology are verbal in nature, yet their evaluation overwhelmingly relies on inferential statistical procedures. The validity of the move from qualitative to quantitative analysis depends on the verbal and statistical expressions of a hypothesis being closely aligned—that is, the two must refer to roughly the same set of hypothetical observations. Here I argue that many applications of statistical inference in psychology fail to meet this basic condition. Focusing on the most widely used class of model in psychology—the linear mixed model—I explore the consequences of failing to statistically operationalize verbal hypotheses in a way that respects researchers' actual generalization intentions. I demonstrate that whereas the "random effect" formalism is used pervasively in psychology to model inter-subject variability, few researchers accord the same treatment to other variables they clearly intend to generalize over (e.g., stimuli, tasks, or research sites). The under-specification of random effects imposes far stronger constraints on the generalizability of results than most researchers appreciate. Ignoring these constraints can dramatically inflate false positive rates, and often leads researchers to draw sweeping verbal generalizations that lack a meaningful connection to the statistical quantities they are putatively based on. I argue that failure to take the alignment between verbal and statistical expressions seriously lies at the heart of many of psychology's ongoing problems (e.g., the replication crisis), and conclude with a discussion of several potential avenues for improvement.
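To make the statistical claim concrete, here is a minimal Monte Carlo sketch, not taken from the paper, of the classic failure mode it describes: stimuli are sampled but treated as fixed, and by-subject aggregation then inflates the false positive rate well above its nominal level. All parameter values and variable names are illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_items = 20, 8
    item_sd, noise_sd = 1.0, 1.0
    n_sims, alpha = 2000, 0.05

    false_positives = 0
    for _ in range(n_sims):
        # The true condition effect is exactly zero; only items and noise vary.
        item_a = rng.normal(0.0, item_sd, n_items)  # stimuli sampled for condition A
        item_b = rng.normal(0.0, item_sd, n_items)  # stimuli sampled for condition B
        # Every subject responds to every item; trial noise is independent.
        resp_a = item_a + rng.normal(0.0, noise_sd, (n_subjects, n_items))
        resp_b = item_b + rng.normal(0.0, noise_sd, (n_subjects, n_items))
        # "By-subject" analysis: average over items, ignoring item variability,
        # then run a paired t-test across subjects.
        _, p = stats.ttest_rel(resp_a.mean(axis=1), resp_b.mean(axis=1))
        false_positives += p < alpha

    print(f"nominal alpha = {alpha}, observed rate = {false_positives / n_sims:.3f}")

Because the item offsets shift every subject's condition means in the same direction, the t-test mistakes item noise for a real effect, and the observed rate lands far above 0.05. A model with crossed random intercepts for both subjects and items, the treatment the paper argues such variables deserve, restores the nominal error rate; the sketch shows only how badly things go when the item term is omitted.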
This Element has two main aims. The first one (sections 1-7) is an historically informed review of the philosophy of probability. It describes recent historiography, lays out the distinction between subjective and objective notions, and concludes by applying the historical lessons to the main interpretations of probability. The second aim (sections 8-13) focuses entirely on objective probability, and advances a number of novel theses regarding its role in scientific practice. A distinction is drawn between traditional attempts to interpret chance, and a novel methodological study of its application. A radical form of pluralism is then introduced, advocating a tripartite distinction between propensities, probabilities and frequencies. Finally, a distinction is drawn between two different applications of chance in statistical modelling which, it is argued, vindicates the overall methodological approach. The ensuing conception of objective probability in practice is the 'complex nexus of chance'.
The global loss of biodiversity is one of the most important challenges facing humanity, and a multi-faceted strategy is needed to address the size and complexity of this problem. This paper draws on scholarship from the philosophy of science and environmental ethics to help address one aspect of this challenge: namely, the question of how to frame biodiversity loss in a compelling manner. The paper shows that the concept of biodiversity, like many scientific concepts, is value-laden in the sense that it tends to support some ethical or social values over others. Specifically, in comparison with other potential concepts, the biodiversity concept is tied more closely to the notion that nature has intrinsic value than to the idea that nature is valuable instrumentally or relationally. Thus, alternative concepts could prove helpful for communicating about biodiversity loss with those who emphasize different value systems. The paper briefly discusses five concepts that illustrate the potential for using different concepts in different contexts. Going forward, conservationists would do well to recognize the values embedded in their language choices and work with social scientists to develop a suite of concepts that can motivate the broadest swath of people to promote conservation.
In this reply to Miles Evers, I clarify some of my positions and argue that social facts should not be reified. Just as with norms, they should be defined as arrangements of practices rather than as social objects.
Scientists and philosophers of science are most impressed by theories that make successful, novel predictions: that predict surprising facts in advance of their experimental or observational confirmation. There is a theory of cosmology that has repeatedly been successful in this privileged way, but it is not the standard, or ΛCDM, model. It is Mordehai Milgrom’s MOND theory (MOdified Newtonian Dynamics). Unlike the standard model, MOND does not postulate the existence of dark matter. Observations that are explained in the standard model by invoking dark matter are explained in MOND by postulating a change in the laws of gravity and motion.
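For concreteness, the modification in question is Milgrom's law, quoted here in its standard textbook form rather than from this article:

    % Newtonian gravity g_N is recovered at accelerations well above the
    % scale a_0 (approximately 1.2 x 10^-10 m s^-2) and modified below it:
    \mu\!\left(\frac{a}{a_0}\right) a = g_N,
    \qquad \mu(x) \to 1 \ \text{for} \ x \gg 1,
    \qquad \mu(x) \to x \ \text{for} \ x \ll 1.
    % Deep-MOND limit: a = \sqrt{g_N a_0}. For a circular orbit about mass M,
    % v^2 / r = \sqrt{G M a_0} / r, hence v^4 = G M a_0: a flat rotation curve
    % and the baryonic Tully-Fisher relation, predicted ahead of observation.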
Scientific epistemology begins from the idea that the truth of a universal statement, such as a scientific law, can never be conclusively proved. No matter how successful a hypothesis has been in the past, it can always turn out to make incorrect predictions when applied in a new situation. Karl Popper argued that the most important experimental results are those that falsify a theory, and he proposed falsifiability as a criterion for distinguishing science from pseudoscience. Popper argued in addition that scientists should respond to falsifications in a particular way: not by ad hoc adjustments of their theories, but in a way that expands the theory’s explanatory content. Popper argued that the success of a modified theory should be judged in terms of its success at making new predictions. Popper’s view of epistemology, which is shared by many scientists and philosophers of science, is called “critical rationalism.” An epistemology that judges success purely in terms of a theory’s success at explaining known facts is called “verificationism.” Popper argued that verificationism is equivalent to a belief in induction, and that induction is a fallacy.
Dark matter is a fundamental component of the standard cosmological model, but in spite of four decades of increasingly sensitive searches, no-one has yet detected a single dark-matter particle in the laboratory. An alternative cosmological paradigm exists: MOND (Modified Newtonian Dynamics). Observations explained in the standard model by postulating dark matter are described in MOND by proposing a modification of Newton's laws of motion. Both MOND and the standard model have had successes and failures – but only MOND has repeatedly predicted observational facts in advance of their discovery. In this volume, David Merritt outlines why such predictions are considered by many philosophers of science to be the 'gold standard' when it comes to judging a theory's validity. In a world where the standard model receives most attention, the author applies criteria from the philosophy of science to assess, in a systematic way, the viability of this alternative cosmological paradigm.
The new mechanistic philosophy is divided into two largely disconnected projects. One deals with a metaphysical inquiry into how mechanisms relate to issues such as causation, capacities and levels of organization, while the other deals with epistemic issues related to the discovery of mechanisms and the intelligibility of mechanistic representations. Tudor Baetu explores and explains these projects, and shows how the gap between them can be bridged. His proposed account is compatible both with the assumptions and practices of experimental design in biological research, and with scientifically accepted interpretations of experimental results.
Electrical measuring tools now epitomise ‘black-boxed’ technologies. Since the second half of the nineteenth century, ammeters and voltmeters have been developed that the user could apparently simply connect to their electrical circuitry and read off a number giving a measure of current or voltage. In this chapter we illustrate what can be learnt by getting inside such black boxes. Museum collections, in particular, constitute tangible traces of how what is now black-boxed has developed. By analysing instruments carefully, in particular galvanometers, the chapter interrogates the craft of both instrument maker and user, some of the different types of user whose practices are embodied in the instruments, and the lessons we can learn from a close look at instruments and collecting practices.