The demise of the value-free ideal constitutes a threat to public trust in science. One proposal is that whenever making value judgments, scientists rely only on democratic values. Since the influence of democratic values on scientific claims and recommendations is legitimate, public trust in science is warranted. I challenge this proposal. Appealing to democratic values will not suffice to secure trust because of at least two obstacles: polarization and marginalization.
It is increasingly easy to acquire a large amount of data about a problem before formulating a hypothesis. The idea of exploratory data analysis (EDA) predates this situation, but many researchers find themselves appealing to EDA as an explanation of what they are doing with these new resources. Yet there has been relatively little explicit work on what EDA is or why it might be important. I canvass several positions in the literature, find them wanting, and suggest an alternative: exploratory data analysis, when done well, shows the expected value of experimentation for a particular hypothesis.
I defend a novel account of scientific progress centred around justification. Science progresses, on this account, where there is a change in justification. I consider three options for explicating this notion of change in justification. This account of scientific progress dispenses with a condition for scientific progress that requires accumulation of truth or truthlikeness, and it emphasises the social nature of scientific justification.
I propose the ‘Propositional Account’ of effective quantum field theories. According to the Propositional Account, each effective quantum field theory expresses propositions about various physical items: fields, interactions, and more. In addition, two effective quantum field theories are physically equivalent just in case they express the same propositions. As I explain, the Propositional Account is scientifically naturalistic, since it invokes terms and principles from the empirical science of linguistics. And the Propositional Account avoids problems faced by other accounts of the physical contents of effective theories.
“Actionability” is a key concept in precision oncology. Its precise definition, however, remains contested. This paper undertakes a philosophical analysis of “actionability” to aid in conceptual clarification. We map distinct concepts of actionability, arguing that each is best understood as a contextually objective category articulated to mitigate risk of “conceptual slippage.” We defend “interactive pluralism,” acknowledging the need for distinct concepts but also for conceptual interaction in practice. This paper thus offers insights for both practitioners and philosophers, clarifying approaches to actionability for scientists and clinicians and also serving as a case study to test competing views on scientific pluralism.
Traditionally, the debate about health and disease is characterized as an opposition between naturalism and normativism. However, recent contributions show that theories of health and disease need not be purely naturalistic or normative, but may be located somewhere in between. The first purpose of this article is to further advance this line of nuancing. The second purpose is to argue in favor of a specific position, which the added nuances reveal. I call this position ‘subjectively salient naturalism’. If one is interested in scientific concepts of health and disease, subjectively salient naturalism is a more plausible position than naturalism.
Some scientific models and some claims about model-target relations are fruitfully diagnosed as dogwhistles. Dogwhistles, broadly speaking, are speech acts which send different, conflicting, and often differentially inflammatory messages to listeners. I distinguish two ways scientific models can be dogwhistles: representational dogwhistling and fit-for-purpose dogwhistling. I illustrate both kinds of dogwhistling using an example from computational social science, the Diversity Trumps Ability theorem. I argue that dogwhistling threatens the objectivity of science, and propose some ameliorative strategies.
In non-causal explanations, some non-causal facts (such as mathematical, modal, or metaphysical facts) are used to explain some physical facts. However, precisely because these explanations abstract away from causal facts, they face two challenges: 1) it is not clear why one rather than another non-causal explanans would be relevant for the explanandum; and 2) it is not clear why standing in a particular explanatory relation (e.g., “counterfactual dependence”, “constraint”, “entailment”, “constitution”, “grounding”, and so on), and not in some other, would be explanatory. I develop an account of explanatory relevance which is based on erotetic constraints and show how it addresses these two challenges.
The ‘psychologist’s green thumb’ refers to the argument that an experimenter needs an indeterminate set of skills to successfully replicate an effect. This argument is sometimes invoked by psychological researchers to explain away failures of independent replication attempts of their work. In this paper, I assess the psychologist’s green thumb as a candidate explanation for individual replication failure and argue that it is potentially costly for psychology as a field. I also present other, more likely reasons for these replication failures. I conclude that appealing to a psychologist’s green thumb is not a convincing explanation for replication failure.
Behavioural welfare economics usually aims at mere means paternalism, helping agents better pursue their own goals. This paper discusses one initially promising way to inform policies addressed at agents who violate expected utility theory (EUT), namely what I call ‘CPT debiasing’. I argue that this approach is problematic even if we grant the normative authority of EUT, the descriptive adequacy of CPT (cumulative prospect theory), and the general acceptability of means paternalism. First, it is doubtful whether the CPT utility function measures what its proponents intend. Second, by imposing risk neutrality on agents, the approach involves a more problematic form of paternalism.
I develop a novel account of how non-epistemic aims and values can appropriately influence scientific investigation. At its heart is a process of epistemic projection, in which a non-epistemic aim or value is mapped to an epistemic research problem that aligns with that aim or value. Choices in research are then justified as a means of solving that research problem. This epistemic projection approach makes research responsive to non-epistemic aims and values yet remains consistent with the value-free ideal; it could be acceptable to parties on both sides of the values-in-science debate. It also promises to be useful in practice.
Scientists may sometimes generalize from their samples to broader populations when they have not yet sufficiently supported this generalization. Do such hasty generalizations also occur in experimental philosophy? To check, we analyzed 171 experimental philosophy studies published between 2017 and 2023. We found that most studies tested only Western populations but generalized beyond them without justification. There was also no evidence that studies with broader conclusions had larger, more diverse samples, but they nonetheless had higher citation impact. Our analyses reveal important methodological limitations of many experimental philosophy studies and suggest that philosophical training may not protect against hasty generalizations.
The central role of such epistemic concepts as theory, explanation, model, or mechanism is rarely questioned in philosophy of science. Yet what is their actual use in the practice of science? Here we deploy text-mining methods to investigate the usage of 61 epistemic notions in a corpus of full-text articles from the biological and biomedical sciences (N=73,771). The influence of disciplinary context is also examined by splitting the corpus into sub-disciplinary clusters. The results reveal the intricate semantic networks that these concepts actually form in scientific discourse, networks that do not always follow our intuitions, at least in some parts of science.
We provide five rearticulations of the thesis that the structure of spacetime is conventional, rather than empirically determined, based upon variation of the structures that are empirically underdetermined and the modal contexts in which this underdetermination occurs. Three of the five formulations of conventionalism are found to fail. Two are found to open up interesting new problems for researchers in the foundations of general relativity. In all five cases, our analysis explores the interplay between geometric identities, symmetry, conformal structure, and the dynamical content of physical theories, with the conventionalism dialectic deployed as a tool of explication, clarification, and exploration.
This paper argues against general claims for the epistemic superiority of experiment over observation. It does so by dissociating the benefits traditionally attributed to experiment from physical manipulation. In place of manipulation, we argue that other features of research methods do confer epistemic advantages in comparison to methods in which they are diminished. These features better track the epistemic successes and failures of scientific research, cross-cut the observation/experiment distinction, and nevertheless explain why manipulative experiments are successful when they are.
Teleosemantic theories aim to naturalize mental representation through the use of functions, typically based on past selection processes. However, the historical dependence of these theories has faced severe criticism, leading some philosophers to develop ahistorical alternatives.
This paper presents a new dilemma for all ahistorical teleosemantic theories, focusing in particular on the theories proposed by Timothy Schroeder and Bence Nanay. These theories require certain dispositions in the producers or consumers of mental representations. But the appeal to dispositions puts the proponents in an undesirable position: mental content is either overly dependent on current circumstances or ultimately dependent on historical factors.
This note presents a short reply to Henderson’s critical discussion of Schurz’s approach to the problem of induction based on the optimality of meta-induction. Henderson objects that the meta-inductive a posteriori justification of object-induction rests on a certain premise, namely an approximation condition, that she reveals as untenable. I reply that Henderson’s approximation condition is indeed too strong to be plausible, but it is not needed by the meta-inductive approach; a much weaker and highly plausible approximation condition is sufficient.
Basal cognition investigates cognition working upwards from non-neuronal organisms. Since it is committed to empirically testable hypotheses, a methodological challenge arises: how can experiments avoid using zoo-centric assumptions that ignore the ecological contexts that might elicit cognitively driven behaviour in non-neuronal organisms? To meet this challenge, I articulate the Principle of Dynamic Holism (PDH), a methodological principle for guiding research on non-neuronal cognition. PDH’s relation to holistic research programmes in human-focused cognitive science and psychology is described and then an argument from analogy based on holistic developmental biology is presented. Lastly, two experiments exemplifying the need for PDH are examined.
Recent advocates of “field philosophy” make the case that philosophy “needs to get outside more often”; alongside disciplinary modes of practice we should cultivate philosophical work that is “practically engaged, stakeholder-centered, and timely” (Frodeman and Briggle 2016). As illustrated by The Guide to Field Philosophy (2020), this takes a great many different forms. I draw on three examples of field-engaged philosophy of science that address the legacies of settler-colonialism in a field science, archaeology, to illustrate the promise of field philosophy in relation to a framework for analyzing “broadly engaged philosophy of science” proposed by Plaisance and Elliott (2021).
I propose an account of the model-based structure of present-day high-energy physics experiments, in which the relations among the theoretical, experimental, and simulation models constitute a non-linear structure akin to a network of models (NoM). I argue that the proposed NoM subsumes Suppes’ hierarchy of models (HoM) as the model-based characterization of the inference leading from the data to the validity or invalidity of the hypothesis tested in an experiment. In addition, the NoM involves a model-based characterization of the inference leading from the collision of particles to the acquisition of data, which is missing in Suppes’ HoM.