We reassess Woodward’s counterfactual account of explanation in relation to regularity explananda. Woodward presents an account of causal explanation. We argue, by using an explanation of Kleiber’s law to illustrate, that the account can also cover some noncausal explanations. This leads to a tension between the two key aspects of Woodward’s account: the counterfactual aspect and the causal aspect. We explore this tension and make a case for jettisoning the causal aspect as constitutive of explanatory power in connection with regularity explananda.
According to Woodward’s causal model of explanation, explanatory information is relevant for manipulation purposes and indicates by means of invariant causal relations how to change the value of certain target explanandum variables by intervening on others. Therefore, the depth of an explanation is evaluated through the size of the domain of invariance of the generalization involved. In this article, I argue that Woodward’s account of explanatory relevance is still unsatisfactory and claim that the depth of an explanation should be explicated in terms of the size of the domain of circumstances which it designates as leaving the explanandum unchanged.
Evolution is often characterized as a tinkerer creating efficient but messy solutions. We analyze the nature of the problems that arise when trying to explain and understand cognitive phenomena created by this haphazard design process. We present a theory of explanation and understanding and apply it to a case problem—solutions generated by genetic algorithms. By analyzing the nature of solutions that genetic algorithms present to computational problems, we show, first, that evolutionary designs are often hard to understand because they exhibit nonmodular functionality and, second, that breaches of modularity wreak havoc on our strategies of causal and constitutive explanation.
Advocates of the counterfactual approach to causal inference argue that race is not a cause, despite the fact that it is commonly treated as one by scientists in many disciplines. I object that their argument is unsound because two of its premises are false. I also sketch an argument to the effect that racial discrimination cannot be explained unless one assumes race to be a cause.
The manipulationist account of causation provides a framework for assessing causal claims and the experiments used to test them. But its pertinence to the more general class of scientific experiments—particularly those experiments not explicitly designed for testing causal claims—is less clear. I aim to show (1) that the set of causal inferences afforded by any experiment is determined solely on the basis of contrasting case structures that I call “experimental series” and (2) that the conditions that suffice for causal inference obtain quite commonly, even among “ordinary” scientific experiments not explicitly designed for the testing of causal claims.
I present three reasons why philosophers of science should be more concerned about violations of causal faithfulness (CF). In complex evolved systems, mechanisms for maintaining equilibrium states are highly likely to violate CF. Even when such systems do not precisely violate CF, they may nevertheless generate precisely the same problems for inferring causal structure from probabilistic relationships in data as do genuine CF violations. Thus, potential CF violations are particularly germane to experimental science when we rely on probabilistic information to uncover causal structures since we cannot then use those structures to predict the right experiments to ‘catch out’ hidden causal relationships.
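The canceling-paths structure behind such violations can be sketched in a toy linear model (the variables and coefficients below are illustrative assumptions, not drawn from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical linear model exhibiting a faithfulness violation:
# X causes Y along two paths whose effects exactly cancel, as a
# homeostatic mechanism maintaining an equilibrium might arrange.
a, b = 2.0, 1.5          # X -> Z and Z -> Y path coefficients
c = -a * b               # direct X -> Y effect, tuned to cancel the indirect path

X = rng.normal(size=n)
Z = a * X + rng.normal(size=n)
Y = b * Z + c * X + rng.normal(size=n)

# X is a genuine cause of Y, yet the two are (nearly) uncorrelated,
# so faithfulness-based discovery would miss the X -> Y link.
print(abs(np.corrcoef(X, Y)[0, 1]))   # close to 0
print(abs(np.corrcoef(Z, Y)[0, 1]))   # clearly nonzero
```

Probabilistic independence here masks a real causal connection, which is exactly the inferential trap described above.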
Using a variety of different results from the literature, I show how causal discovery with experiments is limited unless substantive assumptions about the underlying causal structure are made. These results undermine the view that experiments, such as randomized controlled trials, can independently provide a gold standard for causal discovery. Moreover, I present a concrete example in which causal underdetermination persists despite exhaustive experimentation and argue that such cases undermine the appeal of an interventionist account of causation as its dependence on other assumptions is not spelled out.
This article introduces the notion of a kind of inference called black box measurement and argues that it is both historically and philosophically significant. Thinking about certain classic cases of underdetermination using this notion can give us a better understanding of how these cases are resolved. I take the main philosophical problem of black box measurement to be the justification of assumptions that are needed in order to make these measurements. I sketch some ways in which such enabling assumptions might be justified.
Debiasing procedures are experimental methods aimed at correcting errors arising from the cognitive biases of the experimenter. We discuss two of these methods, the predesignation rule and randomization, showing to what extent they are open to the experimenter’s regress: there is no metarule to prove that, after implementing the procedure, the experimental data are actually free from biases. We claim that, from a contractarian perspective, these procedures are nonetheless defensible since they provide a warrant of the impartiality of the experiment: we only need proof that the result has not been intentionally manipulated for prima facie acceptance.
Agreement between “independent” measurements of a theoretically posited quantity is intuitively compelling evidence that a theory is, loosely speaking, on the right track. But exactly what conclusion is warranted by such agreement? I propose a new account of the phenomenon’s epistemic significance within the framework of Bayesian epistemology. I contrast my proposal with the standard Bayesian treatment, which lumps the phenomenon under the heading of “evidential diversity.”
Testing a point null hypothesis is a classical but controversial issue in statistical methodology. A prominent illustration is Lindley’s Paradox, which emerges in hypothesis tests with large sample size and exposes a salient divergence between Bayesian and frequentist inference. A close analysis of the paradox reveals that both Bayesians and frequentists fail to satisfactorily resolve it. As an alternative, I suggest Bernardo’s Bayesian Reference Criterion: (i) it targets the predictive performance of the null hypothesis in future experiments; (ii) it provides a proper decision-theoretic model for testing a point null hypothesis; (iii) it convincingly addresses Lindley’s Paradox.
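The divergence the paradox exposes can be reproduced with a hypothetical large-sample coin-flip experiment (the sample size and count below are chosen purely for illustration):

```python
import math

# Hypothetical data chosen to exhibit Lindley's Paradox: with a very
# large sample, a small deviation from the null is "significant" for
# the frequentist yet supports H0 for the Bayesian.
n, x = 100_000, 50_320          # flips and observed heads; H0: theta = 0.5

# Frequentist two-sided p-value (normal approximation).
z = (x - n * 0.5) / math.sqrt(n * 0.25)
p_value = math.erfc(z / math.sqrt(2))

# Bayes factor BF01 with a uniform prior over theta under H1.
# The marginal likelihood under H1 is 1/(n+1), so
# BF01 = (n+1) * Binom(x; n, 0.5), computed in log space to avoid underflow.
log_pmf = (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
           + n * math.log(0.5))
bf01 = (n + 1) * math.exp(log_pmf)

print(f"p = {p_value:.3f}")   # below 0.05: the frequentist rejects H0
print(f"BF01 = {bf01:.1f}")   # well above 1: the Bayesian favors H0
```

The same data thus license opposite verdicts, which is the divergence Bernardo's criterion is meant to resolve.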
Formal Epistemology, Decision Theory, and Game Theory
An evolutionary basis for Bayesian rationality is suggested, by considering how natural selection would operate on an organism’s ‘policy’ for choosing an action depending on an environmental signal. It is shown that the evolutionarily optimal policy, as judged by the criterion of maximal expected reproductive output, is the policy that, for each signal, picks an action that maximizes conditional expected output given that signal. This suggests a possible route by which Bayes-rational creatures might have evolved.
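The coincidence of the evolutionarily optimal policy and the Bayes-rational policy can be checked by brute force in a toy signal–action model (all states, signals, and payoffs below are illustrative assumptions, not the article's):

```python
from itertools import product

# Toy model: an organism observes a signal and chooses an action;
# fitness is expected reproductive output (numbers are hypothetical).
states  = {"wet": 0.3, "dry": 0.7}                 # prior P(state)
signals = {"wet": {"dark": 0.8, "clear": 0.2},     # P(signal | state)
           "dry": {"dark": 0.1, "clear": 0.9}}
payoff  = {("shelter", "wet"): 5, ("shelter", "dry"): 2,
           ("forage",  "wet"): 1, ("forage",  "dry"): 6}
actions = ["shelter", "forage"]

def expected_output(policy):
    """Expected reproductive output of a signal -> action policy."""
    return sum(p_s * signals[s][sig] * payoff[(policy[sig], s)]
               for s, p_s in states.items() for sig in signals[s])

# Exhaustively find the evolutionarily optimal policy ...
all_policies = [dict(zip(["dark", "clear"], acts))
                for acts in product(actions, repeat=2)]
best = max(all_policies, key=expected_output)

# ... and the Bayes-rational policy: for each signal, pick the action
# with the highest conditional expected payoff given that signal.
bayes = {sig: max(actions, key=lambda a: sum(
             states[s] * signals[s][sig] * payoff[(a, s)] for s in states))
         for sig in ["dark", "clear"]}

print(best == bayes)   # the two policies coincide
```

In this toy case the fitness-maximizing policy is exactly the one that conditions on each signal, mirroring the article's result.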
Does the strength of a particular belief depend upon the significance we attach to it? Do we move from one context to another, remaining in the same doxastic state concerning p yet holding a stronger belief that p in one context than in the other? For that to be so, a doxastic state must have a certain sort of context-sensitive complexity. So the question is about the nature of belief states, as we understand them, or as we think a theory should model them. I explore the idea and how it relates to work on imprecise probabilities and second-order confidence.
Multiarm bandit problems have been used to model the selection of competing scientific theories by boundedly rational agents. In this article, I define a variable-arm bandit problem, which allows the set of scientific theories to vary over time. I show that Roth-Erev reinforcement learning, which solves multiarm bandit problems in the limit, cannot solve this problem in a reasonable time. However, social learning via preferential attachment combined with individual reinforcement learning, which discounts the past, does.
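For readers unfamiliar with the baseline model, a minimal Roth-Erev learner on a fixed two-armed bandit can be sketched as follows (payoffs and parameters are illustrative; the article's variable-arm setting is not reproduced here):

```python
import random

def roth_erev(trials=2000, seed=0):
    """Basic Roth-Erev reinforcement on a two-armed bandit with fixed payoffs."""
    rng = random.Random(seed)
    payoff = {"A": 0.7, "B": 0.3}       # hypothetical per-pull payoffs
    q = {"A": 1.0, "B": 1.0}            # initial propensities
    for _ in range(trials):
        # Choose an arm with probability proportional to its propensity ...
        arm = "A" if rng.random() < q["A"] / (q["A"] + q["B"]) else "B"
        # ... and reinforce the chosen arm by the payoff it returned.
        q[arm] += payoff[arm]
    return q["A"] / (q["A"] + q["B"])   # final choice probability of arm A

# Averaged over independent runs, play concentrates on the better arm.
mean_share = sum(roth_erev(seed=s) for s in range(20)) / 20
print(mean_share > 0.5)
```

Because reinforcement accumulates past payoffs without discounting, this learner is slow to abandon an arm, which is why it struggles once the set of arms (theories) changes over time.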
Game theory is often used to explain behavior. Such explanations typically proceed by demonstrating that the behavior in question is a Nash equilibrium: agents are in Nash equilibrium if each agent’s strategy maximizes her payoff given her opponents’ strategies. Nash equilibria are fundamentally static, but it is usually assumed that equilibria will be the outcome of a dynamic process of learning or evolution. This article demonstrates that, even in the simplest setting, this need not be true. In two-strategy games with just a single equilibrium, a family of imitative learning dynamics does not lead to equilibrium.
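The Nash condition invoked here can be checked mechanically for pure strategies in a small hypothetical 2×2 game (the payoff matrices below are illustrative, not the article's):

```python
import itertools

# Payoffs for a hypothetical 2x2 coordination game:
# rows[i][j] is the row player's payoff when row plays i and column plays j;
# cols[i][j] is the column player's payoff at the same profile.
rows = [[3, 0], [1, 2]]
cols = [[3, 1], [0, 2]]

def is_nash(i, j):
    """Profile (i, j) is Nash iff neither player gains by unilateral deviation."""
    row_best = all(rows[i][j] >= rows[k][j] for k in range(2))
    col_best = all(cols[i][j] >= cols[i][k] for k in range(2))
    return row_best and col_best

equilibria = [(i, j) for i, j in itertools.product(range(2), repeat=2)
              if is_nash(i, j)]
print(equilibria)   # -> [(0, 0), (1, 1)]
```

The static check says nothing about whether learning agents would ever arrive at either profile, which is precisely the gap the article exploits.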
Traditionally, cognitive values have been thought of as a collective pool of considerations in science that frequently trade off against each other. I argue here that a finer-grained account of cognitive values can help reduce such tensions. I separate the values into three groups, minimal epistemic criteria, pragmatic considerations, and genuine epistemic assurance, based in part on the distinction between values that describe theories per se and values that describe theory-evidence relationships. This allows us to clarify why these values are central to science and what role they should play, while reducing the tensions among them.
We argue that the analysis of cognitive attitudes should play a central role in developing more sophisticated accounts of the proper roles for values in science. First, we show that the major recent efforts to delineate appropriate roles for values in science would be strengthened by making clearer distinctions among cognitive attitudes. Next, we turn to a specific example and argue that a more careful account of the distinction between the attitudes of belief and acceptance can contribute to a better understanding of the proper roles for values in a case study from paleoanthropology.
The argument from inductive risk attempts to show that practical and ethical costs of errors should influence standards of evidence for accepting scientific claims. A common objection charges that this argument presupposes a behavioral theory of acceptance that is inappropriate for science. I respond by showing that the argument from inductive risk is supported by a nonbehavioral theory of acceptance developed by Cohen, which defines acceptance in terms of premising. Moreover, I argue that theories designed to explain how acceptance can be guided exclusively by epistemic values suffer from difficulties that do not afflict Cohen’s theory.
Proponents of the value ladenness of science rely primarily on arguments from underdetermination or inductive risk, which share the premise that we should consider values only where the evidence runs out or leaves uncertainty; they adopt a criterion of lexical priority of evidence over values. The motivation behind lexical priority is to avoid reaching conclusions on the basis of wishful thinking rather than good evidence. This is a real concern. I argue, however, that giving lexical priority to evidential considerations over values is a mistake and is unnecessary for avoiding wishful thinking. Values have a deeper role to play in science.