The simple question ‘What is empirical success?’ turns out to have a surprisingly complicated answer. We need to distinguish between meritorious fit and ‘fudged fit’, a distinction akin to that between prediction and accommodation. The final proposal is that empirical success emerges in a theory-dependent way from the agreement of independent measurements of theoretically postulated quantities. Implications for realism and Bayesianism are discussed.
Classical mechanics is empirically successful because the probabilistic mean values of quantum mechanical observables follow the classical equations of motion to a good approximation (Messiah 1970, 215). We examine this claim for the one-dimensional motion of a particle in a box, and extend the idea by deriving a special case of the ideal gas law in terms of the mean value of a generalized force used to define “pressure.” The examples illustrate the importance of probabilistic averaging as a method of abstracting away from the messy details of microphenomena, not only in physics, but in other sciences as well.
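The mean-value claim referred to here is a statement of Ehrenfest's theorem. As a sketch (with \(\hat{x}\) and \(\hat{p}\) the position and momentum operators, \(m\) the mass, and \(V\) the potential), the expectation values obey:

```latex
% Ehrenfest's theorem: quantum mean values follow the classical equations of motion
\frac{d\langle \hat{x} \rangle}{dt} = \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d\langle \hat{p} \rangle}{dt}
  = -\left\langle \frac{\partial V}{\partial x} \right\rangle .
```

These reduce to Newton's second law for \(\langle \hat{x} \rangle\) exactly when \(\langle \partial V / \partial x \rangle \approx \partial V(\langle \hat{x}\rangle)/\partial x\), i.e., when the potential varies slowly over the spread of the wave packet, which is why the classical equations hold only ‘to a good approximation’.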
What has science actually achieved? A theory of achievement should (1) define what has been achieved, (2) describe the means or methods used in science, and (3) explain how such methods lead to such achievements. Predictive accuracy is one truth-related achievement of science, and there is an explanation of why common scientific practices (of trading off simplicity and fit) tend to increase predictive accuracy. Akaike's explanation for the success of AIC is limited to interpolative predictive accuracy. But therein lies the strength of the general framework, for it also provides a clear formulation of many open problems of research.
No matter how often billiard balls have moved when struck in the past, the next billiard ball may not move when struck. For philosophers, this ‘theoretical’ possibility of being wrong raises a problem about how to justify our theories and models of the world and their predictions. This is the problem of induction. In practice, nobody denies that the next billiard ball will move when struck, so many scientists see no practical problem. But in recent times, scientists have been presented with competing methods for comparing hypotheses or models (classical hypothesis testing, BIC, AIC, cross-validation, and so on) which do not yield the same predictions. Here there is a problem.
Model selection involves a trade-off between simplicity and fit for reasons that are now fairly well understood (see Forster and Sober, 1994, for an elementary exposition). However, there are many ways of making this trade-off, and this chapter will analyse the conditions under which one method will perform better than another. The main conclusions of the analysis are that (1) there is no method that is better than all the others under all conditions, even when some reasonable background assumptions are made, and (2) for any methods A and B, there are circumstances in which A is better than B, and there are other circumstances in which B will do better than A. Every method is fraught with some risk even in well-behaved situations in which nature is ‘uniform’. Scientists will do well to understand the risks.
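The kind of comparison at issue can be sketched with a toy simulation (the data-generating model and all names here are illustrative, not taken from the chapter): fit polynomials of increasing degree to noisy data and score each by AIC and BIC, two different ways of trading off fit against simplicity.

```python
import numpy as np

# Simulated data: a quadratic "true curve" plus Gaussian noise (illustrative choice).
rng = np.random.default_rng(0)
n = 50
x = np.linspace(-1, 1, n)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(0, 0.3, n)

def score(degree):
    """Fit a polynomial of the given degree by least squares;
    return (AIC, BIC) under a Gaussian error model."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)              # maximum-likelihood noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 2                          # coefficients plus the noise variance
    aic = 2 * k - 2 * loglik                # AIC: penalty 2 per parameter
    bic = k * np.log(n) - 2 * loglik        # BIC: penalty ln(n) per parameter
    return aic, bic

degrees = range(1, 6)
aics = {d: score(d)[0] for d in degrees}
bics = {d: score(d)[1] for d in degrees}
print("AIC picks degree", min(aics, key=aics.get))
print("BIC picks degree", min(bics, key=bics.get))
```

Because BIC's per-parameter penalty grows with sample size while AIC's does not, the two criteria can disagree on the same data, which is one concrete instance of the risk the chapter describes: no single trade-off rule dominates in all circumstances.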
Sober (1984) has considered the problem of determining the evidential support, in terms of likelihood, for a hypothesis that is incomplete in the sense of not providing a unique probability function over the event space in its domain. Causal hypotheses are typically like this because they do not specify the probability of their initial conditions. Sober's (1984) solution to this problem does not work, as will be shown by examining his own biological examples of common cause explanation. The proposed solution will lead to the conclusion, contra Sober, that common cause hypotheses explain statistical correlations and not matchings between event tokens.
There is an interesting problem concerning component causes posed by Cartwright (1983) in her book How the Laws of Physics Lie, which is easily explained in terms of a simple example. Consider a cup sitting on a table. Why doesn’t it move? The explanation given by Newtonian mechanics is that the cup is experiencing two forces: the downward force of gravity and the upward ‘elastic’ force of the table. These two forces exactly cancel to produce a zero resultant force. This zero resultant force then produces a zero acceleration, which ‘explains’ why the cup doesn’t move. This rather simple, yet typical, example of theoretical explanation raises a surprisingly deep puzzle: what justification is there for believing in the existence of the component forces if they are redundant to the explanation? For it does not really matter what size the component forces are so long as their resultant is zero.
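In symbols, the Newtonian explanation runs as follows (with \(N\) the table's upward force and \(mg\) the cup's weight, standard labels for the two component forces mentioned above):

```latex
% Component forces cancel; only the resultant enters the prediction.
F_{\text{net}} = N - mg = 0
\quad\Longrightarrow\quad
a = \frac{F_{\text{net}}}{m} = 0 .
```

The puzzle is visible in the derivation: any pair of components summing to zero yields the same prediction, so the observed rest of the cup does not by itself discriminate among them.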
Skyrms's formulation of the argument against stochastic hidden variables in quantum mechanics using conditionals with chance consequences suffers from an ambiguity in its “conservation” assumption. The strong version, which Skyrms needs, packs in a “no-rapport” assumption in addition to the weaker statement of the “experimental facts.” On the positive side, I argue that Skyrms's proof has two unnoted virtues (not shared by previous proofs): (1) it shows that certain difficulties that arise for deterministic hidden variable theories that exploit a non-classical probability theory extend to the stochastic case; (2) the use of counterfactual conditionals relates the Bell puzzle to Dummett's (1976) discussion of realism in quantum mechanics.
Section 2 will begin by formulating Reichenbach’s principle of common cause in a more general way than is usual, but in a way that makes the idea behind it much clearer. The way that Salmon has pressed the principle into the service of scientific realism is then explained in terms of an example; van Fraassen objects, Salmon modifies his stand, and van Fraassen rejoins, all in section 2 (see van Fraassen 1980, chapter 2).
In this episode I think van Fraassen is right in claiming, against Salmon, that there is no categorical imperative for common cause explanation, and I add my own examples in section 3. The first example is the explanation of the correlation between the equilibrium positions of two objects on a balance in terms of their property of “mass” and the law of moments.