Quine's essay “Ontological Relativity” has brought about not a little confusion and disagreement. What is Quine's doctrine, and what are his arguments for it? The following paragraphs search for an answer. First, a word about my aims. I will avoid adding to the already extensive discussion of Quine's older thesis of the indeterminacy of translation. Instead, where connections between the old and new doctrines become apparent, I will focus on the connections themselves and their repercussions for ontological relativity.
Bogen and Woodward argued for the indirect connection between data and theory in terms of their conception of “phenomena.” I outline and elaborate on their presentation. To illuminate the connection with contemporary thinking in terms of models, I distinguish between phenomena tokens, representations of which can be identified with data models, and phenomena types, which can be identified with relatively low-lying models or aspects of models in the model hierarchy. Throughout I stress the role of idealization in these considerations.
There are few, perhaps no known, exact, true, general laws. Some of the work of generalization is carried by ceteris paribus generalizations. I suggest that many models continue such work in more complex form, with the idea of ceteris paribus conditions thought of as extended to more general conditions of application. I use the term regularity guide to refer collectively to cp-generalizations and such regularity-purveying models. Laws in the traditional sense can then be thought of as idealizations, which idealize away from the conditions of application of regularity guides. If we keep clearly in mind the status of laws as such idealizations, problems surrounding traditional topics—such as lawlikeness, corresponding counterfactuals and modality—no longer look to be intractable.
This essay endorses the conclusion of Sklar's “Dappled Theories in a Uniform World” that he announces in his abstract, that notwithstanding recent attacks foundational theories are universal in their scope. But Sklar's rejection of a “pluralist ontology” is questioned. It is concluded that so-called “foundational” and “phenomenological” theories are on a much more equal footing as sources of knowledge than Sklar would allow, that “giving an ontology” generally involves dealing in idealizations, and that a transfigured “fictionalism” provides an (in many respects) better model of scientific knowledge than the model of “foundational truths.”
This paper examines the so-called “gauge argument” sometimes used by physicists to motivate the introduction of gauge fields, here facilitated by an informal exposition of the fiber bundle formalism. The discussion suggests some preliminary ways of understanding the connection between gauge fields and interactions.
An Interpretive Introduction to Quantum Field Theory (Teller 1995; hereafter IIQFT) supersedes most of my prior work on quantum field theory. The gossip mill has described this book as a popularization of the most elementary parts of Bjorken and Drell (1965), which into the 1980s was the most widely used quantum field theory text. As with any good caricature, there is a great deal of truth in this comparison. Like Bjorken and Drell, IIQFT presents the theory largely as it existed in the 1950s. But in order to see aspects of structure and interpretation more clearly, IIQFT presents the theory stripped of all the details needed for application. IIQFT also does not treat contemporary methods, such as the functional approach, or important recent developments, especially gauge theories and the renormalization group. Nonetheless it is hoped that by laying out the structure of the theory's original form in the 1950s, much of which survives in contemporary versions, and by developing a range of ways of thinking about that theory physically, one does essential groundwork for a thorough understanding of what we have today.
Why do we use the term ‘quantum field theory’? A good fraction of the work done in IIQFT aims to clarify the appropriateness, accurate development, and limitations of the application of the epithet ‘field’, as well as an examination of alternatives. While called a ‘field theory’, quantum field theory (QFT) is also taken to be our most basic theory of ‘elementary particles’.
Huggett and Weingard's critical review provides an opportunity to continue the interpretive examination of quantum field theory in terms of some specific issues as well as a comparison of alternative approaches to the subject. This note recasts their example of inequivalent Fock spaces in an effort to further clarify what it illustrates. Questions are addressed about the role of analogy in developing quantum field theory and about the conflict between formal and concrete methods in both physics and its interpretation, continuing the well-known historical debate between Pierre Duhem and James Clerk Maxwell. Huggett and Weingard's examination very usefully occasions clarification on some points of exposition which, it is hoped, will make An Interpretive Introduction to Quantum Field Theory a more useful resource for understanding this subject.
Professor Murray Gell-Mann told us how, in 1963, in a submission to Physics Letters, he “employed the term ‘mathematical’ for quarks that would not emerge singly and ‘real’ for quarks that would.” Three years later he offered an improved “characterization of mathematical quarks by describing them in terms of the limit of an infinite potential, essentially the way confinement is regarded today. Thus what I meant by ‘mathematical’ for quarks is what is now generally thought to be both true and predicted by QCD.” But in using the term “mathematical” Professor Gell-Mann got himself into some hot water, for “up to the present, numerous authors keep stating or implying that when I wrote that quarks were likely to be ‘mathematical’ and unlikely to be ‘real,’ I meant that they somehow weren't there. Of course, I meant nothing of the kind.”
How did Gell-Mann get himself into this little predicament? “I did not want to call [confined] quarks ‘real’ because I wanted to avoid painful arguments with philosophers about the reality of permanently confined objects. In view of the widespread misunderstanding of my carefully explained notation, I should probably have ignored the philosopher problem and used different words.”
At the conference Gell-Mann told us about the doctor's prescription he kept posted in his office admonishing him not to debate philosophers, suggesting that his choice of the word “mathematical” was his effort to follow the prescription.
This paper digests technical commonplaces of quantum field theory to present an informal interpretation of the theory by emphasizing its connections with the harmonic oscillator. The resulting “harmonic oscillator interpretation” enables newcomers to the subject to get some intuitive feel for the theory. The interpretation clarifies how the theory relates to observation and to quantum mechanical problems connected with observation. Finally the interpretation moves some way towards helping us see what the theory comes to physically.
The paper also argues that, in important respects, interpretive problems of quantum field theory are problems we know well from conventional quantum mechanics. An important exception concerns extending the puzzles surrounding the superposition of properties in conventional quantum mechanics to an exactly parallel notion of superposition of particles. Conventional quantum mechanics seems incompatible with a classical notion of property on which all quantities always have definite values. Quantum field theory presents an exactly analogous problem with saying that the number of “particles” is always definite.
In quantum field theory divergent expressions are “discarded”, leaving finite expressions which provide the best predictions anywhere in science. In fact, this “renormalization procedure” involves no mystery or illegitimate operations. This paper explains, in terms accessible to non-experts, how the procedure really works and explores some different ways in which physicists have suggested that one understand it.
Previous work has shown that the problem of measurement in quantum mechanics is not correctly seen as one of understanding some allegedly univocal process of measurement in nature which corresponds to the projection postulate. The present paper introduces a new perspective by showing that how we are to understand the nature of the change of quantum mechanical state on measurement depends very sensitively on the interpretation of the state function, and by showing how attention to this dependence can greatly sharpen the problems and the relations between them. In particular, the problems take a form resembling their traditional formulation only on an inexact value interpretation, according to which the state function attributes inexact values of quantities to systems. On other interpretations we can apply (with various drawbacks) the subensemble idea, according to which a discontinuous change of quantum mechanical description results on measurement simply because we need a new state function to describe a new object.
If we take the state function of quantum mechanics to describe belief states, arguments by Stairs and by Friedman and Putnam show that the projection postulate may be justified as a kind of minimal change. But if the state function takes on a physical interpretation, it provides no more than what I call a fortuitous approximation of physical measurement processes, that is, an unsystematic form of approximation which should not be taken to correspond to some one univocal “measurement process” in nature. This fact suggests that the projection postulate does not provide a proper locus for interpretive investigation. Readers will also find section 3's analysis of fortuitous approximations, presented without the perils of quantum mechanics, of independent interest.
Why does Bohr nowhere discuss the projection postulate? He has the courtesy to cast at least a few disparaging words at some other notions for which he has no use, such as quantum logic. But he will not even admit the projection postulate as a subject for discussion. Another way to raise the puzzle is to point out that, although Bohr has a lot to say about measurement, he won't even recognize the existence of what has come to be called the “problem of measurement”.
I have exaggerated slightly. We have one relevant comment, in the report of the discussion after Bohr's talk at the 1938 Warsaw conference:
Professor Bohr wished to say, relative to the question propounded by the president, that the duality he noticed in the interpretation of the formalism of quantum mechanics was, in his opinion, a question of choosing the most adequate description of the experiment.