Studies have shown that US college and university professors are disproportionately left-leaning and Democratic, and these tendencies are especially pronounced in the social sciences. Critics of this ideological homogeneity have leveled a wide range of charges in light of these findings: that these political orientations seep into research and teaching; that they affect accumulated knowledge, policymaking, student attitudes, and American political culture; and that they promote motivated reasoning, bias, and groupthink. This chapter reviews the most credible of the arguments for greater ideological diversity and attempts to move beyond applied concerns by asking whether and how discussion of political diversity and bias in academia might be reconceptualized to form the basis for meaningful empirical studies.
We agree with Duarte et al. that it is worthwhile to study professions' political alignments. But we have seen no evidence to support the idea that social science fields with more politically diverse workforces generally produce better research. We also think that when considering ideological balance, it is useful to place social psychology within a larger context of the prevailing ideologies of other influential groups within society, such as military officers, journalists, and business executives.
A common way to classify duties has been to distinguish between positive (duties that prescribe certain actions) and negative (duties that proscribe actions). “Don't kill” is an example of a negative duty, while “Do right by others” is an example of a positive one. Negative duties, which must be observed absolutely, have often been called “strict,” while their counterparts are called “broad.” The prohibition against killing is absolute and therefore a strict duty, whereas one has a bit more latitude when it comes to positive duties. There are many ways of being charitable, for example, and one can be devoted to a greater or lesser extent.
There's some justification for this distinction, but its importance shouldn't be exaggerated. Clearly, some duties are positive and others negative; but no duty is less obligatory than any other, despite what popular sayings tell us. According to one such saying, “Justice is our most fundamental duty, and anyone who wishes to remain within the bounds of humanity must be faithful to it. Charity, by contrast, is a luxury we're not obliged to give.” Another saying holds that “in rendering to each that which is his due, we do only what we must. If we go beyond this, we deserve special credit, because charity is less required of us than are other duties.” On this view, negative duties are more obligatory, whereas positive duties are more meritorious.
Although many people think otherwise, positive duties are as obligatory as any.
Each man is enclosed within himself. None of us, as Leibniz said of the monad, has windows onto the rest of the world. So how do we communicate with one another? By means of external phenomena called signs.
A system of signs is called a language, which can be made up not just of words but of any signs whatever. Although the term “language” is customarily used to refer to spoken words, in our definition the signs of deaf mutes would also qualify.
Scholars often distinguish between natural and artificial signs. The first arise spontaneously, without reflection, while the second develop slowly and are the result of reflection, meditation, and progress. This distinction doesn't lack foundation. Some signs are established deliberately by human will, while others have an instinctual origin. But it's important to pay close attention to the meaning of the word “natural.” Some signs are natural in the sense that they involve spontaneous behavior that, much later in our development, serves to communicate our thoughts. A child laughs if he's happy, for example, and does so spontaneously. Yet if he sees others laugh or cry, he doesn't consider this a sign of joy or sorrow, for experience teaches him this only later.
But some have argued that there are natural signs in the proper sense of the word – that for children laughter and crying function as signs and are taken as such even before experience intervenes. Seeing a smiling person approach, doesn't a child himself smile?
Strictly speaking, sensation (as we've defined it) refers only to experiences of pleasure and pain. But the self also makes certain movements related to pleasure and pain – if an object causes pleasure, for example, we tend to move toward it, or in the opposite case away from it. In actuality, these movements fall more in the domain of activity than sensibility per se, but they're so closely related to sensibility that it's really quite impossible to separate them.
The tendency of the self to move in the direction of an agreeable object is called an inclination, and this definition also gives us a classification – that is, there are as many different types of inclinations as there are types of objects leading to such movements. Of these, there are three major classes: the self; other selves (our peers); and finally, certain ideas or conceptions of the mind, like the good or the beautiful. This also yields three types of inclinations: egoistic, altruistic, and higher.
The self is the object of egoistic inclinations, which are of two types. Some are purely conservative and try to keep things as they are, while others are acquisitive and seek to augment being. To conserve and to augment being are the two tendencies of nature. The first kind of inclination is called the instinct for conservation – the love of life. Whatever occurs, we find life dear, clinging to it even if it brings more pain than pleasure.
The great philosophers have tended to rely on one of two mutually exclusive approaches to the study of ethics – the first entirely empirical, the second entirely a priori. Epicurus, Mill, and Spencer took an empirical approach, while Kant took an a priori approach. The first begins with observation and proceeds by way of generalization and induction, reaching the pinnacle of its development with Mill. It consists in observing man, either when he's alone or with others; noting the circumstances in which he's happy; and then deriving the moral law by generalizing from these findings. Kant, by contrast, begins with the abstract concept of pure morality, assumes that the will is capable of acting independently of sensibility, and then asks what the law of this will must be.
But empiricism, no matter what degree of generalization it's able to attain, can never achieve the universality that characterizes the moral law. All empiricism can formulate are local and provisional rules, good only for a certain time and a certain number of individuals. Conversely, despite the fact that Kant made many concessions and scaled back the rigor of his initial formulas, his ethics remained imaginary, providing us with rules that an ideal and hypothetical being should follow – not man as he is.
The method we've followed, in contrast to empiricism and apriorism, is both deductive and experimental. We began by postulating a fact of experience – moral responsibility.
A definition is a proposition that tries to make a thing's nature clear to us. The terms of this proposition must be transposable without requiring a change in either quality or quantity. In definitions, in other words, extension and comprehension must be equivalent in the subject and attribute, as in: “Every man is a two-handed mammal = every two-handed mammal is a man.”
It's often said that there are two kinds of definitions – of things and of words.
Definitions of things reveal their nature, while those of words reveal their meanings. The Port-Royal logicians insisted that the difference between these two kinds of definitions is so great that each follows its own laws. Where definitions of words are arbitrary and nominal in the sense that a word might be given any definition whatever, definitions of things try to explicate the nature of real objects and thus can't be arbitrary. Definitions of words are incontestable, whereas definitions of things can be false and subject to debate.
But is this distinction valid? It doesn't seem so to us. Whenever we define something, whether it be a thing or a word, we're expressing its idea in terms of a proposition. Here's a definition: “Geometry is the science of magnitudes.” Now, how could we accept that this definition, as a definition of things, is so different from what it would become if we substituted some other word for “geometry”?
So far, we've examined the three faculties of perception and the three faculties of conception. Next we must examine attention, comparison, abstraction, generalization, judgment, and reasoning.
Attention is the faculty that allows the mind to concentrate on a particular object. Condillac argued that attention is but another word for an intense sensation, but this confuses the conditions of the phenomenon with the phenomenon itself. We often ignore an object unless it is striking, of course, but sensations are effects that the mind passively receives from things, while attention is by nature fundamentally active. So we shouldn't confuse the two. Moreover, strong sensations often result from the application of attention. When an object strikes us, we pay attention to it, and the sensation grows stronger and stronger. For these reasons, Condillac's theory is unacceptable.
What most distinguishes attention is that it's the work of our will. Attention takes two forms. In the first, it's the object that attracts the mind, the will intervening hardly at all, while in the second, attention is wholly voluntary as we direct our mind toward the object. In the first form, where attention is barely voluntary, the mind doesn't exercise much control. It's the spectacle of the object that commands our attention and keeps us from turning away. Obsession – a variety of attention in which the mind has difficulty shaking itself loose – is precisely the same phenomenon as it occurs in our inner life.
Serious objections have been raised to the idea that we have free will.
Several systems of thought have claimed that man isn't free, that everything he does follows well-determined laws. Hence the name “determinism.” Fatalism and determinism have often (and mistakenly) been confused. Fatalism assumes that all beings depend on a higher will that's omnipotent but also arbitrary and capricious. This assumption lay behind the ancient notion of fatum, or fate, as well as the Mohammedan notion of destiny. But fatalism has since fallen away, and we needn't refute it here.
The key argument of determinism is the irreconcilability of free choice and the principle of causality. Some determinists, wanting to demonstrate this alleged irreconcilability without leaving the world of inner experience, have tried to identify fixed psychological laws that govern our actions. Others have pointed to the contradiction between the principle of causality, as used in science, and the principle of freedom.
Today we'll discuss psychological determinism.
Here's an action: I go outside. Why? Because my health requires me to exercise, or because there's some task I must perform. These are the causes of my action, the motives that lead me to it. And because my action has a cause, it's not free. Freedom is only an illusion.
Determinists go on to pose the following dilemma:
either the act we thought free was actually caused by a motive and thus wasn't free; or
it didn't have any cause at all – which violates the principle of causality.
Induction is the form of reasoning that allows us to move from the particular to the general, or from facts to laws. Laws state causal relationships between two or more observed facts. So there are two steps to every induction:
We seek out a causal relationship between two facts.
This relationship identified, we extend it to all empirical cases where it might apply.
Here's an example of an induction where these two steps can easily be distinguished: Pascal wanted to determine the cause of fluctuations in the column of mercury in a barometric tube.
First Step. Pascal noted that, in a certain number of cases, the cause of the fluctuations is the weight of the air. In other words, he discovered a law that governs the phenomenon in the cases he observed. A causal relationship has been established.
Second Step. This relationship – which has been observed in a certain number of cases – is then extended to all possible cases, and Pascal asserts the general claim that the cause of variation in the height of the barometric column is variation in the weight of the atmosphere.
In the first step, Pascal sought to identify a causal relationship. How can such relationships be determined?
Mill, in his Logic, gave four methods for doing so – agreement (or concordance), difference, concomitant variations, and residues.
There are two distinct parts to mathematics, as there are to all the sciences. The truths that together compose mathematics must first be invented and then demonstrated. Consistent with this, there are two parts to the method of mathematics: one pertaining to invention, the other to demonstration.
It might seem at first glance that invention has no place in mathematics, for in mathematics truths are all deduced from one another. But there's a difference between geometry as taught and geometry as practiced. Once a theorem has been found, of course, the way to demonstrate it is to tie it to another that's previously been demonstrated. But first the theorem has to be found, and thus demonstration presupposes invention. What's the basis for the faculty of invention? The answer is – imagination. Those who invent are endowed with the gift of imagination, while others try to understand and develop their inventions. There is no fixed rule for the use of the imagination. Only one is imposed on the inventor – to submit his discovered proposition to a rigorous verification.
Invention represents the synthetic part of the mathematical sciences. But to demonstrate propositions once they've been found, they must be tied to previously demonstrated truths by means of the laws of deductive reasoning. Mathematical demonstration is carried out with the aid of definitions, axioms, and deduction.
Definitions are the material of the demonstration, which merely develops whatever is contained in the definition.
Axioms are the regulative principles of mathematical reasoning.
A method is a set of procedures the human mind follows in order to arrive at truth. These procedures differ depending on the object of study, so each type of science has its own method.
Let's begin by examining the different procedures the mind follows in order to arrive at truth.
There are two general procedures – analysis and synthesis. We'll have to define these words clearly, for they're often given different meanings.
For Condillac, analysis is the method followed by the mind when it breaks down a whole into its parts. Synthesis, by contrast, is the procedure of recomposition. When I dismantle something, I can be said to be analyzing it, and when I restore it to what it was previously, I'm synthesizing.
The Port-Royal logicians, however, gave these words a completely different meaning. For them, analysis is a regressive procedure that examines the conditions of a proposition until it arrives at something true. Synthesis is the inverse, as it begins with the proposition at which analysis arrived and ends at the proposition from which analysis began.
This definition was taken from geometry, which defines the two words in this way. For the Port-Royal school, analysis finds new truths, while synthesis proves to others what we already know to be true.
In the search for truth, the inventor follows the analytic method, while the synthetic method is – according to an expression of Port-Royal – one “of doctrine.”
We now know the object of psychology as well as its method. It's time to apply the method to the object.
This object, as we've seen, is to enumerate, describe, and classify the states of consciousness. But this should be done methodically, so we'll divide the states of consciousness into a certain number of classes and examine each more closely. We won't let ourselves be discouraged by the apparent diversity of states of consciousness but rather will search for the common characteristics that might serve as the foundation for such a division. There'll be as many faculties of the soul as there are perceptible classes.
A faculty is a specific mode of conscious activity, and there are as many different faculties as there are forms of the inner life. The soul has faculties in the same sense that inorganic bodies have properties and that complex living bodies have functions. The only difference is that a faculty refers to a larger sum of activity than a function and that a function refers to a larger sum of activity than a property.
How many faculties (or classes of states of consciousness) can be identified? There are three:
Activity: We act on the external world through the intermediary of our bodies and on the inner world through simple will, by directing our intelligence, exercising our thought, etc.
What's the meaning of the word “God”? While it's been given many different meanings, all refer to a being who's superior to ordinary human beings. But so vague a definition won't do. In our view, God is the absolute, that which exists in and by itself, outside of any relationship with anything else. If God exists, He's a being not limited by any other, determined by nothing outside Himself, completely and perfectly self-sufficient. So to ask if God exists is to ask what reason we have for believing in the existence of the absolute.
Of course, many arguments – sometimes divided into a priori and a posteriori – have been advanced to prove God's existence. But this is too unequal a division, for the vast majority of these proofs are a posteriori. Others have distinguished between proofs that are metaphysical and those that are a priori or between two types of a posteriori proofs – physical (which rest on external observation) and moral (which rest on introspection). But such “physical” proofs have no value without metaphysical support. Like a priori proofs, they rely primarily on the principles of reason. So we'll take a different approach, dividing proofs of the existence of God into only two categories – metaphysical and moral. What follows will demonstrate the need for this distinction.
Let's begin by examining metaphysical proofs for the existence of God. The definition we've given allows us to introduce a certain order into the exposition.
One philosophical doctrine – which historically has gone by various names – denies the existence of reason and recognizes only consciousness and external perception. A version of this doctrine, called sensualism, derives everything from sensation. This theory, which was advanced by Democritus and later by Epicurus and the Stoics, explains knowledge as a function of idea-images. Working from the assumption that the only action is that of like producing like, the sensualists hold that the soul, like the body, is material. Nevertheless, the soul remains distinct from the “atoms” that bodies in space throw off from themselves. These atoms, or εἴδωλα (images), are like condensed images of the bodies, and as they strike us the images become imprinted on the soul, leaving impressions representing the bodies from which they emanated. These impressions are ideas.
Over the years, the crudeness of this theory was gradually recognized. To improve it, the notion of consciousness was added to external perception, so that knowledge might be derived from experience alone. This doctrine, initially formulated by Locke, is called empiricism. According to the empiricists, the mind prior to experience is like a wax tablet on which nothing has yet been written – a tabula rasa, or blank slate.
More recently, an even stronger version of empiricism has been developed in England. Because it grants an important role to the association of ideas, this version is called associationism.
Habit is often defined as a tendency to repeat an action that's already been performed many times. But this definition, which goes back to Aristotle, is subject to several objections. First, if an action is simply continued over an extended period, it can become habitual without being repeated. Even with this correction, however, Aristotle's definition still might be criticized. It's true that a habit grows stronger with repetition, but the self has a tendency to reproduce an action after performing it just once. Continuity or repetition develops but doesn't constitute this initial seed. So to study habit in itself, and to really understand it, we'll have to take a fresh approach and examine habit in its normal state, as it develops after the single performance of an action.
Looked at this way, habit has two characteristics. First, it's a faculty of preservation – it ensures the survival of our past actions. The second characteristic is that the action preserved tends to reproduce itself, so that later it seems to appear out of nowhere.
So habit is the faculty that preserves our past actions as well as the force that tends to reproduce them.
We might also say that habit has almost all the characteristics of instinct, but to a lesser degree. First, instinct is unconscious, while habits become more unconscious the stronger they are, so that an extremely strong habit can make us act almost as unconsciously as does instinct.
The approach to ethics we've been developing rests on a single principle – moral responsibility. Up to this point, on the assumption that moral responsibility is a function of moral consciousness (consciousness in the realm of moral affairs), we've merely postulated this principle, not fully discussed it. Moral consciousness is a kind of judge that pronounces sentences on our actions and those of others. Because we judge ourselves as well as others, we felt justified in arguing that moral responsibility is the foundation of theoretical ethics. Moral consciousness can be clear or muddled, conscious or unconscious, mistaken or sound, enlightened or ignorant – but no one is completely without it. And because moral consciousness is universal, so is moral responsibility.
From this we deduced that human activity is governed by a moral law. Inquiring into the nature of this law, we examined, in order, morality and interest, the ethics of sentiment, and the morality of Kant. We concluded that the foundation of the moral law lies in the idea of finality – a conclusion that has two advantages:
The idea of finality has immediate implications for action, so that passion and calculations of interest need not play any role in ethics;
Men need not attempt actions that would be absurd or impossible.
The very conception of our end implies the will to realize it.