The numbered items below offer, in no particular order, reflections on what to do and what not to do in research, along with comments on various current disputes in the discipline.
1. Albert Einstein once almost said, “Our models should be as simple as possible—but no simpler.”Footnote 1 That remains good advice in developing statistical models.Footnote 2 Unfortunately, the plethora of so-called control variables in political-science regressions suggests that most of us have no real idea of what matters, or why, or exactly how. It is rather like a football pile-up with many bodies: somewhere underneath the players is the ball, but we cannot be sure where until the rubble has been removed.Footnote 3
2. Data analysis has its own version of Gresham’s Law: namely, “Easily available data drive out the painful, time-consuming, and often very costly effort to collect your own data”—even though those data may be far more relevant to the problem(s) about which you (and others) care.Footnote 4
3. Good methodology and good research design, although essential to good science, are not a substitute for good theory and well-thought-out concepts.Footnote 5 Neither do they make you smart or prevent you from making stupid mistakes in making sense of the world, although they can help you to avoid certain kinds of errors. Any tool must be used correctly to be of any real use, and no tool is smarter than the person who uses it. For example, to get Gary King’s programs to present useful answers, you need questions worth answering, and if your ideas are all a-muddle, improving your graphics will not necessarily help much.Footnote 6 Similarly, if Richard Fenno “soaks and pokes,” he derives important insights into how Congress members think and behaveFootnote 7; if I were to soak and poke, all I likely would get is irritated interviewees and dirty bath water.
4. Theory-building and empirical work go hand in hand.Footnote 8
5. Every new methodological technique or tool promises more for those who use it than it actually delivers.Footnote 9 However, that is not to say that we have not seen great improvements in methodology in my lifetime.Footnote 10
6. It has been said, “Give a graduate student a hammer and he’ll discover that everything is a nail.”Footnote 11 Unfortunately, as Karl Marx once said, there is no single “royal road to truth.”Footnote 12 Those who jump on a methodological bandwagon very, very early might be lucky enough to have an article published in a top journal while the bloom has not yet left the rose and the methodology is still seen as cutting edge. After that window closes, an article probably has to make an actual substantive contribution. Relatedly, Gary King observed that there are tradeoffs between a focus on teaching “cutting-edge” methods and teaching all of the methods that might be useful to graduate students in their subsequent careers. The first has the limitation that by the time you leave graduate school, it is likely that the methods taught will be old hat; the second has the limitation that by the time you leave graduate school, it is likely that you will be quite old.Footnote 13 King’s solution to obsolescence is to teach the fundamentals—that is, the underlying theory of inference from which all models are developed.Footnote 14
7. We should applaud the recent push to use methodology that allows us to get a better handle on causalityFootnote 15—most important, methods that require us to directly examine change over time.Footnote 16 As I opined some years ago: “Trying to get at causality with cross-sectional methods is like trying to tell time with a stopped watch; you might get it right, but only by accident.”Footnote 17
8. Most political scientists are too lazy to do historical analysis or they lack the necessary training to do it well. One of the few advantages of being older is that you recognize that things were not always as they are now (e.g., African Americans were not always overwhelmingly Democrats; the parties were not always polarized around abortion). Thus, growing older is the ordinary scholar’s substitute for studying history.Footnote 18
9. The search for mechanism appears to be the present-day methodological substitute for the search for the Holy Grail.Footnote 19 Here, a warning made by Jon Elster some decades ago (1989, 3–12) seems worth repeating, although the rephrasing is my own: “For any mechanism one can suggest, it is likely that there is another (not necessarily equal) mechanism also at play in the same setting that would give rise to a quite different set of outcomes.” Although it is important to identify mechanisms that might be involved in any social process, deciding why some turn out to be more important than others in given settings is the real trick.Footnote 20
10. Most of the supposed distinctions between qualitative and quantitative research are wrong.
10A. I do not have a problem with a distinction between qualitative and quantitative work defined in terms of level of measurement of key variables—that is, calling research qualitative if it involves nominal or ordinal variables and quantitative if it involves interval or ratio variables.Footnote 21 However, it is important to recognize that such a divide is far from hard and fast. There is almost always an issue of choice in how to operationalize a variable. For example, do we use nominal categories for religious groupings or do we take into account “levels of religiosity”? If the latter, do we seek only an ordinal coding or do we look for quantitative measures? Even more important, however, it is a mistake to think that because you are initially doing work involving only nominal or ordinal variables, standard statistical tools are therefore useless.Footnote 22 You just have to be sure to use the right statistical tools.Footnote 23
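To make the “right tools” point concrete, here is a minimal sketch in Python—with fabricated data and hypothetical variable names—contrasting a tool suited to nominal measurement (a chi-square test of independence) with one suited to ordinal measurement (a Spearman rank correlation). It is an illustration of the general point, not a recommendation of these particular tests.

```python
# A minimal, hypothetical illustration: one substantive question
# ("does religiosity relate to turnout?") analyzed with tools matched
# to the level of measurement of the variables involved.
import numpy as np
from scipy.stats import chi2_contingency, spearmanr

# Nominal coding: religious affiliation by turnout (fabricated counts).
# Rows: Protestant, Catholic, unaffiliated; columns: voted, did not vote.
affiliation_by_turnout = np.array([
    [120, 80],
    [100, 70],
    [60, 90],
])
chi2, p_nominal, dof, _ = chi2_contingency(affiliation_by_turnout)

# Ordinal coding: a 1-5 religiosity scale against a 1-4 turnout-frequency
# scale for ten (fabricated) respondents. Spearman's rho uses only ranks,
# respecting the ordinal rather than interval character of the measures.
religiosity = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
turnout_frequency = [1, 1, 2, 2, 3, 3, 3, 4, 3, 4]
rho, p_ordinal = spearmanr(religiosity, turnout_frequency)

print(f"Chi-square (nominal coding): chi2={chi2:.2f}, p={p_nominal:.3f}")
print(f"Spearman rho (ordinal coding): rho={rho:.2f}, p={p_ordinal:.3f}")
```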
10B. If the qualitative-quantitative distinction is in terms of whether the hypotheses being tested are couched in quantitative as opposed to ordinal terms, most work that claims to be quantitative really is not. It is, at best, ordinal. Taagepera (2008) reminded us that almost never does continuing research in an area of political science examine whether the (regression) parameters estimated from earlier work are being reproduced; all that is asked is whether the signs on given variables are in the predicted direction.Footnote 24
10C. Another reason to be suspicious of the qualitative-quantitative distinction is suggested by Taagepera (2008), who was originally trained as a physicist. He observed that theory in physics involves relatively few variables; that it draws on closely interlinked sets of theories involving those variables; that the “dimensionality” on the right-hand and left-hand sides of an equation is the same (e.g., E = IR); and that some of the parameters it estimates (e.g., the speed of light and the gravitational acceleration at Earth’s surface) are rather “fundamental.” From the perspective of a physicist, there is virtually nothing in political science that counts as theory, regardless of whether it calls itself quantitative or qualitative.Footnote 25
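A worked contrast may make the dimensional point concrete; the regression example here is my own hypothetical, not Taagepera’s.

```latex
% Ohm's law: the units on the two sides balance exactly.
E = IR, \qquad [E] = [I]\,[R]
      = \mathrm{A}\cdot\frac{\mathrm{V}}{\mathrm{A}} = \mathrm{V}.
% A typical (hypothetical) political-science regression:
%   vote share (%) = b0 + b1*(GDP growth, %) + b2*(age, years) + error.
% Nothing in the theory fixes the dimensions of b1 and b2; the
% coefficients simply absorb whatever units make the equation balance.
\text{VoteShare} = \beta_0 + \beta_1\,\text{Growth} + \beta_2\,\text{Age} + \varepsilon
```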
10D. Sometimes the qualitative-quantitative divide is defined in terms of how many cases are in the dataset. However, that also is not a very useful distinction—although it is interesting that some qualitatively oriented scholars use this essentially quantitative distinction (i.e., “how many”) to decide where a particular piece of research falls.Footnote 26 The more cases you have—assuming that they comprise a random sample from some posited set—the easier it is to determine when the relationship(s) you find can be ruled out as the effects of chance. However, there is no magic cutoff; ceteris paribus, the clearer the pattern, the fewer the cases needed to figure out that what you see is probably not due to chance or to reject a hypothesis as false. Sometimes, an N of 1 will work if the hypothesis is deterministic rather than probabilistic in form. If you posit that A always implies B and you find A without finding B, then the hypothesis is contradicted by the evidence.Footnote 27
10E. Sometimes the qualitative-quantitative divide is defined in terms of concepts that are supposedly “inherently” qualitative because they involve the attribution of meaning to social constructs. In my view, there are no inherently qualitative or quantitative concepts; there are only issues of level of measurement and of which tools we use to generate data. Consider, for example, social identity, considered to involve perceptions of self (in particular, regarding oneself as a member of a particular group and identifying with the collective interests of the group). Are political scientists who study party identification, which is defined by Campbell et al. (1960) as a form of social identity, socially transmitted across generations, and—like other social identities, such as religion—relatively fixed, doing qualitative work? If not, why not? Does the answer change if we observe that many studies of voting behavior use open-ended survey questions that then are recoded? Does the answer change if we observe that partisan identification is normally coded as a seven-point ordinal scale? What about quantitatively oriented research that addresses the question of whether party identification has the same “meaning” in different countries (see, e.g., various essays in Bartle and Bellucci 2009)? Is it qualitative because it is concerned with meaning? Or, consider norms, another area often singled out as uniquely qualitative. Is the work of Bicchieri (2006) and Axelrod’s classic essay (1986) on “meta-norms” somehow not about norms because they use formal models to study the properties of norms or the stability of norms? Is the game-theory-inspired work of Chwe (2001) on “rituals” somehow not kosher because it does not rely on thick description?
10F. Sometimes the qualitative-quantitative distinction is drawn in terms of a whole slew of expectations about what kinds of models are needed to make sense of the world, of which perhaps the most important is that qualitative scholars recognize that different mechanisms may be at play in different settings.Footnote 28 Bennett and Elman (2006, 455), for example, asserted that “qualitative methodologists tend to believe that the social world is complex, characterized by path dependence, tipping points, interaction effects, strategic interaction, two-directional causality or feedback loops, and equifinality (many different paths to the same outcome) or multifinality (many different outcomes from the same value of an independent variable, depending on context).”Footnote 29 I think it is fair to interpret the implication of these remarks as being that quantitatively oriented political scientists do not share these views about social reality. My response: “Any researcher with any sense agrees with you. Now let’s see how well some particular piece of research lives up to these standards of sophistication.” Any scholar with any sense who is concerned about generality tries to formulate explanatory mechanisms and models in a way that will maximize the scope of their applicability, recognizing that the same mechanisms can produce different outcomes in different contexts.Footnote 30 Good quantitative work is sensitive to possible interaction effects.Footnote 31 Good quantitative work is sensitive to possible two-way causality.Footnote 32 Good formal modeling is sensitive to the possibility of tipping-point effects.Footnote 33 Good quantitative work is sensitive to case-selection issues.Footnote 34 Similarly, path dependence is inherent in most game-theory models (in extensive form), where what branch of the game tree has already been chosen conditions what outcomes are feasible (see, e.g., Brams 1975). Etc.Footnote 35
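As a purely illustrative sketch of the path-dependence remark—a toy game of my own invention, not any particular published model—consider an extensive-form game represented as a nested tree in Python: once a branch at an earlier node has been chosen, only the outcomes in that subtree remain feasible.

```python
# A toy two-player extensive-form game, purely illustrative.
# Player 1 moves first, then Player 2; leaves are (P1, P2) payoffs.
game_tree = {
    "reform": {          # Player 1 chooses institutional reform
        "cooperate": (3, 3),
        "defect": (0, 4),
    },
    "status_quo": {      # Player 1 keeps the status quo
        "cooperate": (2, 2),
        "defect": (1, 1),
    },
}

def feasible_outcomes(node):
    """Collect all terminal payoffs reachable below a node of the tree."""
    if isinstance(node, tuple):              # a leaf: a payoff pair
        return [node]
    outcomes = []
    for child in node.values():
        outcomes.extend(feasible_outcomes(child))
    return outcomes

def after_history(tree, history):
    """Follow the moves already played and return the remaining subtree."""
    node = tree
    for move in history:
        node = node[move]
    return node

# Before any move, all four outcomes are still on the table; once
# Player 1 has chosen "reform", the status-quo payoffs are infeasible.
print(feasible_outcomes(after_history(game_tree, [])))
print(feasible_outcomes(after_history(game_tree, ["reform"])))
```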
10G. Sometimes the qualitative-quantitative distinction is taken to be more epistemological, with qualitative researchers supposedly uninterested in (or skeptical about the feasibility of) empirical generalizations that apply widely or, perhaps, even uninterested in explanation or causality per se but instead concerned with developing detailed knowledge and insight about particular cases.Footnote 36 If that is the distinction, I do not have any problem with recognizing it as one that can meaningfully be drawn, but I am highly suspicious that there are that many researchers who eschew all claims to generality or explanatory power.Footnote 37 Also, if a leading scholar of Absurdistan said that he had identified mechanisms that perfectly explained that country’s behavior—but, of course, they applied nowhere else except AbsurdistanFootnote 38—it is unlikely that he would be nominated for the Skytte Prize.Footnote 39
10H. Only Americanists have been allowed to regularly get away with treating their case as sui generis, although the claim for “my country” exceptionalism seems to be pervasive.Footnote 40 Personally, I adhere to David Easton’s dictum from almost a half-century ago that “All political science is/needs to be comparative politics” (Easton, personal communication, 1968). I find compelling a line from Rudyard Kipling that is frequently quoted by my colleague Russ Dalton: “And what should they know of England who only England know?”Footnote 41 However, on the flip side, there is one commonsense piece of methodological advice for every graduate student who wants to be a comparativist, one that works regardless of their epistemological proclivities, namely: become a real expert on at least one case.Footnote 42 That way, you can test the generalities from larger-N studies or from experiments or from formal models or from other case studies to determine whether they really make sense. However, I also emphasize a point that I made in earlier work (quoted in Grofman 1999) regarding how we should think about “comparative” politics in terms of the “TNT principle” that I see as defining comparative politics, namely, “Comparison across Time, Nations, or Types of institutions, persons, or processes.” In that framework, within-nation studies also can be comparative. Consider, for example, Posner’s (2007) work on when (a limited number of) linguistic cleavages or (a larger number of) tribal cleavages will form the basis of party competition in deeply divided societies. He showed that in Zambia, a shift from multiparty competition to single-party rule back to multiparty competition shifted incentives for campaigning strategies from linguistic to tribal and then back to linguistic (see also Posner 2005).
11. Mixed methods that combine large-N studies with case methods are a current methodological panacea and, subject to the previous caveats, one to which I am quite sympathetic. Bennett and Elman (2006, 458) are clearly correct that “even when there are enough observations to allow statistical analysis, conducting in-depth case studies can still offer separate inferential advantages.” My main note of caution is simply that whereas combining knowledge and insight derived from case studies and/or experiments with larger-N analyses—to rule out spurious relationships and get at mechanisms and causality—is (and always was) the best way to go, doing so is not at all easy.Footnote 43 Moreover, mixed-methods training is sometimes merely a way to learn to do two or more methods badly rather than one very well, perhaps taught by someone who is more interested in convincing students of the existence of flaws in a disfavored method (or set of methods) than in teaching them how to use it as well as might be possible.
12. A different way to sidestep qualitative-quantitative debates was offered by Grofman (2001). He suggested that we think of political science as involving three different types of puzzle solving—who-dunnits, how-dunnits, and why-dunnits—with a focus on particular situations to be analyzed and competing hypotheses or models. The first calls attention to competing notions about explanatory factors; the second, to a search for how particular factors achieve their effects; and the third, to explanations that are rooted in beliefs and values. However, these questions cut across the more usual qualitative-quantitative distinctions discussed previously.Footnote 44 In this vision, let whoever has the best answer to a particular empirical or theoretical puzzle catch the gold ring.Footnote 45
13. A far more useful distinction than the qualitative-quantitative one is between good work and not-so-good work, a distinction that—as far as I can assess—is very close to orthogonal to the qualitative-quantitative divide. The first rule of research is simple: “98.6%Footnote 46 of everything is crap.”Footnote 47 However, no study—no matter how good—is perfect. The second rule of research is: “No single study can address, much less answer, all questions.” The third rule of research is: “No single study can answer even one question definitively.”
14. The fourth rule of research is equally simple: “It is far easier to criticize than it is to do better.”Footnote 48
15. The search for a single master cause or master mechanism, of course, is silly. However, the number of trees sacrificed to debates about whether nations act the way they do because actors within them are “really” pursuing national interest (however defined), as opposed to “really” responding to an evolving international-norms regime, as opposed to “really” engaging in cooperative exchange behavior, perhaps under the shadow of a hegemon, is inherently very amusing to those of us who are not international-relations theorists—and therefore such debates are not to be totally discouraged. Luckily, however, most good international-relations scholars do not waste their time in such debates but instead focus their brain power on the very difficult but quite intriguing set of questions about which factors will matter more in which contexts.Footnote 49
16. Correlation is not causation but it is a hell of a lot easier to report correlations than to plausibly demonstrate causation; therefore, correlations in political science are not going away any time soon.
17. The January 2014 issue of PS: Political Science and Politics included a debate about whether journals should require, after acceptance for publication, that datasets, codebooks, and perhaps also a “diary” be filed with articles in sufficient detail to allow determination of exactly how the reported analyses were done. Although the existence of “easy”-to-access datasets for reanalysis will divert certain students from doing some hard thinking on their own, and it will impose numerous costs on those who must prepare the materials from their publications for archiving and some nontrivial costs on the journals that store the materials, I think the benefits outweigh the costs. I view archived data as a public good.Footnote 50 I also am sympathetic to the idea that empirically oriented graduate students who claim statistical skills should not graduate until they show the ability to replicate (and critique) one major published study.Footnote 51 The other benefit is to the honesty of the profession. The only way to really know for sure whether to believe a published quantitative analysis is to reanalyze the data yourself or have a disinterested but inherently skeptical party do it.Footnote 52 There are too many ways that data can be “massaged” and methods tweaked.Footnote 53 Thus, in honor of the Founding Fathers, I suggest this motto: “No reputation without replication.”Footnote 54
18. To publish empirical work, it is probably sufficient to be NICE—that is, to satisfy all four of the following conditions, or at least most of themFootnote 55:
New data and/or New findings and/or New theory and/or New methodology
Important and/or Interesting question(s)
Clearly written discussion
Evidence that is credible
However, whereas being NICEFootnote 56 almost certainly will guarantee publication somewhere, being NICE does not guarantee being published in a top journal. Accomplishing that is much more of a crapshoot.Footnote 57
19. Reality is like an unbelievably enormous jigsaw puzzle: if, as a social scientist, you fill in one itsy-bitsy piece before you die, then you will have done far more than most.Footnote 58
ACKNOWLEDGMENTS
Wuffle is indebted to Diana Kapizewski and Gary King for helpful suggestions on an earlier draft of this article and to two reviewers for useful comments, some of which are incorporated in the present version. However, perceived errors remaining in the article may best be attributed solely to the reader.