Social scientists have long debated how closely their research methods resemble the pure, classical ideal: sequentially (and absent pejorative “data snooping”) formulate a theory; develop empirically falsifiable hypotheses; collect data; and conduct the appropriate statistical tests. Meanwhile, in private quarters, they acknowledge that actual research seldom proceeds in this fashion. Rather, in practice, the world is observed, data are collected, hypotheses are formed, tests are conducted, more data are collected, and hypotheses are revised (e.g., the classic Bernal 1974). Results are assembled by the field's scientists into a body of knowledge that defines “known science” and sets the accepted boundaries for future research. Kuhn (1970) labeled this a paradigm. Occasionally, he argued, results appear that lie outside the bounds of the extant paradigm—and then innovation occurs.
The articles included in this issue's symposium discuss “registration” of empirical studies. The purpose is to reduce “publication bias”—that is, to prevent the scientific equivalent of schoolboy cheating: reporting tests of hypotheses that became evident only after the data were in hand. Registration has little power against the kind of outright research fraud that is discovered from time to time. There are costs, however, when scientists operating within an accepted paradigm discourage researchers from exploring and reporting all relationships and correlations in a data set.