
Proposal: A Political Science Peer Review and Publication Consortium

Published online by Cambridge University Press:  28 October 2020

John Gerring, University of Texas at Austin
Daniel Pemstein, North Dakota State University

Abstract

The traditional process of peer review and publication has come under intense scrutiny in recent years. The time seems propitious for a consideration of alternatives in political science. To that end, we propose a Peer Review and Publication Consortium. The Consortium retains the virtues of the traditional peer-review process (governed by academic journals) while also mitigating some of its vices.

© The Author(s), 2020. Published by Cambridge University Press on behalf of the American Political Science Association

The traditional system of peer review and publication that is centered on academic journals has much to recommend it. However, the system is not without flaws, which we enumerate as follows without further commentary.

1. It is wasteful. Countless hours are spent by authors, editors, and reviewers in the review process, especially when that process involves multiple rounds or multiple journals.

2. It is slow. For those studies that make it from submission to publication in a journal (not necessarily the journal that authors initially submitted to), the process is long.

3. It is adversarial. Authors beg to get in, and editors and reviewers fight them off. Reviews are sometimes terse, impolite, or even offensive. All of this is discouraging for authors and may have a deleterious effect on their overall productivity and well-being—and, by extension, on productivity and well-being in the discipline at large.

4. It is uninformative. Editors are forced to make an up or down decision on every submission, reducing the complexities of research—which are inevitably multidimensional and matters of degree—to a dichotomous outcome.

5. It is expensive and access to published work is restricted. Accordingly, those without a university affiliation are generally unable to access journals, which exist behind a paywall.

6. It is exclusionary. Some unknown quantity of work that might make a contribution to knowledge never makes it into print because journals either will not publish it or authors do not submit it for publication. This is often the case when results confirm accepted wisdom. It also may be the case for studies that are especially innovative insofar as the review process contains a bias in favor of the status quo. Or perhaps novelty is embraced by reviewers but only if it does not threaten existing theoretical frameworks, leading to a superfluity of theories.

7. It is idiosyncratic. The journal review process is subject to the editorial team, or the specific editor who is in charge of a manuscript, and the judgments of two or three chosen reviewers. Studies show (and readers’ experiences can probably confirm) that reviews, and review outcomes, are highly stochastic.

8. It is biased. Reviewers often know the identity of the author whose manuscript they are reviewing—even if the review process is blinded—and this may affect their judgment.

In light of these shortcomings, it seems appropriate to consider available alternatives. To that end, we propose a Peer Review and Publication Consortium. The Consortium retains the virtues of the traditional peer-review process (governed by academic journals) while also mitigating some of its vices. A full version of this proposal, including relevant citations to the literature, is presented as an online appendix to this article and posted online at the Social Science Research Network. (We welcome input!) The following discussion is a brief overview of selected features of the proposal.

ORGANIZATION

This section describes the envisioned organization. Members of the Consortium include all those who submit (authors) or review manuscripts (reviewers). Editorial Teams manage the review of manuscripts. An Oversight Committee oversees the process of peer review and publication. An Ethics Committee sets policies concerning plagiarism, protection of human subjects, confidentiality of authors and reviewers, and other matters. An Executive Council governs the Consortium.

For the Consortium, we envision a relationship with the American Political Science Association (APSA). In this arrangement, the APSA Council might serve a general oversight function while decisions about internal governance would be left to the Consortium. Another option would be to associate with an existing body (e.g., the Consortium on Qualitative Research Methods) or to develop a stand-alone body with its own tax-exempt ID, beholden to no parent organization.

Because Editorial Teams do most of the work of the Consortium, it is important to clarify their composition and function.

EDITORIAL TEAMS

Editorial Teams are formed in an open market with minimal barriers to entry. Any group of political scientists could form a new Editorial Team at any time, subject to vetting by the Consortium, which would run background checks on each proposal to screen out fraudulent Editorial Teams.

Editorial Teams could establish distinctive missions, specifying the type of work they are interested in publishing. Authors who want to publish with a specific Editorial Team would work to achieve those objectives or find another Editorial Team whose desiderata are more closely aligned with their own. Some teams might specialize in particular methods or epistemologies; others might specialize in particular substantive topics or areas of the world. Some teams might offer a venue for exploratory work or work that describes new theoretical frameworks. Others might focus on work with strong internal validity, perhaps with the requirement that studies be preregistered and reviewed before data collection (i.e., registered reports). Some teams might specialize in disciplinary issues, book reviews, reviews of the literature, topics of public interest, or public policy.

There is no limit, in principle, to the number or type of Editorial Teams that might develop, and the size and composition of each Editorial Team presumably would vary with the volume of submissions and the number of discrete areas of expertise that those submissions require. Likewise, we do not think it is necessary to define strict boundaries around what “political science” is or should be. Indeed, the Consortium could easily be extended to include adjacent fields in the social sciences.

Each Editorial Team would select a title, just as journals do. The title would be noted prominently on the front page of every article published under its auspices. We suspect this would provide an identity for each team; however—in contrast to a journal—Editorial Teams would not have distinctive fonts, formatting, or hardbound copies to distribute.

It is difficult to predict how the competition among Editorial Teams might play out. Conceivably, the array of Editorial Teams in the Consortium might appear much like the current system of journals—although we suspect it would be a smaller field.

PROCESS

Under the Consortium, the peer review and publication process would proceed through the following steps.

An author submits a blinded version of her manuscript to the Oversight Committee. She signs a consent form to publish the article if it is not desk-rejected. She identifies five Editorial Teams, in order of preference, as possible destinations for her manuscript.

The Oversight Committee ensures that the materials are anonymized. To check for redundancy, the manuscript is compared to all published papers, books, and articles previously submitted to the Consortium using plagiarism-detection software (e.g., CrossCheck).
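As a rough illustration of how such a redundancy check might operate, the sketch below compares a new submission against previously archived texts using simple sequence similarity. The archive structure, threshold, and use of Python's standard difflib are our assumptions for illustration; a production system would rely on a dedicated service such as CrossCheck.

```python
import difflib

# Hypothetical archive of previously published or submitted texts, keyed by
# manuscript ID; in practice the Consortium would query a dedicated service.
archive = {
    "ms-0001": "text of an earlier submission ...",
    "ms-0002": "text of a published article ...",
}

SIMILARITY_THRESHOLD = 0.85  # assumed flagging cutoff, not from the proposal

def flag_redundant(submission_text: str) -> list[tuple[str, float]]:
    """Return archived manuscripts whose text closely matches the submission."""
    flags = []
    for ms_id, archived_text in archive.items():
        ratio = difflib.SequenceMatcher(None, submission_text, archived_text).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            flags.append((ms_id, ratio))
    return sorted(flags, key=lambda pair: pair[1], reverse=True)

# Usage: any hits are routed to the Oversight Committee for inspection.
print(flag_redundant("text of a brand-new manuscript ..."))
```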

The manuscript is sent to the five Editorial Teams identified by the author, in order of preference. Each team may decide to either send the manuscript out for review or decline. If the latter, the team indicates whether it believes the article should be published at all. If all five teams refuse to review, the manuscript is sent back to the Consortium, which—considering the feedback from the five Editorial Teams—makes a final decision about whether or not to desk-reject it. If the decision is to desk-reject, the author may be given an opportunity to resubmit and guidelines on what is expected in the resubmission. If the decision is to publish, the Oversight Committee designates an Editorial Team to supervise the review process. That team must accept responsibility to oversee the review process, although it need not publish the article under its masthead.
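The routing logic just described can be summarized schematically. The sketch below is our own rendering of the steps in the preceding paragraph; the function and method names (screen, desk_reject, designate_team) are invented for illustration.

```python
from enum import Enum, auto

class TeamDecision(Enum):
    REVIEW = auto()                   # team agrees to send it out for review
    DECLINE_PUBLISHABLE = auto()      # declines, but believes it should be published
    DECLINE_NOT_PUBLISHABLE = auto()  # declines and believes it should not be published

def route_manuscript(manuscript, preferred_teams, oversight):
    """Offer the manuscript to the author's five teams in order of preference."""
    feedback = []
    for team in preferred_teams:          # exactly five, in the author's order
        decision = team.screen(manuscript)
        if decision is TeamDecision.REVIEW:
            return team                   # this team supervises the review
        feedback.append((team, decision))
    # All five teams declined: the Oversight Committee weighs their feedback
    # and makes a final desk-reject decision.
    if oversight.desk_reject(manuscript, feedback):
        return None  # the author may be invited to resubmit with guidance
    # Not desk-rejected: a team is designated to supervise the review,
    # though it need not publish the article under its masthead.
    return oversight.designate_team(manuscript, feedback)
```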

The chosen Editorial Team identifies six reviewers (and backups if any decline), optionally drawing on recommendations provided by the Consortium database and review(er) measurement model (MM). Invitations to reviewers are issued from the main Consortium office, without identification of the Editorial Team that is in charge of the manuscript. (This ensures that reviewers do not tailor their comments and scores to norms and standards perceived to be specific to that team.) The review process is double-blind or triple-blind, according to the policies adopted by the Editorial Team. Reviewers fill out a survey, assigning scores to the manuscript across different dimensions—including style, review of the literature, theoretical coherence, provision of original data, problems of measurement, problems of design, internal validity, external validity, relative validity (i.e., How strong are the claims to validity relative to other studies on the same subject and relative to what might have been accomplished with a reasonable input of time and resources?), novelty, methodological contribution, data transparency, and breadth of appeal. For each question, reviewers specify their level of confidence. They also offer written comments, which should explain their scores and provide suggestions for how the manuscript might be improved.
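To make the structured survey concrete, a minimal data structure for a single review might look like the following sketch. The dimension list comes from the paragraph above; the field names and the 1-7 score and 1-5 confidence scales are our assumptions for illustration.

```python
from dataclasses import dataclass, field

# Scoring dimensions listed in the proposal; identifier spellings are ours.
DIMENSIONS = [
    "style", "literature_review", "theoretical_coherence", "original_data",
    "measurement", "design", "internal_validity", "external_validity",
    "relative_validity", "novelty", "methodological_contribution",
    "data_transparency", "breadth_of_appeal",
]

@dataclass
class Review:
    reviewer_id: str
    manuscript_id: str
    scores: dict[str, int] = field(default_factory=dict)      # assumed 1-7 scale
    confidence: dict[str, int] = field(default_factory=dict)  # assumed 1-5 scale
    comments: str = ""  # written explanation of scores and suggestions

# Usage: a reviewer records one score and one confidence level per dimension.
review = Review(reviewer_id="r-042", manuscript_id="ms-0003")
for dim in DIMENSIONS:
    review.scores[dim] = 5       # placeholder values
    review.confidence[dim] = 4
```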

Reviewers and authors may contact one another at any point in the process to clarify points about the manuscript or the reviews. Communication occurs through an anonymized protocol (à la Craigslist) so that anonymity is preserved. This streamlines the review process, which is constrained to two rounds of review and therefore must come to a conclusion expeditiously.
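One simple way such an anonymized channel might be implemented, sketched below under our own assumptions, is to assign each participant an opaque alias and route messages through a Consortium server so that real identities never appear in the exchange; the class and method names are hypothetical.

```python
import uuid

class AnonymousRelay:
    """Forward messages between participants identified only by opaque aliases."""

    def __init__(self):
        self._alias_to_address = {}  # alias -> real contact address (never exposed)

    def register(self, real_address: str) -> str:
        alias = f"user-{uuid.uuid4().hex[:8]}"
        self._alias_to_address[alias] = real_address
        return alias  # participants see only each other's aliases

    def send(self, from_alias: str, to_alias: str, body: str) -> None:
        recipient = self._alias_to_address[to_alias]
        # A real system would dispatch an email here; we simply print.
        print(f"[to {recipient}] message from {from_alias}:\n{body}")

relay = AnonymousRelay()
author = relay.register("author@example.edu")
reviewer = relay.register("reviewer@example.edu")
relay.send(reviewer, author, "Could you clarify the identification strategy?")
```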

The manuscript may be withdrawn if the author and a majority of reviewers agree. We expect this to occur only in rare cases (e.g., when reviewers notice problems that were not identified in the desk-reject phase).

If the manuscript is not withdrawn, the author is expected to revise in accordance with the reviewers’ comments and scores received and then to resubmit. After this point, no further modifications can be made to the manuscript or the background materials. If background materials are housed separately, they must be submitted with a time-stamp. This eliminates the possibility of slippage between the version that is reviewed and the version that is published.

In the second round, the six reviewers are asked to review the author’s responses and the revised manuscript, along with one another’s scores and comments, and to revise their own comments and scores. They are discouraged from raising new issues—unless they are in response to changes the author made in the revised manuscript—because the author has no further opportunity to revise.

Authors are given one final opportunity to respond to the reviewers’ comments and scores—although they are not permitted to alter the text of their manuscript or background materials.

Three breadth reviewers are chosen by algorithm to assess a single dimension: the manuscript’s breadth of appeal to the discipline. (This dimension also is addressed by the six main reviewers.) The algorithm chooses three people at random whose work falls in different subfields, none of which is the manuscript’s (declared) subfield. They view the final (revised) version of the manuscript and are not privy to the reviews. Presumably, this reviewing task can be accomplished by perusing the manuscript’s abstract, introduction, and conclusion.
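The selection step could be as simple as the following sketch: sample three Consortium members at random, subject to the constraints that each works in a distinct subfield and that none works in the manuscript's declared subfield. The member registry and subfield labels are hypothetical.

```python
import random

# Hypothetical registry of Consortium members and their primary subfields.
members = [
    ("m-01", "comparative"), ("m-02", "political theory"), ("m-03", "ir"),
    ("m-04", "methods"), ("m-05", "american"), ("m-06", "political theory"),
]

def pick_breadth_reviewers(manuscript_subfield: str, n: int = 3) -> list[str]:
    """Randomly pick n members from n distinct subfields, none of them
    the manuscript's declared subfield."""
    eligible = [m for m in members if m[1] != manuscript_subfield]
    random.shuffle(eligible)
    chosen, used = [], set()
    for member_id, subfield in eligible:
        if subfield not in used:
            chosen.append(member_id)
            used.add(subfield)
        if len(chosen) == n:
            break
    return chosen

print(pick_breadth_reviewers("comparative"))  # e.g., ['m-04', 'm-03', 'm-05']
```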

The Editorial Team now makes a final decision about whether to publish the article. If it is rejected, the manuscript (with associated reviews) goes to the next Editorial Team on the author’s list (if that team has not already desk-rejected it) and so on until a team accepts it. If none of the five teams on the author’s list claims the manuscript, it is published without an editorial masthead as a free-standing article in the Consortium. (This is analogous to a book published with a university press outside of any book series.)

The article is copyedited, typeset, and published online under a Creative Commons license. The title of the Editorial Team (if any) appears prominently on the front page of the published article, along with the editor and the team that oversaw the review. Following the article are the products of the peer-review process—the reviewers’ final written comments (anonymous), the author’s final responses to those comments, and the reviewers’ final scores—including the raw scores, adjusted scores produced by an MM designed to anchor scores to a common scale, and confidence intervals around those adjusted scores. If the Editorial Team has chosen to write a review of the article, this also is included.

Post-publication, a digital community space allows discussion of a published article to continue. Here, any member of the Consortium may comment on the article and the author may respond. Authors also may submit corrected versions of their article or background materials. Every entry would be permanent and attached to a unique DOI. Each comment would include the author’s name, institutional affiliation (if any), and email address to discourage scurrilous comments. No editing would be required except (if necessary) to redact inappropriate comments.
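A minimal record for one such entry, using field names of our own invention, might look like the sketch below; the frozen dataclass mirrors the requirement that entries be permanent once posted, and the DOI strings are hypothetical placeholders.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: entries are permanent once posted
class CommunityComment:
    doi: str                 # unique DOI assigned to this entry
    article_doi: str         # the published article the entry attaches to
    author_name: str
    affiliation: str | None  # institutional affiliation, if any
    email: str               # disclosed to discourage scurrilous comments
    body: str
    posted_at: datetime

comment = CommunityComment(
    doi="10.0000/consortium.c-0001",        # hypothetical DOI pattern
    article_doi="10.0000/consortium.a-042",
    author_name="A. Scholar",
    affiliation="Example University",
    email="a.scholar@example.edu",
    body="The robustness check may be sensitive to outliers.",
    posted_at=datetime.now(timezone.utc),
)
print(comment.doi)
```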

When serious mistakes in a published work are discovered, the Editorial Team that supervised the review process decides about retraction or correction. Reviews and scores accompanying the original article may be withdrawn or revised.

We now discuss various aspects of the process.

MEASUREMENT MODEL

A core objective of any system of peer review is to provide clear and unbiased signals to producers and consumers of social science about the quality of work that is published.

Unfortunately, there is only so much that can be done to encourage reviewers to offer unbiased reviews. After all, reviewers have different standards. Moreover, the review process has certain structural features that—at least in some circumstances—inhibit dispassionate deliberation and honest scoring. We suspect that many biases are unconscious.

However, a large pool of reviewers and a structured system of scoring afford the possibility of enlisting methods from the field of measurement to adjust scores so that some biases (namely, those that can be measured) are mitigated. It also allows us to use patterns of ratings to norm scores across reviewers, to measure and adjust for reviewer reliability, and to provide assessments of uncertainty around reviewer evaluations.
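The full measurement model is described in the online version of this article. As a crude illustration of the anchoring idea, the sketch below standardizes each reviewer's raw score against that reviewer's own scoring history, so that the same raw number means different things coming from a harsh versus a lenient reviewer. The data and the normalization scheme are our assumptions, not the Consortium's actual MM.

```python
from statistics import mean, stdev

# Hypothetical history of raw scores (1-7 scale) for each reviewer.
history = {
    "r-01": [2, 3, 3, 4, 2],  # a habitually harsh reviewer
    "r-02": [6, 5, 6, 7, 6],  # a habitually lenient reviewer
}

def adjusted_score(reviewer_id: str, raw: int) -> float:
    """Express a raw score in standard-deviation units relative to the
    reviewer's own history, anchoring all reviewers to a common scale."""
    scores = history[reviewer_id]
    return (raw - mean(scores)) / stdev(scores)

# The same raw numbers read differently once reviewer tendencies are removed:
print(round(adjusted_score("r-01", 4), 2))  # ~1.43: well above this reviewer's norm
print(round(adjusted_score("r-02", 6), 2))  # 0.0: typical for this reviewer
```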

PAYOFFS

We anticipate that the Consortium would produce the following payoffs.

Efficiency

Although we require six reviewers for each submission (plus three reviewers who evaluate only the breadth of a topic), we anticipate a substantial reduction in the overall review burden because each manuscript is reviewed only once rather than repeatedly across multiple journals. The timeline from initial submission to publication also will be much shorter, enhancing the productivity of the discipline.
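A back-of-the-envelope comparison illustrates the efficiency claim. The parameter values below (three reviewers per journal round, one and a half rounds per journal, and three journals visited before acceptance) are our illustrative assumptions, not figures from the proposal.

```python
# Assumed parameters for illustration only; not figures from the proposal.
REVIEWERS_PER_ROUND = 3    # reviewers a typical journal recruits per round
ROUNDS_PER_JOURNAL = 1.5   # some journals add a revise-and-resubmit round
JOURNALS_TRIED = 3         # journals visited before eventual acceptance

traditional = REVIEWERS_PER_ROUND * ROUNDS_PER_JOURNAL * JOURNALS_TRIED
# Consortium: six main reviewers (who handle both rounds) plus three breadth reviewers.
consortium = 6 + 3

print(f"traditional system: ~{traditional:.1f} reviewer engagements")  # ~13.5
print(f"Consortium:          {consortium} reviewer engagements")        # 9
```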

A Cooperative Relationship

The current review system is oriented toward reaching a decision about whether to publish. Editors and reviewers must guard their journal’s reputation by rejecting substandard scholarship, preserving scarce space for only the highest quality work. Top journals in the social sciences reject more than 90% of submissions. Naturally, an adversarial relationship develops in which authors clamor at the gates while editors and reviewers pour hot wax from the ramparts. Under the Consortium, the goal is to publish everything that adds something to the sum-total of human knowledge. Accordingly, the bar for desk-reject is set fairly low and everything sent out for review is published (unless the author and a majority of reviewers decide otherwise). Moreover, there is no obligation for authors to kowtow to reviewer suggestions with which they do not agree. It is the “author’s cut” that appears in print. The role of the review process, therefore, is quite different. Editors and reviewers offer advice to an author so that the manuscript can be improved. Editors and reviewers, who presumably have strong ties to the field or subfield in which the study falls, have an incentive to make each article as strong as it can be so that the field can move forward. Not only is this process more pleasant, it also may have positive repercussions for the quality and quantity of work produced in the discipline.

Volume and Diversity

Lowering barriers to publication presumably stimulates the publication of more studies, about more topics, and with a wider diversity of approaches than currently appear in the annals of political science. First, more studies with unexciting findings would be published, counteracting the current tendency to discriminate against work with null results or results that confirm standard wisdom. This would mitigate the “file-drawer” problem, resulting in a less-biased knowledge base on which to gauge the probable truth of various hypotheses.

Second, more studies with speculative arguments and findings would be published, combating another supposed tendency of the current journal-based system of review, which is alleged to discriminate against unconventional ideas and those that contravene reviewers’ theories. This should make it easier to publish work that is in an early stage of development and this, in turn, should accelerate the progress of science.

Third, more studies of nontraditional subjects might be entertained. As it stands, political science journal editors impose boundaries on what they consider appropriate topics for their journal. If no existing journal considers an author’s topic to fall within its purview, the author is out of luck. Because “politics” is difficult to define, there is a certain arbitrariness to these boundaries, and we sense that they may be excluding some interesting and important work.

Length

Space limitations on what might be considered eligible for publication would presumably be relaxed. In the Consortium, all articles are published online; therefore, arbitrary limits do not arise from the obligation to print and mail hard copies. Copyediting and typesetting are expensive, but these logistical elements of the traditional publication process may be dispensed with, as discussed previously. In any case, the issue of length limits would be left to Editorial Teams, which may choose to impose limits or not. We imagine that teams might take different approaches to this issue, with some being amenable to longer submissions. Some teams might even specialize in book-length publications.

Meritocracy and Internationalization

The current system of journal-centered peer review and publication likely favors established authors and those with ties to top journals. It explicitly favors articles published in top journals, meaning that studies published in lower-ranked journals may not receive the attention they deserve. In making these criticisms, we are not advocating for an egalitarian system in which all articles are assumed to have equal merit. Rather, we are advocating for a system that gives every article and every author an equal opportunity—that is, a publishing meritocracy.

Human Capital

To enhance progress in political science, we must make the best use possible of the limited human capital available. We do not view political science as a discipline in which progress can be achieved by a few geniuses or a small cadre of great minds working at elite institutions. Creativity is difficult to identify, so the more minds we enlist in thinking about a task, the more opportunities there are for fundamental breakthroughs. Likewise, progress is not simply a matter of producing good ideas; those ideas must be tested rigorously, which entails iterated studies in different settings using standardized protocols. Cumulation of knowledge cannot occur without numerous replications. It follows that the machinery of peer review and publication must be oriented toward mobilizing a veritable army of scholars, and we doubt that the current system is up to the task.

The traditional system of journal-centered publishing operates on a guild format, which contributes to an insider–outsider cleavage that undermines the credentials of a supposedly meritocratic discipline and weakens incentives for those on the periphery, who may not have equal access. Likewise, as the field of political science grows and as more academics embrace the goal of research, the system of peer review and publication must be able to accommodate an increasing flow of manuscripts. Most of this growth is likely to occur outside the traditional bastions of North America and Europe.

By contrast, the Consortium—which handles most tasks digitally—should be able to manage the growth and internationalization of political science. New Editorial Teams are easy to form, and their administration and output can be monitored. In this way, there is less risk emanating from “dark corners” of the publishing world.

Dissemination

Under the current system, access to journals is restricted to those with a university affiliation or a good public library. Under the proposed system, access is unrestricted. Being more accessible, the Consortium should enhance the influence of political science, making it a truly public endeavor. It also should help to internationalize the discipline of political science, making publications available to those in poor countries and in areas distant from universities and public libraries. All that is necessary to access materials in the Consortium is an electronic device that connects to the Internet.

Personnel Decisions

The Consortium presumably would have an impact on personnel decisions—hiring, promotion, tenure, and salary. The major difference relative to the current system is that the Consortium offers substantially more information for gatekeepers to consider. The traditional journal-review system offers only one type of information—the reputation of the journal where an article is published. Under the Consortium, committees could similarly judge the reputation of the Editorial Team that publishes each article. They also could examine the scores—raw and adjusted—received by a manuscript along various dimensions. The committees could easily examine where manuscripts and their authors sit within the network of disciplinary communities and topics. Moreover, they could read the reviewers’ reports and the authors’ responses. They also could read the reports issued by the Editorial Team, if any. This should give committees a good sense of candidates’ strengths and weaknesses and their contribution to the field. Impact may be measured by traditional citation counts as well as by page views and downloads from the Consortium system. Committees that want to evaluate candidates’ willingness to provide public goods could examine their reviewing record—that is, the number of reviews they conducted, the number of times they declined an invitation to review, and their reviewer reliability score (see the extended MM discussion in the online version of this article).

Clearly, the Consortium provides substantially more information to gatekeepers attempting to decide whom to hire and promote and how much to compensate them. Importantly, this information is comparable. The MM-adjusted scores for one article can be compared to the scores received by another article, at least at the level of a given scholarly community or topic. Moreover, because written comments from reviewers are obtained in a process that is identical across the Consortium (i.e., reviewers do not know which Editorial Team they are reviewing for), their reviews also are comparable. By contrast, it will never be entirely clear how to weight the value of publications in different journals. Is an American Political Science Review article worth two Political Research Quarterly articles? Or three? These are not answerable questions.

Finally, because manuscripts wend their way through the review process much more quickly in the Consortium than in the traditional journal-centered system, there is a longer record to peruse. A quick publication schedule means that citation counts also accrue quickly: once an article is published, other published articles may cite it within the next year or two. By contrast, in the traditional journal-centered system, it takes several years for a citation record to become meaningful as a measure of impact, so academics at the beginning of their careers lack a track record that can be evaluated.

LEARNING ABOUT THE PRODUCTION OF KNOWLEDGE

There is much we do not know about the production of knowledge in political science. Is there demonstrable scientific progress? Where is progress most marked and across which dimensions? To what extent do different subfields share methodological values, as revealed by their judgments about specific articles? How consequential are the epistemological divides? More generally, how should we understand disagreement among scholars in the evaluation of a study? How frequent is this disagreement and how severe? What type of studies elicits the most agreement or disagreement? Is consensus (or dissensus) increasing over time? The Consortium provides a database with which these—and many other—questions can be evaluated.

SUPPLEMENTARY MATERIALS

To view supplementary material for this article, please visit http://dx.doi.org/10.1017/S104909652000102X.
