Spatial Voting Meets Spatial Policy Positions: An Experimental Appraisal


We develop and validate a novel experimental design that builds a bridge between experimental research on the theory of spatial voting and the literature on measuring policy positions from text. Our design utilizes established text-scaling techniques and their corresponding coding schemes to communicate candidates’ numerical policy positions via verbal policy statements. This design allows researchers to investigate the relationship between candidates’ policy stances and voter choice in a purely text-based context. We validate our approach with an online survey experiment. Our results generalize previous findings in the literature and show that proximity considerations are empirically prevalent in purely text-based issue framing scenarios. The design we develop is broad and portable, and we discuss how it adds to current experimental designs, as well as suggest several implications and possible routes for future research.


We thank Michael Bechtel, Sacha Kapoor, Vitalie Spinu, Alexander Wagner, the associate editor Kenneth Benoit, and three anonymous referees for helpful comments and suggestions. Youri Rijkhoff provided excellent research assistance. Granic gratefully acknowledges financial support from the Netherlands Organisation for Scientific Research through VIDI project 452-13-013. Replication files are available at the American Political Science Review Dataverse:


Recent developments in experimental research on the theory of spatial voting have deepened our understanding of how candidates’ policy positions translate into voting behavior (Claassen 2007, 2009; Tomz and van Houweling 2008, 2009). By virtue of combining formal modeling with suitable experimental designs, these studies contribute important insights to a lively debate about the shape and form of voters’ judgment on candidates’ policy stances. One of their main findings is that proximity considerations—voters preferring candidates closer to themselves in a policy space—outweigh non-proximity considerations; the extent to which this holds true varies with demographic variables and the policy domain considered.1

Current experimental designs typically represent candidates as numerical points on line-resembling scales.2 We argue that these designs limit the generalizability of their conclusions in two ways. First, widespread evidence documents that preferences can be influenced by the mode of thinking induced by elicitation methods (Lichtenstein and Slovic 1971). The question format’s narrow spatial framing may thus be leading and suggestive, thereby artificially inflating (certain) spatial considerations. Second, the question format deviates from the way in which political actors typically communicate their standpoints on policy issues via public speeches or writings. Whether, and how, voters take spatial considerations into account thus depends on their cognitive ability to transform speech or text into numerical values on the policy scales.

To overcome these problems, we propose to augment current experimental designs with text-based, objectively verifiable issue positions that acknowledge voters’ cognitive realities. Such a design provides a critical stress test of whether proximity models provide an effective approximation of voting behavior in purely issue-oriented elections. The literature on measuring policy positions offers the appropriate tools for this endeavor (an overview is provided in Laver 2014). In particular, text-based scaling methods exist that convert the content of political texts into numerical policy stances (Benoit et al. 2012; Lowe et al. 2011). Each scaling method is accompanied by a coding scheme to classify text units. The classified text is then scaled by various means to locate candidates’ numerical policy positions on latent policy dimensions.

These methods can be used to represent candidates’ policy positions as text statements in experiments through a reverse-engineering process. The researcher first constructs a relevant spatial distribution of candidates in a given policy space and then inverts the text-scaling methods to yield political texts compatible with the original spatial distribution. The advantages of such an approach are evident. For one, it gives rise to ex-ante, theoretically justified numerical representations of candidates’ policy stances in a spatial-free context. Furthermore, it acknowledges a natural cognitive aspect of issue voting. Political text is everywhere. From social media platforms to candidates’ personal websites to voting advice applications, political actors use policy statements that are similar to those offered in the coding schemes of scaling methods to interact with voters.

We designed and carried out an internet experiment on a general US population to demonstrate the validity of our proposed approach. Our design mimics the status quo of experimental research on spatial voting with one important exception: candidates are represented by text statements that follow recommendations laid out in the existing literature on how to scale policy positions from text (Benoit et al. 2012; Lowe et al. 2011). We find that between 72% and 76% of voters cast votes in accordance with proximity considerations. More precisely, these voters cast votes that minimize the distance between their own issue stance and the text-based, theoretically calculated candidate position on a left-right economic policy dimension. We further find average voters’ assessments of candidates to be in line with theoretically calculated policy positions. The mean absolute deviation between the average voters’ assessment and their theoretical stance is 0.34 measured on an 11-point scale. In other words, voters seem capable enough to accurately transform unambiguous political texts into numerical stances. We also find proximity considerations to be more prevalent in voters with political experience, proxied by political platform membership and past participation in elections. These results are compatible with recent categorization-based models of spatial voting that predict prevalence of proximity preferences as voters gain political experience (Collins 2011).

In summary, we propose and empirically validate a fruitful interplay between experimental research on the theory of spatial voting and the literature on measuring policy positions. Our design serves as a blueprint for testing the generalizability of established experimental results concerning the theory of spatial voting. In the next section we present our experimental set-up and review the text-based scaling methods that are appropriate for designing a critical stress test of proximity voting. Finally, we discuss our results and conclude with implications for current and future research.


We designed an internet survey experiment and recruited 401 participants from a general US population via the research platform Prolific.3 Detailed descriptive statistics of the sample can be found in Table A.1 in the online supplementary materials.4 Our experiment presented participants with a presidential election scenario involving three candidates. To minimize the influence of non-issue considerations, the candidates were labeled neutrally and referred to as A, B, and C throughout the experiment. Each candidate was represented by five statements mainly concerning the economic policy that the candidate would implement if elected.5 The statements were based on examples laid out in the codebook of the Manifesto Project, which estimates policy positions derived from content analysis of electoral manifestos (Budge et al. 2001; Klingemann et al. 2006; Volkens et al. 2013). The codebook provides coding instructions to categorize each statement of a political text as a reference to the political left, the political right, or as an unrelated or neutral reference (Werner, Lacewell, and Volkens 2015).

Our three candidates were designated as follows: L was left-wing, M was centrist, and R was right-wing. L made three ‘left’ statements, one ‘neutral’ statement, and one ‘right’ statement. The centrist candidate M was represented by five neutral statements. R made three right, one neutral, and one left statement. Of note, in the experiment, candidate-labels A, B, and C were randomly assigned to candidates L, M, and R to minimize any labeling effects.

Our composition of left, neutral, and right statements created the necessary conditions for a stringent test of proximity considerations. That is, participants were required to interpret a variety of text-statements in order to understand each candidate’s nuanced policy stance. The statements were chosen to represent moderate and credible stances—avoiding topics publicly debated at the time of the experiment—related to the policy dimension of state involvement in the economy, i.e., expanding versus reducing the active role of the government in the economy (Benoit and Laver 2007; Lowe et al. 2011). The exact statements and the corresponding coding categories are presented in Table A.2 in the supplementary online materials.

Participants were first shown the description of the three candidates and then asked to rate each on a 100-point thermometer rating scale. The question wording and display format were taken from the NES, with ratings between 0° and 50° expressing unfavorable feelings toward a candidate and ratings between 50° and 100° expressing favorable feelings. The thermometer ratings provided us with a first indication of voters’ preferences in a spatial-free context.

We used the thermometer ratings to create critical voting conditions. Specifically, following the thermometer rating questions, each voter was asked to imagine that they were about to cast a vote in the election. For reasons not further specified in the instructions, only two of the three candidates decided to run for office. The candidate who dropped out of the race was the one that the voter evaluated most favorably with her or his thermometer rating.6 This procedure forced every participant to make a compromising choice and ensured that ‘proximity’ voters had to develop a complete spatial representation over the full candidate-set. In this sense, our critical voting conditions constituted a stringent test of proximity considerations. Participants were then presented with the aforementioned voting scenario, casting their vote in a two-candidate race.
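As a minimal sketch, the candidate-elimination step can be expressed as follows. The function name and the tie-breaking behavior are our own illustrative choices, not details taken from the experiment.

```python
def critical_race(thermometer):
    """Return the two-candidate race after the voter's favorite drops out.

    `thermometer` maps candidate labels to 0-100 thermometer ratings.
    Ties for the top rating are broken by insertion order here; the
    experiment's tie-breaking rule is not specified in this section.
    """
    favorite = max(thermometer, key=thermometer.get)
    return [c for c in thermometer if c != favorite]

# A voter who rates B highest is forced to choose between A and C.
print(critical_race({"A": 35, "B": 80, "C": 55}))  # → ['A', 'C']
```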

We next asked participants to place themselves and the candidates on a left-right economic policy dimension. Using the question wording and response format from the Chapel Hill Expert Survey (CHES) 2010 (Bakker et al. 2012), we explained that candidates on the economic left wanted government to play an active role in the economy whereas candidates on the economic right emphasized a reduced economic role for the government. Voters then placed themselves and each candidate on this 11-point scale with 0, 5, and 10 representing the far-left, the center, and the far-right, respectively. We deemed the CHES 2010 question appropriate due to its empirical validity and immediate connection to the economic policy statements we used to describe candidates.

The experiment was administered in three different treatments to stringently test the empirical validity of our proposed methodological crossover. In the baseline version (N = 204 participants), respondents went through the survey questions in one sitting, in the same order as described above. In the delayed version (N = 89), we separated the self-placement of respondents and their decision to vote by approximately seven days via a two-wave design. In wave 1, participants saw the same survey as in baseline except for the voting scenario. In wave 2, participants saw the candidate description once more and were then presented with the voting scenario. This design allowed us to test the robustness of our findings with regard to the temporal stability of proximity considerations.7

Both the baseline and the delayed versions employed slider measures as the response format. Slider measures clearly resemble a one-dimensional policy space, which may induce spatial considerations on their own. Our final version, the text-input version (N = 100), therefore replaced the slider measures with conventional text-input boxes. Otherwise, the text-input version was identical to the baseline version, i.e., participants went through the survey questions in one sitting in the same order as in the baseline version. Screenshots of the decision screens for each question type can be found in the online supplementary materials.

The experiment concluded with a set of socio-demographic questions. Data was collected in a time-frame of about three weeks, beginning at the end of October, 2017, and wrapping up mid-November, 2017. The baseline version and wave one of the delay version were launched simultaneously after data collection for the text-input version had been completed.


We borrowed from the literature on measuring policy positions to transform text statements into a numerical scale and adopted the RILE score formula—and crucial modifications to it—to scale the right-left ideological economic-policy position of our candidates on the basis of the statements they make.8 We considered three scales currently applied in the literature (Budge et al. 2001; Kim and Fording 2002; Lowe et al. 2011): the unconditional or raw RILE, which measures the relative frequency of right statements in relation to the relative frequency of left statements; the conditional RILE, which discards neutral references from the calculation; and the empirical Logit Scale of Position, which measures the relative balance between left and right statements. Let Ls and Rs denote the absolute number of left and right statements of a political text within a fixed multi-category policy dimension, and let S denote the total number of statements. The different measures thus take the following form:

$$\mathrm{RILE}_{\mathrm{Raw}} = \frac{Rs - Ls}{S}, \qquad \mathrm{RILE}_{\mathrm{Conditional}} = \frac{Rs - Ls}{Rs + Ls}, \qquad \mathrm{Logit\ Scale} = \log\left(\frac{Rs + 0.5}{Ls + 0.5}\right).$$

The theoretically calculated candidate positions were then linearly projected onto the left-right economic policy dimension, thereby obtaining candidate and voter stances in the same policy space. Table 1 presents the theoretically calculated candidate scores. Our main measure of proximity considerations was the distance-minimizing vote, i.e., a vote that minimizes the distance between the voter’s self-placement and the theoretically calculated candidate scores on the economic left-right dimension.
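As a sketch, the three scales can be computed for our candidates (L: three left, one neutral, one right statement; M: five neutral; R: three right, one neutral, one left). The projection function below, which anchors the most extreme attainable logit score at the scale endpoints, is our assumption about how the linear mapping might work; it reproduces the Logit-based positions of 3.23, 5.00, and 6.77 discussed in the results. The handling of the purely neutral candidate under the conditional RILE (where the formula yields 0/0) is likewise our own convention.

```python
import math

def rile_raw(rs, ls, s):
    """Raw RILE: relative frequency of right minus left statements."""
    return (rs - ls) / s

def rile_conditional(rs, ls):
    """Conditional RILE: neutral references discarded. Returns 0.0 for a
    purely neutral text (our convention; 0/0 is undefined in the formula)."""
    return (rs - ls) / (rs + ls) if rs + ls else 0.0

def logit_scale(rs, ls):
    """Empirical logit scale of position (Lowe et al. 2011)."""
    return math.log((rs + 0.5) / (ls + 0.5))

def project_logit(theta, s=5):
    """Linearly map a logit score onto the 0-10 CHES scale, centered at 5.
    Anchoring at the extreme score log((s + 0.5) / 0.5) is our assumption."""
    return 5 + 5 * theta / math.log((s + 0.5) / 0.5)

# (right, left, total) statement counts per candidate
candidates = {"L": (1, 3, 5), "M": (0, 0, 5), "R": (3, 1, 5)}

for name, (rs, ls, s) in candidates.items():
    print(f"{name}: raw={rile_raw(rs, ls, s):+.2f} "
          f"cond={rile_conditional(rs, ls):+.2f} "
          f"logit→CHES={project_logit(logit_scale(rs, ls)):.2f}")
```

Running this prints, for instance, a projected Logit position of 3.23 for L and 6.77 for R, matching the values reported in the results section.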

TABLE 1. Theoretical Candidate Scores and Their Projections on the 11-Point Economic Policy Dimension.


We begin our analysis by discussing voters’ self-placement and candidate-placements on the CHES 2010 economic left-right scale, ranging from 0 (left) to 10 (right). Figure 1 presents the corresponding results. Participants placed themselves at the center-left, with an average score of 4.53 (median 4). The designated right-wing candidate R was on average placed at 6.89 (median 7), the center candidate M received an average placement of 4.33 (median 4), and the designated left-wing candidate L was on average placed at 3.47 (median 3). Comparing the average and median candidate-placements to the theoretical ones obtained through the Logit and RILE scales reveals a high degree of congruency between voters’ perceptions of the candidates and the theoretically calculated stances in the policy space.

FIGURE 1. Box-Plots for Candidates and Self-Placements on the CHES 2010 11-Point Left–Right Economic Policy Scale.

Notches represent non-parametric estimates of 95% confidence intervals for the medians. ‘X’ marks the corresponding means with 95% confidence intervals.

Using the Logit Scale as our benchmark—which has the lowest mean squared error among the three scales we consider—placement of voters differed by 0.12 points (= |6.89 − 6.77|) for R and by 0.24 (= |3.47 − 3.23|) for L. The largest difference of 0.67 (= |4.33 − 5.00|) can be observed for M. Although speculative, one possible explanation is that, in times of polarized debates surrounding the presidency of Donald Trump, centrist positions might have been perceived as anti-incumbent and, therefore, more leftist than they actually were.

These results provide strong evidence that voters are endowed with the cognitive ability to convert unambiguous political text into reasonable numeric policy stances.

We next analyze whether voters are able to use this information to vote for the candidate closest to their own position. We calculate the distance between each participant and each candidate as the absolute distance between her or his self-reported placement and the theoretical placement based on the Logit and RILE scales. Table 2 presents the absolute and relative frequencies of participants who voted for the candidate with minimum distance.
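The classification of a vote as distance-minimizing can be sketched as follows; the function name, the tie convention, and the toy positions are our own illustrative choices.

```python
def is_distance_minimizing(self_pos, positions, vote):
    """True if `vote` went to (one of) the closest available candidate(s).

    `self_pos`: the voter's self-placement on the 0-10 CHES scale.
    `positions`: theoretical positions of the candidates in the race.
    Ties in distance count as distance-minimizing here (our convention).
    """
    distances = {c: abs(self_pos - p) for c, p in positions.items()}
    return distances[vote] == min(distances.values())

# A voter at 4 choosing between L (3.23) and R (6.77) under the Logit scale:
race = {"L": 3.23, "R": 6.77}
print(is_distance_minimizing(4, race, "L"))  # → True: L is closer
```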

TABLE 2. Absolute (N) and Relative Frequency (N%) of Participants Who Cast Distance-Minimizing Votes.

Distance was calculated between voters' self-reported stances on the CHES left-right scale and the theoretically calculated Logit and RILE measures for the candidates. The ‘Expected’ relative frequency was calculated under the assumption of uniform-random behavior.

*** Signifies significance of exact binomial tests on the equality of observed and expected relative frequencies at the 1% level (all p-values were adjusted according to Holm–Bonferroni).

To account for the possibility of errors, we compare these values to the expected relative frequencies obtained under uniform-random behavior, i.e., random voting and self-placement behavior. Across all three treatments and all three scales, the share of participants casting distance-minimizing votes ranged from 70% (Logit Scale, Baseline) to almost 79% (Raw RILE, Delay). Using an exact binomial test, we reject the null hypothesis of uniform-random behavior for each treatment and each scale at the 1% level (applying the Holm–Bonferroni correction to account for family-wise error rates). These results validate the empirical relevance of proximity considerations in a purely text-based framing of issue positions and generalize previous findings in the existing literature. Instructively, but also coincidentally, our estimate of proximity considerations is in line with the combined proximity and discounted-proximity estimates—between which our design cannot discriminate—of Tomz and van Houweling (2008). We also calculated the post-hoc achieved power for each statistical test reported in Table 2. Setting alpha at the conventional level of 5%, the smallest achieved post-hoc power over all tests was 98%. We also computed the a-priori minimum sample size required to detect our observed effect sizes, setting alpha and beta to the conventional levels of 5% and 20% (= 80% power), respectively. For all tests, actual sample sizes exceeded the required minimum sample sizes by a factor of at least 2.4 (i.e., actual sample size > 2.4 × required sample size).
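The exact binomial test and the Holm–Bonferroni step-down adjustment can be sketched with the standard library alone. The counts and the null rate of 0.5 below are illustrative placeholders, not the paper's actual cell values or its expected frequencies under uniform-random behavior.

```python
from math import comb

def binom_sf(k, n, p0):
    """Exact one-sided binomial test: P(X >= k) for X ~ Binomial(n, p0)."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

def holm_bonferroni(pvals):
    """Holm's step-down adjusted p-values (monotone, capped at 1)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running_max = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        # Multiply the rank-th smallest p-value by (m - rank) and enforce
        # monotonicity by carrying the running maximum forward.
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Illustrative: 143 of 204 distance-minimizing votes against a
# hypothetical null rate of 0.5 under uniform-random behavior.
p = binom_sf(143, 204, 0.5)
print(p < 0.01)  # → True
```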

We conclude our analysis with an investigation of antecedent factors of proximity considerations. So far, we have used cognitive ability as a catch-all phrase but have not discussed the potential mechanism relating cognitive ability to spatial preferences. One possibility is formalized in the theory of categorization-based spatial voting (Collins 2011). This theory posits that voters categorize candidates and have preferences over categories. As political experience grows, voters build finer and more distinct categories; the finer the categories, the closer their preferences resemble proximity considerations.

This theory draws on well-established findings showing that cognitive abilities relating to categorization are not fixed and can be improved through practice and, hence, experience. We consider two proxies for political experience: political platform membership and participation in previous elections. We calculate the average marginal effect of platform membership and previous participation on the probability of casting distance-minimizing votes, based on probit estimations. In accordance with categorization-based spatial voting, platform membership and previous participation are associated with an 11.3 and 11.7 percentage-point increase, respectively, in the probability of casting distance-minimizing votes (p-values of 0.015 and 0.064). This interpretation warrants caution, however: the observed associations cannot be interpreted as causal effects because voters with more consistent proximity preferences could also simply self-select into higher rates of political participation.9 Nevertheless, we posit that our results are compatible with the view that political experience affects how voters judge the policy stances of candidates.


We build a bridge between experimental research on spatial voting and the literature on measuring policy positions to increase our confidence in the conclusions drawn from the former. We demonstrate the feasibility and fruitfulness of this approach via an internet survey experiment. Our experimental results generalize previous findings, showing that proximity considerations are empirically prevalent within a purely text-based framing of issue positions in the policy domain we study. We further identify political experience as a vital mechanism underlying proximity considerations.

Beyond our specific observations, we extend a call for future research to test the generalizability of experimental results. The route should be one of systematically relaxing theoretical assumptions to create a more realistic and ecologically valid testing design. Our experimental design is portable and adaptive, and serves as a blueprint for experimental research on spatial voting. By applying this design, candidates’ issue positions can easily be constructed on the spot and tailored to the needs of specific research questions. For example, as in Tomz and van Houweling (2008), they can be constructed on the basis of voter input to tease out different forms of spatial considerations.

Extensions to multi-dimensional issue spaces or text ambiguity and uncertainty pose no difficulty as appropriate techniques are readily available in the vast literature on measuring policy positions (Benoit, Mikhaylov, and Laver 2009; Lowe et al. 2011; Slapin and Proksch 2008). It is even conceivable to extend the idea of measuring voters’ positions in a spatial-free context by allowing them to describe their policy stance in terms of pre-defined statements. We hope that the flexibility and enormous potential of our proposed methodological cross-over are self-evident and that it will inspire future research, across a variety of contexts, to close our knowledge-gap on how voters judge candidates and how they act upon these judgments.


To view supplementary material for this article, please visit

Replication materials can be found on Dataverse at:

1 Spatial models occupy a prominent role in political science as they capture two interdependent decision processes at the heart of democracy: voters’ choices to support candidates representing their interests and candidates’ choices to take a stance on issues appealing to voters (Black 1958; Downs 1957; Enelow and Hinich 1990). We refer to Grofman (2004) for a review of proximity-based spatial models and to Tomz and van Houweling (2008) for a discussion of the different theories and their implications for voter choice.

2 This approach is known as the Formal Theory Approach or FTA (see Morton and Williams 2010, Chapter 6). In an FTA, the experimental design bears as close a resemblance as possible to the formal theories that are tested. The FTA implements fully ideal conditions with the aim of evaluating the extent to which the theories under investigation organize experimental observations. Extrapolating findings from fully ideal conditions that constitute abstract simplifications of the real world, to the intended domain of application, requires a critical assessment of which formal assumptions are likely to be violated in the intended domain, and how these violations matter for the inferences we wish to draw.

3 Prolific is a well-proven research platform that connects researchers with respondents, providing a source of high-quality data (Peer et al. 2017). All participants received monetary compensation in accordance with Prolific’s general terms and conditions, which ensure fair pay.

4 Our sample shows the typical characteristics of a general online panel and is more or less representative of the US adult population. In particular, mid-range income levels, males, and higher educational levels are overrepresented whereas the highest household income level, females, and individuals with lower levels of education are underrepresented. The complete data-set will be made available on the authors’ websites.

5 We have chosen economic policy due to its prominent role in public discourse. See:

6 Such withdrawals are quite common in real elections. For example, after his withdrawal from the campaign, millions of Bernie Sanders supporters in the 2016 US presidential election de facto faced a choice between the center-left democratic party nominee Hillary Clinton and her right-wing republican counterpart Donald Trump.

7 In total, 97 participants were assigned to the delay version, of which eight finished wave 1, but failed to complete wave 2. We therefore obtained 89 complete responses for the delay version and 393 complete responses for the experiment.

8 Saliency theory, the theoretical underpinning that gives rise to the raw RILE score as a scaling method for manifesto coded texts, can be criticized on various grounds (Dolezal et al. 2014; Lowe et al. 2011). We have therefore specifically followed recommendations laid out in the current literature on how to scale a policy position from manifesto coded text that avoids common pitfalls (Benoit et al. 2012; Lowe et al. 2011).


9 We would like to thank an anonymous referee for pointing out that our experiment does not allow us to unequivocally identify the direction of causality. We believe that disentangling the two explanations could be an interesting endeavor. One possible avenue for future research, therefore, could be to randomly assign categorization-learning tasks to politically inexperienced individuals.


Bakker, Ryan, de Vries, Catherine, Edwards, Erica, Hooghe, Liesbet, Jolly, Seth, Marks, Gary, Polk, Jonathan, Rovny, Jan, Steenbergen, Marco, and Vachudova, Milada A. 2012. “Measuring Party Positions in Europe: The Chapel Hill Expert Survey Trend File 1999–2010.” Party Politics 21 (1): 143–52.
Benoit, Kenneth, and Laver, Michael. 2007. “Estimating Party Policy Positions: Comparing Expert Surveys and Hand-Coded Content Analysis.” Electoral Studies 26 (1): 90–107.
Benoit, Kenneth, Laver, Michael, Lowe, Will, and Mikhaylov, Slava. 2012. “How to Scale Coded Text Units without Bias: A Response to Gemenis.” Electoral Studies 31 (3): 605–8.
Benoit, Kenneth, Mikhaylov, Slava, and Laver, Michael. 2009. “Treating Words as Data with Error: Uncertainty in Text Statements of Policy Positions.” American Journal of Political Science 53 (2): 495–513.
Black, Duncan. 1958. The Theory of Committees and Elections. Cambridge, UK: Cambridge University Press.
Budge, Ian, Klingemann, Hans D., Volkens, Andrea, Bara, Judith, and Tannenbaum, Eric. 2001. Mapping Policy Preferences: Estimates for Parties, Electors, and Governments 1945–1998. Oxford: Oxford University Press.
Claassen, Ryan L. 2009. “Direction Versus Proximity: Amassing Experimental Evidence.” American Politics Research 37 (2): 227–53.
Claassen, Ryan L. 2007. “Ideology and Evaluation in an Experimental Setting: Comparing the Proximity and the Directional Models.” Political Research Quarterly 60: 263–74.
Collins, Nathan A. 2011. “Categorization-Based Spatial Voting.” Quarterly Journal of Political Science 5: 357–70.
Dolezal, Martin, Ennser‐Jedenastik, Laurenz, Müller, Wolfgang C., and Winkler, Anna K. 2014. “How Parties Compete for Votes: A Test of Saliency Theory.” European Journal of Political Research 53 (1): 57–76.
Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper and Row.
Enelow, James, and Hinich, Melvin. 1990. Advances in the Spatial Theory of Voting. Cambridge, UK: Cambridge University Press.
Grofman, Bernard. 2004. “Downs and Two-Party Convergence.” Annual Review of Political Science 7: 25–46.
Kim, Heemin, and Fording, Richard C. 2002. “Government Partisanship in Western Democracies, 1945–1998.” European Journal of Political Research 41 (2): 187–206.
Klingemann, Hans D., Volkens, Andrea, Bara, Judith, Budge, Ian, and McDonald, Michael. 2006. Mapping Policy Preferences II: Estimates for Parties, Electors, and Governments in Eastern Europe, European Union and OECD 1990–2003. Oxford: Oxford University Press.
Laver, Michael. 2014. “Measuring Policy Positions in Political Space.” Annual Review of Political Science 17: 207–23.
Lichtenstein, Sarah, and Slovic, Paul. 1971. “Reversal of Preference between Bids and Choices in Gambling Situations.” Journal of Experimental Psychology 89: 46–55.
Lowe, Will, Benoit, Kenneth, Mikhaylov, Slava, and Laver, Michael. 2011. “Scaling Policy Preferences from Coded Political Texts.” Legislative Studies Quarterly 36 (1): 123–55.
Morton, Rebecca B., and Williams, Kenneth C. 2010. Experimental Political Science and the Study of Causality: From Nature to the Lab. New York: Cambridge University Press.
Peer, Eyal, Brandimarte, Laura, Samat, Sonam, and Acquisti, Alessandro. 2017. “Beyond the Turk: Alternative Platforms for Crowdsourcing Behavioral Research.” Journal of Experimental Social Psychology 70: 153–63.
Slapin, Jonathan B., and Proksch, Sven-Oliver. 2008. “A Scaling Model for Estimating Time-Series Party Positions from Texts.” American Journal of Political Science 52 (3): 705–22.
Tomz, Michael, and van Houweling, Robert P. 2008. “Candidate Positioning and Voter Choice.” American Political Science Review 102 (3): 303–18.
Tomz, Michael, and van Houweling, Robert P. 2009. “The Electoral Implications of Candidate Ambiguity.” American Political Science Review 103 (1): 83–98.
Volkens, Andrea, Bara, Judith, Budge, Ian, McDonald, Michael, and Klingemann, Hans D. 2013. Mapping Policy Preferences from Texts III. Statistical Solutions for Manifesto Analysts. Oxford: Oxford University Press.
Werner, Annika, Lacewell, Onawa, and Volkens, Andrea. 2015. “Manifesto Coding Instructions: 5th Fully Revised Edition.” February.