Enhancing Electoral Equality: Can Education Compensate for Family Background Differences in Voting Participation?

Abstract

It is well documented that voter turnout is lower among persons who grow up in low socioeconomic status families than among persons from high-status families. This paper examines whether reforms in education can help reduce this gap. We establish causality by exploiting a pilot scheme preceding a large reform of Swedish upper secondary education in the early 1990s, which gave rise to exogenous variation in educational attainment between individuals living in different municipalities or born in different years. Like recent studies employing credible identification strategies, we fail to find a statistically significant average effect of education on political participation. Moving beyond previous studies, however, we show that the reform nevertheless contributed to narrowing the voting gap between individuals of different social backgrounds by raising turnout among those from low socioeconomic status households. The results thus square well with other recent studies arguing that education is particularly important for uplifting politically marginalized groups.

Footnotes

We are grateful for the detailed and helpful comments from Adrian Adermon, Anders Sundell, Pär Nyman, Martin Lundin, four anonymous reviewers and the editor Thomas J. Braeuninger. We also thank participants at presentations at the Institute for Evaluation of Labour Market and Education Policy in Uppsala, the Department of Government in Uppsala, the Toronto Political Behavior Workshop, and the University of Oslo. This project has been financed by IFAU, the Swedish Research Council, and the European Research Council. Information on the specifics regarding how to obtain the data can be found in the appendix.

In a democracy, political participation is the most basic means of voicing political concerns and influencing public policy. It is therefore a problem if groups in society differ in their capacity or willingness to participate in politics: passive groups risk having their interests neglected (Lijphart 1997; Schlozman, Verba, and Brady 2012; Verba, Schlozman, and Brady 1995). Differences in political involvement related to family background are especially problematic because they violate the basic democratic principle of equality of political opportunity. As Robert Putnam (2015) has pointed out, inherited political inequality brings us uncomfortably close to the type of political regimes democratic revolutions once targeted.

Existing research on the association between social origin and political participation indicates that these fears should be taken seriously. Above all, available empirical evidence shows that children with high socioeconomic status (SES) parents are considerably more likely to grow up to become politically active citizens than children from less privileged homes (Cesarini, Johannesson, and Oskarsson 2014; Gidengil, Wass, and Valaste 2016; Lindgren, Oskarsson, and Dawes 2017; Verba, Burns, and Schlozman 2003; Verba, Schlozman, and Brady 1995). This raises the question of how to break this chain and how to alleviate the social gap in political opportunity.

Traditionally, political scientists have placed great hopes in the equalizing potential of improved educational standards (Converse 1972; Nie, Junn, and Stehlik-Barry 1996; Wolfinger and Rosenstone 1980). As Schlozman et al. (2004, 34), for instance, explain,

Since education is such a powerful predictor of political engagement, rising absolute levels of education might be expected to facilitate the political activation of those at the bottom of the SES hierarchy and produce class convergence in participation.

But why should we expect rising levels of educational attainment to reduce class inequalities in political participation? There are two possible explanations for why this could be the case.

First, rising absolute levels of education may contribute to class convergence in participation by reducing the educational differences between individuals of different social backgrounds. Second, even if educational attainment is raised uniformly in all socioeconomic groups, increasing educational standards can improve political equality if the positive effect of education is larger among those brought up in low SES homes.

To study the importance of the latter channel, it is necessary to model the heterogeneity of the effect of education on political participation. However, whereas the issue of heterogeneous effects of education has attracted attention from sociologists and economists in recent years (Brand and Xie 2010; Carneiro, Heckman, and Vytlacil 2011), political science research on this topic is still rather scant. In particular, we are not aware of any previous study that examines how the effect of education on voter turnout varies across SES groups.

One likely reason for this is the methodological challenges associated with this type of analysis. First, obtaining sufficient precision in the estimates for particular subgroups often requires very large samples. Second, the well-known problem that educational choices may be confounded by different pre-adult experiences and predispositions requires that we exploit some form of (plausibly) exogenous variation in educational attainment (Berinsky and Lenz 2011; Kam and Palmer 2008; Persson 2014; Tenn 2007).

We attempt to overcome these challenges by using unique population-wide administrative data from Sweden to examine the impact of a major school reform implemented in the early 1990s on voter turnout among different socioeconomic groups. We combine individual-level turnout information for all eligible voters in the 2010 general election with data on the school reform that lengthened vocational training programs at the upper secondary level from two to three years and added more general theoretical content to the curriculum. An attractive feature of this reform was that it was preceded by an extensive pilot scheme in which the new system was tried out in a number of carefully selected municipalities. There is thus an arguably exogenous variation across regions and over time in the implementation of the reform that can be exploited to identify the effects of interest (Hall 2012).

Simply put, the current study seeks to contribute to the discussion of whether educational reforms can help enhance electoral equality by carefully studying two related research questions. First, is there an average effect of education on voter turnout in the population as a whole? Second, are there heterogeneous treatment effects of education on voter turnout by family background?

Similarly to other recent studies employing credible identification strategies, we fail to find any statistically significant average effect of education on voter turnout. A closer analysis, however, reveals that this average effect conceals important heterogeneities. We find that the education reform led to an increase in voter turnout among individuals from the most disadvantaged homes, but it did not affect the turnout of individuals from more privileged social backgrounds. Consequently, the reform helped reduce the overall voting gap related to family background by raising turnout at the lowest end of the socioeconomic spectrum. These results thus square well with recent research, which shows that the positive effect of civic education on political knowledge and interests mainly benefits politically marginalized groups (Campbell and Niemi 2016; Neundorf, Niemi, and Smets 2016).

EDUCATIONAL REFORMS AND POLITICAL EQUALITY

Students of political socialization have long recognized the important role played by both schools and parents in shaping adolescents’ political attitudes and behavior (e.g., Neundorf and Smets 2017). Schools, on the one hand, have been characterized as places where children learn important participatory skills and abilities, acquire social networks, and internalize the belief that political participation is a civic duty (Verba, Schlozman, and Brady 1995; Wolfinger and Rosenstone 1980). Parents, on the other hand, are assumed to influence the future political activity of their children by passing on their socioeconomic status or by nurturing their children to become politically active citizens (Gidengil, Wass, and Valaste 2016; Neundorf and Smets 2017).

The focus of this study lies in the intersection between these two factors. More precisely, we are interested in testing the hypothesis that reforms aimed at increasing educational opportunities can contribute to narrowing the voting gap between individuals of different social backgrounds. Under what conditions, then, should we expect educational reforms to facilitate class convergence in voting? Clearly, for the voting gap to decrease, the increase in educational attainment induced by the reform must have a greater impact on political participation among those at the bottom of the SES hierarchy. As briefly previewed in the introduction, there are two different possibilities for why this could be the case. 1

The first possibility is what can be referred to as the resource effect, where the reform affects the allocation of education (the resource) between SES groups. Available empirical evidence suggests that both the sign and the magnitude of the resource effect may depend on the type of educational reform being examined. Reforms that lengthen compulsory education, for instance, tend to have a larger effect on the educational attainment of children from low SES homes because these children are less likely to go on to secondary education (Lindgren, Oskarsson, and Dawes 2017). In contrast, Blanden and Machin (2004) found that policies that expanded noncompulsory education in the UK served to widen the educational gap between children from rich and poor backgrounds. Depending on the nature of the reform, the resource effect can therefore contribute to an increase or a decrease in the voting gap.

However, even if both advantaged and disadvantaged SES groups experience an equal increase in educational attainment as a result of the reform, implying that the resource effect is zero, the voting gap could nevertheless change if the impact of education on turnout differs across groups. This is what we characterize as the return effect. If formal education and a stimulating socializing family environment are substitutes in the process of developing the type of skills, interests, and norms conducive to political participation, we should expect there to be a larger effect of education on participation among low SES individuals for a given increase in educational attainment. Or, conversely, if these two factors are complements in the production of political participation, we should expect increased schooling to have a more pronounced effect among individuals from high SES homes (cf., Campbell 2008; Neundorf, Niemi, and Smets 2016). Whereas the focus of this study is on the interplay between education and family background, this basic analytical framework is more generally applicable. The simple resource–return effects distinction can be used to analyze all the factors driving political inequality in a society.

Yet, it is important to note that both the resource and the return effect presuppose that there is a causal impact of education on political participation. However, this assumption has been questioned by a number of methodologically sophisticated studies which argue that the correlation between education and political participation is spurious rather than causal. More specifically, education is said to operate as a proxy for pre-adult experiences and predispositions that are consequential but difficult to observe. According to advocates of this perspective, changes to the education system will therefore do little to reduce political inequality (e.g., Berinsky and Lenz 2011; Dinesen et al. 2016; Kam and Palmer 2008; Persson 2015; Tenn 2007).

In spite of its merits, there are, from our perspective, two limitations in recent research on the education–participation nexus. A first issue pertains to the importance of educational content. One frequently voiced view is that rising levels of education per se are unlikely to spur political engagement, but that it is primarily a “civic or social science curriculum that imparts the skills and resources necessary to be active in the political realm” (Hillygus 2005, 28). Despite decades of research, there still remains great uncertainty both about the participatory effects of education in general and about those of civics studies in particular. Second, with a few exceptions (Campbell and Niemi 2016; Lindgren, Oskarsson, and Dawes 2017; Neundorf, Niemi, and Smets 2016), studies on the causal impact of education on political participation have been mainly concerned with estimating homogeneous treatment effects. The implicit assumption underlying this approach is that education is a standardized commodity that affects all types of individuals similarly. However, if the effect of education varies across groups, the average “treatment” effects that provide the main focus of previous research may conceal as much as they reveal. Most importantly, if the effect of education varies across groups, changes to the educational system may affect the equality of participation even if the average effect is close to zero.

Ultimately, it is an empirical question whether, and if so to what extent, policies designed to increase educational standards can prove effective in mitigating inequality in political participation. But, as should be clear from the discussion above, this is also a very demanding question to answer. First, and most importantly, distinguishing correlation from causation requires access to a policy reform that induces some form of (plausibly) exogenous variation in educational attainment. Second, at least part of the extra time spent in school should be devoted to the study of civics. Finally, to be able to say anything about the relative importance of resource and return effects, we need to study a policy that has a greater impact on educational attainment for some socioeconomic groups than others. In the next section, we argue that a major reform of Swedish upper secondary education meets these requirements and thus offers a suitable testing ground for examining this important issue.

A SUITABLE TEST CASE

Swedish students typically enter the upper secondary school system at age 16 after nine years of compulsory schooling. 2 Although upper secondary education is not mandatory, a majority of students go on to this level (about 85–90 percent of the students during the period under study). Students typically attend an upper secondary school in their municipality of residence. If the desired program is not available, they may attend an upper secondary school in a nearby municipality.

The Swedish upper secondary school system went through a major reform in the beginning of 1990s. Before the reform, students could choose between a number of two-year vocational training or three-year academic programs. The former had a strong focus on preparing students for working life and contained less theoretical study, whereas the latter was intended to prepare the students for higher education at the university level. After the 1991 reform, the length of all vocational training programs was extended to three years. 3 The reform also provided for a stronger theoretical content in the curriculum of these programs. In the pre-reform system, Swedish language studies had been the only mandatory theoretical subject provided in vocational training programs. After the reform, these programs also included English, social science, and an optional theoretical subject (mathematics being the most common choice).

The reform decision was, however, preceded by a pilot scheme in which the new three-year training programs were implemented in some municipalities for evaluation. The pilot scheme was run for four years (1987–90), and by the end of the period it included around 20 percent of the available places on vocational training programs. The municipalities had to apply to participate in the pilot scheme, and the National Board of Education decided which municipalities to include.

When making this decision, the Board took several factors into account. First, the local labor market had to be able to meet the demand for the extended working-life training included in the new three-year vocational training programs. Second, the Board tried to implement the scheme in different types of municipalities. Third, the Board wanted the participating municipalities to be spread across different regions.

The implementation of a pilot scheme class in a municipality was always accompanied by the withdrawal of a class in a corresponding two-year vocational training program in that same municipality. Thus, the reform did not increase the total number of available places on vocational training programs. Consequently, for a few years, the opportunities to attend a three-year vocational program depended on where students lived and when they were born.

In previous research, this variation in educational opportunities has been used to study the consequences of increased schooling on outcomes such as employment (Hall 2012), early fertility (Grönqvist and Hall 2013), and criminal activity (Åslund et al. 2018). Persson and Oscarsson (2010) compared the levels of political participation between students from vocational and academic programs before and after the reform was fully implemented on a national scale and concluded that differences in political participation persisted after the reform. However, this study was based on a small cross-sectional sample and did not allow for heterogeneous effects.

MODELING HETEROGENEOUS EFFECTS

A simple approach to allow for heterogeneity in the effect of education is to use a so-called split sample design and perform the statistical analysis separately for different socioeconomic groups. In describing this approach, we will, for pedagogical reasons, assume that there are only two types of family background (low and high SES homes), but in the actual empirical analysis, we will provide separate estimates for each quartile of our family SES variable. 4

Ideally, we would like to estimate the following regression model: 5

(1) $$V_{icm}^g = \alpha_0^g + \alpha_1^g D_{icm}^g + \boldsymbol{\lambda}^g \mathbf{X}_{icm}^g + \theta_c^g + \eta_m^g + \varepsilon_{icm}^g,$$

where $V_{icm}^g$ is a dichotomous indicator of voter turnout for individual i, who started upper secondary school in year c and resided in municipality m. $D_{icm}^g$ is a dummy taking on the value 1 for individuals who completed a three-year training program, $\mathbf{X}_{icm}^g$ is a vector of individual-level covariates, and $\theta_c^g$ and $\eta_m^g$ are cohort and municipality fixed effects, respectively. The superscript g (g = l, h) indicates that the effect of a third year of upper secondary education is evaluated separately for low (l) and high (h) SES groups.

If $\mathbf{X}_{icm}^g$ includes all relevant factors that may influence an individual’s educational choices as well as his or her voting behavior, estimating Model 1 using ordinary least squares (OLS) would lead to an unbiased estimate of the causal effect of completing an extra year of upper secondary schooling. However, as frequently argued (e.g., Kam and Palmer 2008), this is not likely to be the case because many of these factors are difficult or impossible to observe and measure correctly.

To circumvent this problem, we follow Hall (2012) and use the arguably exogenous variation in the length of training programs introduced by the pilot scheme that was designed to evaluate the proposed reform. Depending on when students were born and where they resided when they completed compulsory schooling, the students faced different opportunities. Some could choose from plenty of three-year vocational training programs, whereas others could only choose from the shorter two-year ones.

As a first step we estimate the following reduced form effect:

(2) $$V_{icm}^g = \beta_0^g + \beta_1^g R_{cm} + \boldsymbol{\zeta}^g \mathbf{X}_{icm}^g + \theta_c^g + \eta_m^g + \xi_{icm}^g,$$

where $R_{cm}$, as explained in more detail in the data section, is a continuous measure of reform intensity indicating the share of all vocational programs in a municipality that were of the three-year type by the time an individual applied to upper secondary school. Consequently, $\beta_1^g$ is an estimate of the difference in turnout propensity between students whose only option, had they wanted to pursue vocational studies, was a two-year program ($R_{cm} = 0$) and those whose only option was a three-year program ($R_{cm} = 1$). Because the reduced form equation includes both municipality and cohort fixed effects, it can be interpreted as a generalized difference-in-differences model in which the effect of interest is identified by comparing the before-and-after difference in voter turnout between municipalities that were differentially affected by the reform. On a more substantive note, for the reform to contribute to a narrowing of the voting gap between individuals of different social backgrounds, the reduced form estimate in equation (2) must be larger among students from low SES homes (i.e., $\beta_1^l > \beta_1^h$).
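To make the estimating equation concrete, the reduced form in equation (2) can be run as a linear probability model with cohort and municipality fixed effects entered as dummy variables. The sketch below uses simulated data; all numbers, including the hypothetical true reduced-form effect of 0.05 and the way reform intensity spreads across municipalities, are invented for illustration. With register data, turnout, reform intensity, and the identifiers would come from the actual records.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
muni = rng.integers(0, 20, n)    # municipality of residence m
cohort = rng.integers(0, 5, n)   # year of starting upper secondary school c

# Hypothetical reform intensity R_cm: share of three-year vocational slots,
# rolled out unevenly across municipalities in post-baseline cohorts.
R = np.clip(0.05 * cohort + 0.02 * muni * (cohort > 0), 0, 1)

# Simulate a binary turnout indicator with a true reduced-form effect of 0.05.
vote = (rng.random(n) < 0.7 + 0.05 * R).astype(float)

def dummies(x):
    """One-hot encode, dropping the first level (absorbed by the intercept)."""
    levels = np.unique(x)
    return (x[:, None] == levels[1:]).astype(float)

# Design matrix: intercept, R_cm, cohort fixed effects, municipality fixed effects.
X = np.column_stack([np.ones(n), R, dummies(cohort), dummies(muni)])
beta = np.linalg.lstsq(X, vote, rcond=None)[0]
print(f"reduced-form estimate of beta_1: {beta[1]:.3f}")
```

With municipality and cohort dummies included, the coefficient on R is identified only from the differential rollout, mirroring the generalized difference-in-differences interpretation in the text.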

In the theoretical section, we also suggested that any reform effect that reduces inequality may be driven by a resource or a return effect or both. In order to decompose the overall reform effect into these potential pathways, we can use the reform indicator as an instrument for completing a three-year program and estimate a two-stage least squares (2SLS) model. The first and second stages take the following form:

(3) $$D_{icm}^g = \gamma_0^g + \gamma_1^g R_{cm} + \boldsymbol{\tau}^g \mathbf{X}_{icm}^g + \theta_c^g + \eta_m^g + \phi_{icm}^g,$$
(4) $$V_{icm}^g = \delta_0^g + \delta_1^g \hat{D}_{icm}^g + \boldsymbol{\omega}^g \mathbf{X}_{icm}^g + \theta_c^g + \eta_m^g + \psi_{icm}^g,$$

where $\gamma_1^g$ is the effect of the reform indicator on completing a three-year training program and $\delta_1^g$ is the effect of completing a three-year program on turnout propensity. The resource channel concerns the extent to which the effect of the reform on schooling choices differs across SES groups. Thus, even if the effect of education on turnout is equal across socioeconomic groups ($\delta_1^l = \delta_1^h$), the reform will reduce inequality if $\gamma_1^l > \gamma_1^h$ and increase inequality if $\gamma_1^l < \gamma_1^h$. However, a change in the turnout gap could also reflect a pure return effect if the resource effects are the same across the two groups ($\gamma_1^l = \gamma_1^h$) while the impact of an extra year of schooling is greater among low SES students ($\delta_1^l > \delta_1^h$) or among high SES students ($\delta_1^l < \delta_1^h$).
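Equations (3) and (4) can be estimated by running the first stage, taking fitted values, and substituting them into the second stage. The following is a minimal manual 2SLS sketch for a single SES group on simulated data; the true first-stage effect (0.5), return effect (0.08), and the unobserved confounder are all hypothetical. In practice one would use a dedicated 2SLS routine with appropriately corrected standard errors.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_sls(V, D, R, FE):
    """Manual 2SLS: first stage D ~ R + FE, second stage V ~ D_hat + FE."""
    Z1 = np.column_stack([np.ones(len(R)), R, FE])
    gamma = np.linalg.lstsq(Z1, D, rcond=None)[0]
    D_hat = Z1 @ gamma                              # fitted completion probabilities
    Z2 = np.column_stack([np.ones(len(R)), D_hat, FE])
    delta = np.linalg.lstsq(Z2, V, rcond=None)[0]
    return gamma[1], delta[1]                       # gamma_1 and delta_1

n = 50_000
muni = rng.integers(0, 10, n)
FE = (muni[:, None] == np.arange(1, 10)).astype(float)  # municipality dummies
R = rng.random(n)                                       # reform intensity

# Hypothetical structural model: unobserved "ability" raises both completion
# and turnout (confounding), but is independent of the reform instrument R.
ability = rng.normal(size=n)
D = (rng.random(n) < 0.2 + 0.5 * R + 0.1 * (ability > 0)).astype(float)
V = (rng.random(n) < 0.6 + 0.08 * D + 0.1 * (ability > 0)).astype(float)

g_hat, d_hat = two_sls(V, D, R, FE)
print(f"first-stage gamma_1: {g_hat:.3f}, IV return delta_1: {d_hat:.3f}")
```

Because the confounder is independent of R, the IV estimate recovers the structural return effect even though a naive OLS regression of V on D would be biased upward; running this separately by SES group yields the $\gamma_1^g$ and $\delta_1^g$ needed for the resource/return decomposition.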

Our combined difference-in-differences and instrumental variable (IV) framework rests on a number of identifying assumptions. The most important among these concerns the (conditional) exogeneity of the reform; that is, $R_{cm}$ should be uncorrelated with other factors influencing the outcome, conditional on the covariates included in the model. Given that our model includes a full set of cohort and municipality fixed effects ($\theta_c^g$ and $\eta_m^g$), our main concern is the exogeneity of the reform with respect to time-varying variables not included among the covariates. That is, our key identifying assumption is that of parallel trends: in the absence of the reform, the outcomes of interest would have followed the same time trends among those exposed to the reform as among those not exposed.

Unfortunately, the common trend assumption is not directly testable, but we have conducted a number of more indirect tests to determine the tenability of this assumption (see Section A.3.3 in the Appendix). In sum, these analyses show that the time trends in important political and socioeconomic factors such as voter turnout, partisan support, educational attainment, employment, and immigrant share of the population are strikingly similar in low and high reform-intensity municipalities. Moreover, we find no evidence that reform intensity is related to any important predetermined student characteristics, such as compulsory school GPA or parental SES.

Whereas the common trend assumption is sufficient to obtain an unbiased estimate of the reduced form effect, the IV interpretation [equations (3) and (4)] additionally requires that the intensity of the reform had no direct effect on voter turnout, but influenced turnout only indirectly by affecting the likelihood of completing a three-year training program. Although this exclusion restriction cannot be tested, we find it plausible because it is difficult to see any reason why reform intensity should be directly related to voter turnout.

Finally, despite the fact that our key dependent variable is binary, we rely on linear probability models to obtain our estimates, for two main reasons. First, a difference-in-differences approach of the type used here loses much of its attractiveness when applied to nonlinear models (Lechner 2011). Stated simply, the root of the problem is that the cohort and municipality effects (θ and η) in equations (2)–(4) will not partial out if the model is estimated by logit or probit. Second, the IV approach requires much more stringent assumptions when applied to nonlinear models, particularly when, as here, the endogenous regressor is also binary (e.g., Freedman and Sekhon 2010). However, we provide logit results in the Appendix as a robustness check.

DATA FROM POPULATION REGISTERS

We use data from various administrative registers maintained at Statistics Sweden to construct our sample and to acquire information on several socioeconomic and demographic variables. Our original sample consists of all individuals born in Sweden between 1970 and 1974. Because Swedish students normally enroll in upper secondary education at age 16, the cohorts born between 1971 and 1974 were subject to the pilot reform to varying extent, whereas those born in 1970 constitute a pure control cohort. For most analyses, we will restrict our attention to individuals who completed compulsory schooling at age 16 and who thereafter continued directly to upper secondary school. 6 We then use Statistics Sweden’s Multi-Generation Registry to link these individuals with their parents. In the final stage, the children and their parents are matched with various administrative registers containing information regarding educational attainment, income, occupational status, and other demographic and socioeconomic characteristics. 7

To construct a pilot scheme reform indicator for each individual in our sample, we follow Hall (2012) and use information on students’ municipality of residence when attending the last year of compulsory school together with information on the availability of different types of vocational training programs across municipalities. More precisely, the school reform indicator measures the number of three-year vocational training programs as a proportion of all vocational programs. 8
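Constructing the indicator amounts to a simple ratio within each municipality-by-cohort cell. A minimal sketch, in which the municipality names and program counts are invented for illustration:

```python
# Hypothetical counts of vocational training programs by (municipality, year)
# in which the student attended the last year of compulsory school.
programs = {
    ("Uppsala", 1987): {"two_year": 8, "three_year": 2},
    ("Uppsala", 1990): {"two_year": 4, "three_year": 6},
    ("Lund", 1987):    {"two_year": 10, "three_year": 0},
}

def reform_intensity(muni, year):
    """R_cm: three-year vocational programs as a share of all vocational programs."""
    p = programs[(muni, year)]
    total = p["two_year"] + p["three_year"]
    return p["three_year"] / total if total else 0.0

print(reform_intensity("Uppsala", 1990))  # prints 0.6
```

Each individual is then assigned the value of $R_{cm}$ matching their municipality of residence and the year they applied to upper secondary school.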

Figure 1 shows the distribution of reform intensity at the municipality level during the pilot years. The number of municipalities offering one or several three-year vocational programs grew steadily from just a handful in 1987 to three quarters of all municipalities in 1990. We can also see that less than half of the vocational programs were of the three-year type in most municipalities, but in later years there were in fact a small number of municipalities that only offered three-year programs.

FIGURE 1. Reform Intensity at the Municipality Level

One important question is whether there were any systematic differences between the municipalities that chose to participate in the pilot to a different extent. We present a brief analysis of this issue in the Appendix. The main finding is that high and low reform municipalities appear to have been rather similar. Most importantly, the time trends of various political and socioeconomic characteristics look very similar in municipalities with high and low reform intensity (see Figures A.8 through A.13), which speaks in favor of our identification strategy.

Family socioeconomic status constitutes another key variable in our analysis. Broadly defined, socioeconomic status (SES) relates to “one’s access to financial, social, cultural, and human capital resources” (NCES 2012, 4). To capture these various dimensions of SES, researchers have traditionally relied on composite measures including income, educational attainment, and occupational status. 9

Three criteria guided our choice of SES indicators. First, the factors should be well established in the literature on SES. Second, there should be high-quality indicators of these factors in our register data. Third, the factors should be known to be related to inequalities in turnout. Based on these considerations, and inspired by the PISA index of economic, social, and cultural status, developed by the OECD (2010), our measure of family SES is therefore constructed as a simple additive index of three items: (i) highest parental education, (ii) highest parental occupational status, and (iii) average parental earnings (see the Appendix for a detailed description of these items). To adjust for differences in scales between the variables, all subitems were initially standardized to have a mean of 0 and a standard deviation of 1. Consequently, our measure of family SES takes a value of 0 for an individual from a family with an average score on each of the three items and a value of 1 for an individual from a family that is situated on average 1 standard deviation above the mean on all items.
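As described, the index averages three standardized items, so a family exactly 1 standard deviation above the mean on every item scores 1. A short sketch with simulated register items (the distributions and variable names are hypothetical stand-ins for the actual register measures):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000

# Hypothetical register items per family
education = rng.integers(9, 20, n).astype(float)   # highest parental years of schooling
occupation = rng.integers(1, 6, n).astype(float)   # highest parental occupational status
earnings = rng.lognormal(10, 0.5, n)               # average parental earnings

def zscore(x):
    """Standardize to mean 0, standard deviation 1."""
    return (x - x.mean()) / x.std()

# Additive SES index: mean of the three standardized items, so the index is 0
# for an average family and 1 for a family one SD above the mean on all items.
ses = (zscore(education) + zscore(occupation) + zscore(earnings)) / 3
print(f"index mean: {ses.mean():.2f}")
```

Separate reduced-form and 2SLS estimates can then be produced for each quartile of this index, as in the split-sample design described above.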

Turning to the dependent variable, we collected population data on voter turnout in the 2010 general election by scanning and digitizing the information in the publicly available election rolls. 10 The resulting dataset is unique in both scope and quality. Regarding the outcome of the 2010 election, the incumbent center-right government was reelected by a rather close margin, and overall turnout was 84.6 percent, which is a fairly typical figure for national elections in Sweden.

Before turning to the empirical analysis, it is useful to briefly discuss how our measure of reform intensity relates to educational choices and to the selection of students into different types of upper secondary programs. Our identification strategy rests on the simple idea that students were more likely to enroll in three-year programs in municipalities with a larger share of such programs. In the Appendix we present results showing that this was indeed the case (see Figure A.3 and Table A.3). More precisely, this analysis suggests three important conclusions. First, the share of individuals who did not enroll in upper secondary school at age 16 was unrelated to reform intensity. Our decision to exclude this group from the analysis should therefore not bias the estimates. Second, and as expected, the main effect of increasing reform intensity was to move students from two-year to three-year vocational programs. Third, there was, however, a slight tendency for students to shift from academic to three-year vocational programs for high values of reform intensity. This is also the reason why we include students from both vocational and academic programs in the main analysis. Studying all upper secondary school students mitigates the risk that changes in the composition of vocational students affect our results (Åslund et al. 2018). This being said, there are no obvious signs in the data that indicate that increased reform intensity actually altered the student composition of different programs. On the contrary, supplementary analyses show that the socioeconomic composition of students in academic and vocational programs was the same regardless of reform intensity (see Figure A.4 and Table A.2 in the Appendix).

DID THE REFORM AFFECT TURNOUT?

This section examines how the lengthening of vocational upper secondary education from two to three years affected voter turnout in the 2010 election. Figure 2 displays voter turnout by program length and family SES quartile for those attending vocational training programs. Four things can be noted. First, Sweden is a high turnout context. In the 2010 election, 84.6% of the electorate made use of their right to vote, and the average turnout rate in our main sample is as high as 90.0%. Second, despite the high average turnout rates, there are substantial differences in electoral participation across different family SES groups. Third, for all quartile groups, turnout is higher among those completing three-year vocational programs than among those completing two-year programs. Fourth, the voting gap between the two educational groups is smaller for individuals from more advantaged backgrounds. These results indicate that the lengthening of the vocational training programs may have helped increase and equalize voter turnout. However, an obvious problem with this analysis is endogeneity bias: the individuals who chose three-year vocational training programs are likely to have differed from those who chose two-year ones.

FIGURE 2. Turnout by Family Background and Program Length

To mitigate this issue, we use the exogenous variation induced by the pilot scheme. Table 1 reports how the availability of three-year vocational training programs in an individual’s home municipality at age 16 affected the probability of voting in the 2010 election. All results are presented as percentage points. The first panel of the table displays the dichotomous indicator for voter turnout regressed on the measure of reform intensity—that is, the share of three-year vocational training programs in a municipality—and a number of controls including gender (1 if female), immigrant background (1 if the individual or at least one parent is born abroad), family SES, year of birth, both parents’ years of birth, and municipality of residence. These reduced form coefficients give us the total effect of the reform for different groups. The first column presents the effect for the overall sample. As can be seen, we find no evidence that the reform raised expected turnout in the student group as a whole. Although the effect of the reform intensity variable is positive, it is small in magnitude and not statistically significant.

TABLE 1. The Effects of Reform Intensity on Schooling and Turnout (All Programs)

Notes: All models include a full set of fixed effects for birth year, home municipality, and father’s and mother’s birth years. Standard errors, shown in parentheses, allow for clustering at the municipality level. ***/**/* indicates significance at the 1/5/10% level. Results are presented as percentage points.

However, as we have argued, this type of average causal effect may conceal important heterogeneities in the reform effect across different groups. To examine whether the reform effect is contingent on social origin, we utilize a split-sample approach and estimate separate models for each quartile of the family background variable. The results are presented in columns 2 to 5. 11

The effect of the reform did indeed differ between groups. As can be seen, we find that the reform had an effect on turnout for children from low SES backgrounds. For individuals growing up in homes belonging to the lowest quartile of the family SES distribution, the reform is associated with a rather large and statistically significant increase in voter turnout. Increasing the share of three-year vocational programs from 0 to 1 is estimated to increase expected voter turnout by almost 3.1 percentage points in this group. In contrast, we find no statistically significant effect for any of the other quartiles. 12

These results indicate that the reform contributed to the equalization of voter turnout by raising turnout among individuals from the most socioeconomically disadvantaged homes. Next, we ask: what accounts for this reduction in the voting gap? Is it mainly due to a resource or a return effect? To answer these questions, the remaining two panels of Table 1 report the results from 2SLS models where reform intensity is used as an instrument for having completed at least three years of post-primary education by age 20.

The first-stage results presented in Panel B provide direct evidence on the resource effect among the different socioeconomic groups. The results indicate that the resource effect is more pronounced at the bottom of the family distribution. For children from the lowest quartile of the family distribution, the likelihood of completing three years of post-primary education is estimated to increase by more than 26 percentage points when all vocational programs in a municipality are lengthened from two to three years. The corresponding figure for children in the highest quartile is about seven percentage points—just slightly more than one-fourth of the effect found for the most disadvantaged group. The resource effect decreases as we move up the social ladder because children of higher social background are less likely to pursue vocational studies and were therefore less likely to be affected by the reform.

The return effects are portrayed in the second-stage results presented in Panel C of Table 1. The coefficients give us the marginal change in the propensity to vote associated with completing at least three years of post-primary education by age 20. It is only among children from the most disadvantaged family background that we find a statistically significant effect of completing three years of post-primary education on voter turnout. In this group, completing a three-year program is estimated to increase the probability of voting by almost 12 percentage points. For the other three quartile groups, the IV estimates are considerably smaller in magnitude and not statistically significant. As is often the case with IV models, precision is an issue here. Yet, if we compare the difference in coefficients across groups, we find that both the differences between Q2 and Q1 (p = 0.035) and between Q3 and Q1 (p = 0.034) are statistically significant at the 0.05 level, whereas the difference between Q4 and Q1 (p = 0.382), despite being larger in magnitude, does not reach conventional levels of statistical significance. 13
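With a single instrument, the just-identified IV estimate equals the reduced-form effect divided by the first-stage effect (the Wald ratio); for Q1, roughly 3.1 pp divided by a first stage of about 0.26 gives the roughly 12 pp return effect reported in Panel C. The mechanics can be sketched on simulated data. This is our toy model, not the paper's registers; the "true" return effect of 0.12 and all variable names are assumptions made for illustration:

```python
import numpy as np

# Toy model: z is reform intensity (instrument), u an unobserved
# confounder, d completion of three years of schooling, y turnout.
# The true return effect of d on y is set to 0.12.
rng = np.random.default_rng(1)
n = 100_000
z = rng.uniform(0, 1, n)
u = rng.normal(size=n)
d = 1.0 * z + 0.5 * u + rng.normal(size=n)
y = 0.12 * d + 0.4 * u + rng.normal(size=n)

def slope(x, v):
    """OLS slope of v on x (with intercept)."""
    return np.polyfit(x, v, 1)[0]

first_stage = slope(z, d)        # resource effect: reform -> schooling
reduced_form = slope(z, y)       # total effect: reform -> turnout
iv = reduced_form / first_stage  # return effect via the Wald ratio
naive_ols = slope(d, y)          # biased upward by the confounder u
```

The Wald ratio recovers the true 0.12, while the naive OLS of turnout on schooling is inflated by the confounder; this is the endogeneity problem the pilot-scheme variation is meant to solve.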

Two important lessons can be drawn from these results. First, the educational reform under study helped decrease the socioeconomic voting gap by raising turnout among individuals of low social background. Second, the pattern of estimates presented in Table 1 implies that the return effect was more important than the resource effect in explaining the reduction in the voting gap. The differences in the return to education across the quartile groups (i.e., the IV estimates) are considerably more pronounced than the differences in the take-up of additional education (i.e., the first-stage estimates). 14 Thus, the results suggest that education and family background are substitutes in the production of political participation such that education, at least to some extent, can help compensate for various types of civic disadvantages associated with growing up in low SES homes (e.g., Campbell 2008).

However, before concluding that improved educational opportunities can help reduce political inequality by facilitating the political activation of those at the bottom of the SES hierarchy, we need to further examine the robustness of these findings.

DID THE REFORM IMPACT THE CORRECT GROUPS?

The previous analysis included all individuals who enrolled in upper secondary school at age 16. In doing so, the analysis safeguarded against the risk that the reform effect is driven by a change in the composition of students in vocational programs. However, we previously concluded that the reform had little impact on the decision to enroll in an academic program (Figures A.3 and A.4 in the Appendix). Thus, if our model is correctly specified, we should expect any reform effect to be concentrated among vocational students. Table 2 therefore presents separate results for students who enrolled in vocational (Panel A) and academic (Panel B) programs.

TABLE 2. Reduced Form Effect by Program Type

Notes: All models include a full set of fixed effects for birth year, home municipality, and father’s and mother’s birth years. Standard errors, shown in parentheses, allow for clustering at the municipality level. ***/**/* indicates significance at the 1/5/10% level. Results are presented as percentage points.

Our robustness analysis confirms that the reform effect is entirely driven by students from low SES homes attending vocational programs. When we restrict the sample to those attending vocational programs, the reform effect increases from 3.1 to 4.5 percentage points for this group. 15 The corresponding estimate for individuals attending academic programs is 0.06 percentage points.

The fact that we find no reform effect among those attending academic programs can also be interpreted as support for the common trend assumption underlying our identification strategy. If our findings were due to unobserved trends at the municipality level, we would expect the reform effect to be present for students at vocational and academic programs alike. An alternative way to test the common trend assumption is to utilize the fact that some individuals were either too old or too young to be affected by the reform.

Figure 3 presents results from a large set of “placebo regressions” in which we artificially change the date of the pilot scheme by ±1–15 years. The analysis focuses on individuals in the lowest quartile of the family distribution who graduated from upper secondary school between 1973 and 2008. 16 The upper graph shows the first-stage effect (the reform effect on completing a three-year program), whereas the lower graph illustrates the reduced form effect (the effect on turnout) from pre- or post-dating the reform by t years. For instance, if we pre-date the reform by four years (t = −4), we can examine how the reform intensity observed for the cohorts 1970–1974 impacted those born between 1966 and 1970, who were all too old to be affected by the reform. If we were to find an effect of this “placebo reform” in this age span, it would thus suggest the presence of pre-reform trends in the data.
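The re-dating logic behind these placebo regressions can be sketched as follows. The dictionary layout and municipality label are illustrative assumptions; the actual analysis uses register data:

```python
def shifted_intensity(intensity, municipalities, cohorts, t):
    """Placebo treatment obtained by re-dating the pilot by t years:
    cohort c is assigned the reform intensity observed for cohort c - t
    in its home municipality, so t = -4 pre-dates the reform by four
    years. The (municipality, cohort) dict layout is illustrative."""
    return {(m, c): intensity.get((m, c - t))
            for m in municipalities for c in cohorts}

# Toy example: one municipality whose pilot intensity ramps up over the
# 1970-1974 cohorts; pre-dating by four years assigns those intensities
# to the fully untreated 1966-1970 cohorts.
intensity = {("A", 1970): 0.0, ("A", 1971): 0.2, ("A", 1972): 0.4,
             ("A", 1973): 0.6, ("A", 1974): 0.8}
placebo = shifted_intensity(intensity, ["A"], range(1966, 1971), t=-4)
```

Regressing the first-stage and reduced-form outcomes of the placebo cohorts on these shifted intensities, as in Figure 3, should yield null effects if the common trend assumption holds.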

FIGURE 3. Placebo Regressions for Q1 of the Family Distribution

The first treated cohort is composed of individuals born in 1971. We must therefore pre-date the reform by at least four years to obtain a pure pre-treatment placebo in which all individuals are untreated. It is thus comforting to note that the point estimates to the left of the first dashed line in Figure 3 are centered around 0 and are generally statistically insignificant. The positive coefficients for the years −1 and −2 can be explained by the fact that reform intensity is positively correlated over time such that municipalities with a high share of three-year programs in 1990 also had a relatively high share in 1989 and so on.

Likewise, in order to get a pure placebo sample when post-dating the reform, we need to postpone the reform date by at least eight years (the second dashed line), because the first cohort for which all vocational programs were of the three-year type consisted of those born in 1978. 17 Again it is encouraging that the point estimates in the pure post-treatment period are statistically insignificant and hover around 0. The hump-shaped relationship found for the period +1 to +7 may at first sight appear a bit surprising, but it is fully explainable given how the new school system was implemented. After the pilot ended in 1990, municipalities had until 1994 to replace all two-year programs with three-year ones. In the period between 1991 and 1994, the share of three-year programs therefore had to increase faster in the municipalities that did not participate in the pilot (or that participated at a lower rate). 18 These results thus provide strong support for the common trend assumption underlying our identification strategy.

CONTINUOUS OR DISCRETE: WHAT DIFFERENCE DOES IT MAKE?

In the previous analysis we estimated separate models for different quartiles of the family distribution to test for heterogeneous effects. Admittedly, one risk with this procedure is that the results may be affected by the choice of grouping intervals. One alternative, but more complicated, approach is to use a multiplicative interaction model to study how the reform effect varies across the entire distribution of family background. However, in a recent contribution, Hainmueller, Mummolo, and Xu (forthcoming) strongly argue against the common practice of assuming a linear relationship for the conditional effect of interest. This warning is particularly well taken in the present case, given that the previous analysis suggests that the reform effect varies nonlinearly over social background.

In an attempt to accommodate these two demands, that is, respecting the continuous nature of the family SES variable and allowing for nonlinear marginal effects, we estimate a flexible regression model in which we interact a cubic spline function of family SES with reform intensity as well as with all covariates and fixed effects included in the model. A cubic spline is a piecewise cubic polynomial that is commonly used to model various types of nonlinear relationships (Beck, Katz, and Tucker 1998). The flexibility of the spline model is determined by the number of knots used for the piecewise function. By increasing the number of knots, we can estimate increasingly flexible regression models, but at the risk of overfitting. In Figure 4 we present the results from a spline regression with five knots. 19
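A spline basis of this kind can be sketched as follows. We use the truncated-power parameterization, which is one common choice; the paper does not specify which basis is used, and the knot placement below is an assumption for illustration:

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power basis for a cubic spline: x, x^2, x^3, plus
    (x - k)^3 for x above each knot k (zero below). One common
    parameterization; other bases (e.g. restricted/natural splines)
    span different function spaces at the boundaries."""
    cols = [x, x ** 2, x ** 3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

# Five knots at quantiles of the family SES variable (placement assumed)
ses = np.linspace(-2.0, 2.0, 200)
knots = np.quantile(ses, [0.05, 0.275, 0.5, 0.725, 0.95])
B = cubic_spline_basis(ses, knots)  # shape (200, 3 + 5)
# Interacting each basis column with reform intensity (and with the
# other covariates and fixed effects) lets the estimated reform effect
# vary smoothly and nonlinearly over family SES.
```

Each additional knot adds one basis column, which is why more knots yield a more flexible, but more overfitting-prone, fit.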

FIGURE 4. Reduced Form and IV Estimates Using Cubic Splines With Five Knots

The upper two graphs display the reduced form and the IV estimates for the main sample, meaning that we include students in both vocational and academic programs. 20 The three vertical dotted lines are placed at the first, second, and third quartiles of the family background variable. The results from the spline regressions square well with those presented earlier. Both the reduced form and the IV estimates are largest, and rather constant, for the lowest quartile of the family distribution. The reduced form estimate is statistically significant at the 0.05 level when family SES is between −1.2 and −0.66, which corresponds to the interval between the 6th and 23rd percentile of this variable. For the IV estimates the corresponding figures are −1.2 (6th percentile) and −0.62 (25th percentile). In both cases, the magnitudes of the coefficient estimates are also very similar to those obtained with the split-sample design, that is, the reduced form estimate is about three percentage points and the IV estimate is just over 10 percentage points. The only exception is that the IV estimate is positive and rather large in the top quartile of the family distribution. Yet, as the large confidence intervals indicate, these estimates are very imprecise (because of the weak first stage) and should be interpreted with caution.

The bottom two graphs in Figure 4 display the results when we restrict the analysis to vocational students. Again the results are well in line with those previously presented. The reduced form estimate is now statistically significant at the 0.05 level over the interval −1.35 (4th percentile) to −0.59 (26th percentile) of family SES, and the IV estimate over the interval −1.35 (4th percentile) to −0.60 (26th percentile). Consequently, it does not seem to matter for our main results whether we treat family background as a discrete or continuous variable.

However, the issue of nonlinearity may also be raised in relation to our reform intensity instrument. All of the previous IV estimates are based on a linear first-stage equation. According to the standard textbook model of 2SLS with homogeneous treatment effects, this is fairly unproblematic because any deviation from linearity in the first stage will only affect the efficiency of the resulting estimator (Dieterle and Snell 2016). Unfortunately, things get more complicated in the presence of heterogeneous treatment effects because then the local average treatment effects (the so-called LATEs) could vary across the range of the instrument. To be more concrete, we can imagine that the individuals who are induced to attend a three-year program rather than a two-year one when reform intensity increases from 0.1 to 0.2 differ from those who are induced to attend a three-year program when the reform intensity increases from 0.8 to 0.9. If the effect of education on turnout depends on these differences, the marginal treatment effect will vary over the range of the instrument. In the presence of such treatment heterogeneity, the standard linear 2SLS model will identify a weighted average of all of these marginal effects although the informativeness of this average is often unclear (Angrist and Pischke 2008; Dieterle and Snell 2016).

In using a continuous instrument, we have thus implicitly imposed the assumptions of both linearity and homogeneity (Angrist and Pischke 2008, 150). 21 An important and remaining question is whether these are reasonable assumptions to make. The best way to check for nonlinearity is often by means of graphical inspection, and to judge from the simple graphical analyses that we have performed for both the first-stage and the reduced form relationships, they appear to be approximately linear within each quartile group (see Figures A.21 and A.22 in the Appendix).

Assessing the homogeneity assumption is somewhat less straightforward, but Dieterle and Snell (2016) recently proposed a simple diagnostic tool that can be used to detect unmodeled effect heterogeneity when using a single continuous instrument. The approach amounts to adding the square of the instrument to the first stage and conducting a standard overidentifying test. If we can reject the null hypothesis in this test, it means that the two instruments lead to statistically different parameter estimates of the effect of interest. Because the two models under comparison utilize the same underlying instrument, any difference between the linear and the quadratic first stage must stem from the different weights they assign to observations in different parts of the instrument’s range. Thus, obtaining different IV estimates with a linear and quadratic first stage would indicate heterogeneity not accounted for in the model specification.
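The logic of the test can be sketched on simulated data. This is our simulation, not the authors' specification (which also includes fixed effects and covariates), and all parameter values are assumptions; with a homogeneous effect, the linear and quadratic first stages should agree and the Sargan statistic should fall below the chi-squared(1) 5% critical value of about 3.84:

```python
import numpy as np

def tsls(y, d, Z):
    """2SLS for a single endogenous regressor d with instrument matrix Z;
    a constant is added internally. Returns the estimate and the
    structural residuals (computed with the actual d, as is standard)."""
    n = len(y)
    Zc = np.column_stack([np.ones(n), Z])
    d_hat = Zc @ np.linalg.lstsq(Zc, d, rcond=None)[0]
    beta = np.linalg.lstsq(np.column_stack([np.ones(n), d_hat]), y,
                           rcond=None)[0]
    resid = y - np.column_stack([np.ones(n), d]) @ beta
    return beta[1], resid

def sargan(resid, Z):
    """Sargan overidentification statistic: n * R^2 from regressing the
    2SLS residuals on the instruments; chi-squared(1) here, since two
    instruments identify one endogenous regressor."""
    n = len(resid)
    Zc = np.column_stack([np.ones(n), Z])
    fitted = Zc @ np.linalg.lstsq(Zc, resid, rcond=None)[0]
    r2 = 1.0 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)
    return n * r2

# Simulated data with a homogeneous treatment effect of 0.12
rng = np.random.default_rng(2)
n = 50_000
z = rng.uniform(0, 1, n)
u = rng.normal(size=n)
d = 0.8 * z + 0.5 * u + rng.normal(size=n)
y = 0.12 * d + 0.4 * u + rng.normal(size=n)
beta_lin, _ = tsls(y, d, z[:, None])
beta_quad, resid = tsls(y, d, np.column_stack([z, z ** 2]))
J = sargan(resid, np.column_stack([z, z ** 2]))
```

If the marginal treatment effect instead varied over the range of the instrument, the two first stages would reweight compliers differently, the estimates would diverge, and J would tend to exceed the critical value.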

The first row of Table 3 (labeled Quadratic) presents 2SLS results using a quadratic first stage. As can be seen, the IV estimates for these models are very similar to those previously obtained with a linear first stage. For instance, the IV estimate for Q1 is now 11.58 compared to the previous estimate of 11.62. There are thus no signs of important unmodeled heterogeneity in our model, which is further corroborated by the fact that we fail to reject the overidentifying restriction for all quartile groups. 22

TABLE 3. IV Estimates With Alternative Functional Forms for the First Stage

Notes: All models include a full set of fixed effects for birth year, home municipality, and father’s and mother’s birth years. Standard errors, shown in parentheses, allow for clustering at the municipality level. ***/**/* indicates significance at the 1/5/10% level. Results are presented as percentage points.

Although Dieterle and Snell (2016) advocate the use of a quadratic first stage for their test, the logic can be extended to more general functional forms. We therefore also present the results from two alternative specifications of the first stage in Table 3. In the second panel, we use a five-knot cubic spline function of reform intensity as the instrument, and in the third a full set of dummies for each unique value of reform intensity (rounded to whole percentage points). Despite the fact that at least the latter specification may push the data to its limits, the IV estimates from these models (labeled Splines and Unique values) are very similar to those obtained with the linear and quadratic specifications. 23 Hence, there is nothing in these analyses that would lead us to question the previously employed models. 24

We have also performed a number of additional sensitivity checks, which, because of space restrictions, we discuss in the Appendix. For instance, we show that we obtain similar results if we include individuals who did not enroll in upper secondary school at age 16 in the analysis (Table A.6) or if we exclude municipalities without vocational programs from the analysis (Table A.7). Moreover, we obtain almost identical marginal effects if we use a logit model to estimate the first stage and the reduced form relationships (Table A.8). Finally, we have re-estimated models for each of the three subitems making up our family SES measure to substantiate that our results are not driven by the way we have operationalized family background. The results are well in line with those obtained in the main analysis (Section A.3.8).

MECHANISMS AND IMPLICATIONS

Given that we have found extended education to increase turnout among low SES students, a natural follow-up question is what mechanisms explain this effect. Unfortunately, the data required for such an analysis are largely lacking in the administrative registers at our disposal. However, in the Appendix we present results from a simple mediation analysis, which indicates that potential mediators such as income, occupation, family status, and political activity in surrounding social networks can only account for one-fourth of the reform effect observed in the data (Table A.12).

Consequently, the lion’s share of the reform effect seems to be mediated via other pathways. In the Appendix we make some attempts to assess a few of these potential mechanisms. One possibility that we discuss is that the higher reform effect among low SES students is due to a ceiling effect in voter turnout (Neundorf, Niemi, and Smets 2016). The results from our logit model (Table A.8) do, however, speak against this interpretation, because we find similar results when interpreting the logit coefficients in terms of odds ratios. This suggests that the lower return to education in higher SES groups is not primarily due to a ceiling effect because odds ratios, unlike probabilities, are not affected by the mean of the dependent variable (Mare 1980). Another possibility that we investigate is whether the effect can be explained by the reform making students more likely to live with their parents when voting for the first time, but we find no support for this either.

Thus, we need to look elsewhere for factors mediating the reform effect. The most likely possibility is that the effect is driven by various factors more directly related to the nature and content of education, such as the skills and norms that the individuals learn in school. Unfortunately our data do not permit a direct test of the degree to which the reform effect on turnout is mediated by such factors.

This finally leaves us with two other important unanswered questions. The first concerns whether our findings can be thought to travel beyond the particular case of voting in Sweden, and the second concerns whether reducing the socioeconomic voting gap is likely to have any important real-world consequences. We briefly address both of these questions in the Appendix using complementary data sources. These analyses lead us to answer the two above questions in the affirmative.

First, using data from the European Social Survey (ESS), we find that the basic relationship between education, family background, and voting that we observe in Sweden appears to be valid for a large number of countries and participatory acts (see Section A.3.13 in the Appendix). Admittedly, this far from proves the generalizability of our findings, but at least it serves to indicate that our study case is not a completely unique one.

Second, although we cannot study the electoral consequences of the reform directly—because we lack individual-level data on party choice 25 —our supplementary analyses provide some indirect evidence supporting the view that reducing the differences in turnout between individuals of different social background may actually affect representational inequality. The ESS data show clear differences in political attitudes—with respect to economic redistribution and immigration—between individuals of different social origin (Figure A.27). These attitudinal differences are also reflected in the stated party preferences of the different groups. In 2010, it was only among citizens in the lowest quartile of the family SES variable that the support for the left-wing parties was higher than the support for the right-wing parties (Figure A.28).

A similar pattern is also visible when examining the relationship between voter turnout of various socioeconomic groups and the overall difference in left–right support at the electoral district level. In doing so for the 2010 election, we find that a one percentage point higher turnout among individuals in the lowest quartile of the family SES distribution was associated with a 0.2 to 0.3 percentage point larger vote share difference between the left- and right-wing parties, holding voter turnout in other quartile groups constant (Table A.13). Admittedly, these latter results are purely correlational, but they fit well with the findings of some other recent studies using more credible identification strategies (Bechtel, Hangartner, and Schmid 2016; Finseraas and Vernby 2014). Our results provide support for the view that reforms that contribute to the reduction of the SES voting gap can help foster representational equality by increasing the vote share of leftist parties.

CONCLUSION

According to de Tocqueville ([1835] 2015) the main characteristic of democratic nations was their love for equality. Yet, inequality of political opportunity is still widespread in most developed democracies. Likewise, political participation tends to be highly stratified by socioeconomic status. When asked how to narrow the political opportunity gap, the standard response among political scientists has been to suggest improved educational opportunities. In particular, it has been suggested that increased schooling should help make up for the “considerable inequalities that originate in the family” (Neundorf, Niemi, and Smets 2016, 946). However, in the last decade this belief in the redeeming effects of education has come under increasing debate (Persson 2015).

The primary weakness of the existing literature is that it tends to treat education as a standardized commodity that affects all groups equally. Admittedly, there are signs that this is about to change. For instance, recently published studies by Campbell and Niemi (2016) on political knowledge, Neundorf, Niemi, and Smets (2016) on political interest, and Lindgren, Oskarsson, and Dawes (2017) on political candidacy all highlight the fact that education tends to be more important for politically marginalized groups. However, as far as we know, this is the first systematic study to investigate how the effect of education on voter turnout varies over family background.

By combining high-quality population data with a credible identification strategy, our study has high internal validity, although the generalizability of our findings to other countries and types of political participation is more difficult to assess. Nonetheless, given that an educational reform can be shown to have an effect on the voting gap in a relatively egalitarian, high-turnout country such as Sweden, it seems reasonable to assume that this could also happen in other countries. At this point, this is merely speculation on our side, and to investigate whether this is actually the case requires similar studies in other countries with different political systems and school systems. Another central avenue for future research is to provide a better understanding of the causal mechanisms at work. Toward this end, it also seems important to examine other potential heterogeneities. Just as the effect of education on turnout can depend on social origin, it can depend on other micro and macro factors.

To end on a more substantive note, the findings of this study are important because they provide support for the widespread, but increasingly contested, view that improved educational opportunities can help reduce political inequality. To be clear, education is not the universal solvent capable of dissolving all forms of existing inequalities. Our results do show, however, that carefully designed educational reforms constitute one option worth considering when discussing what to do about the political opportunity gaps that are currently threatening democratic legitimacy in many countries.

SUPPLEMENTARY MATERIAL

To view supplementary material for this article, please visit https://doi.org/10.1017/S0003055418000746.

1 In the Appendix we present a simple formal model detailing how education reforms can influence political inequality.

2 This section is based on the detailed description of the Swedish upper secondary school system and the school reform in 1991 provided by Hall (2012).

3 Although the reform was decided in 1991, the municipalities had until 1994 to implement the reform. Thus, the 1978 cohort was the first for which all vocational programs were of the three-year type.

4 As a sensitivity check we will, however, also estimate a flexible interaction model that allows the SES measure to be continuous.

5 For a similar empirical approach see Hall (2012).

6 By restricting the analysis to the individuals who began upper secondary school in the “correct” year, we increase the precision in our instrument and mitigate the risk that some individuals deliberately postponed their school start in order to get access to the longer vocational programs.

7 See Section A.2.2 in the Appendix for additional details on these registers and variables.

8 Hall (2012) sets the reform indicator to 0 for municipalities not offering any vocational training programs. However, students living in such municipalities could enroll in upper secondary schools in nearby municipalities. Therefore, for municipalities that lacked vocational training programs during the study period, we use the reform score for the municipality in which most students from the 1970 cohort (the cohort preceding the first treated cohort) attended a vocational training program.

9 The authors of a recent overview on the topic refer to income, education, and occupational status as the “big three” variables of SES measurement (NCES 2012, 13).

10 We provide a detailed description of the procedures we used to scan and digitize the election rolls in Section A.2.2 in the Appendix.

11 All individuals born between 1970 and 1974 are considered when calculating the family SES quartiles. The reason why the quartile groups in Table 1 differ in size is that the probability of enrolling in upper secondary education differs across groups.

12 Based on the results from a fully interacted model, which is mathematically equivalent to the split-sample model, we find that the differences in coefficients between groups are statistically significant at the 0.05 level in three out of six cases. These are Q2 versus Q1 (p = 0.024), Q3 versus Q1 (p = 0.011), and Q4 versus Q1 (p = 0.050).
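The equivalence invoked in footnote 12, between separate regressions per group and a single fully interacted model, can be illustrated with simulated data. This is a minimal sketch; the variables, coefficients, and group labels are illustrative stand-ins, not the study's registers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)          # two SES groups (e.g., Q1 vs. Q2)
x = rng.normal(size=n)                 # stand-in treatment variable
y = 1.0 + 0.5 * x + group * (0.3 + 0.2 * x) + rng.normal(size=n)

def ols(X, y):
    # Ordinary least squares via least-squares solve.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Split-sample approach: one regression per group.
m0, m1 = group == 0, group == 1
b0 = ols(np.column_stack([np.ones(m0.sum()), x[m0]]), y[m0])
b1 = ols(np.column_stack([np.ones(m1.sum()), x[m1]]), y[m1])

# Fully interacted model: intercept, x, group, and group * x.
X = np.column_stack([np.ones(n), x, group, group * x])
b = ols(X, y)

# The interaction coefficients equal the between-group differences
# in the split-sample estimates, term by term.
print(np.allclose(b[2], b1[0] - b0[0]))  # True
print(np.allclose(b[3], b1[1] - b0[1]))  # True
```

Because the interacted model is fully saturated in the group indicator, the equality is algebraic, which is why testing the interaction coefficients amounts to testing the differences between the split-sample estimates.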

13 However, the IV estimate for Q4 is very imprecisely estimated because of a weak first stage.

14 In the Appendix we show this more formally by decomposing the total reform effect into a return and a resource part (see Figure A.24).

15 In Table A.5 in the Appendix we show that the IV estimate of completing a three-year program for Q1 (11.75) when restricting the analysis to students in vocational programs is very similar to the corresponding IV estimate in the full sample.

16 Because information from the application register is not available for cohorts born before 1969, we condition the analysis on graduating (instead of enrolling, as is used in the main analysis) from upper secondary school in the placebo analyses. Placebo graphs for the remaining three quartiles are included in the Appendix.

17 The reason why the first stage is still estimable for cohorts born after 1978 is that the small number of individuals attending three-year programs but dropping out after two years are assigned a two-year degree in the education records.

18 Consequently, we should expect, and indeed find, a pattern of “mirroring” placebo effects that turns increasingly negative as we post-date the reform, up to the period +3 in which we estimate the reform effect on the last cohorts (born 1973–77) for which all students attended an upper secondary school system still in transition. In the period +4 years, the negative placebo effect decreases in size as more cohorts (born 1978 and later) that attended upper secondary school after the reform had been fully implemented are included in the estimations.

19 The knots are placed at the 5th, 22.5th, 50th, 72.5th, and 95th percentiles of the family SES indicator. In Section A.3.9 in the Appendix we present results using alternative numbers of knots.
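The knot placement described in footnote 19 can be sketched as follows. The SES values are illustrative random draws, not the registers' family SES indicator, and the basis uses the standard restricted (natural) cubic spline construction:

```python
import numpy as np

# Illustrative stand-in for the continuous family SES indicator.
rng = np.random.default_rng(0)
ses = rng.normal(size=10_000)

# Knots at the 5th, 22.5th, 50th, 72.5th, and 95th percentiles.
knots = np.percentile(ses, [5, 22.5, 50, 72.5, 95])

def rcs_basis(x, knots):
    """Restricted (natural) cubic spline basis: constrained to be
    linear beyond the boundary knots, giving k - 2 nonlinear terms
    for k knots (plus the linear term entered separately)."""
    def p3(u):
        return np.maximum(u, 0.0) ** 3
    tk, tkm1 = knots[-1], knots[-2]
    cols = []
    for tj in knots[:-2]:
        cols.append(p3(x - tj)
                    - p3(x - tkm1) * (tk - tj) / (tk - tkm1)
                    + p3(x - tk) * (tkm1 - tj) / (tk - tkm1))
    return np.column_stack(cols)

basis = rcs_basis(ses, knots)
print(basis.shape)  # (10000, 3): three nonlinear terms for five knots
```

With five knots the specification adds three nonlinear terms to the linear SES term, which is what makes the reform effect a smooth function of the continuous SES indicator.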

20 For reasons of readability, the graphs have been trimmed at the 2.5th and 97.5th percentiles of the family SES distribution.

21 More precisely, we have assumed that there is no unmodeled treatment heterogeneity when family background has been accounted for.

22 The p-values for the overidentifying test for the five columns are, in turn, 0.51, 0.87, 0.20, 0.73, and 0.69.
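The overidentification test whose p-values footnote 22 reports can be sketched in its simplest (Sargan) form on simulated data. Everything below is illustrative: the instruments, sample size, and coefficients are invented, and the paper's actual estimator and specification may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
z1, z2 = rng.normal(size=n), rng.normal(size=n)   # two valid instruments
u = rng.normal(size=n)                             # structural error
x = z1 + 0.5 * z2 + u + rng.normal(size=n)         # endogenous regressor
y = 2.0 + 1.0 * x + u                              # outcome

Z = np.column_stack([np.ones(n), z1, z2])
X = np.column_stack([np.ones(n), x])

# 2SLS: project X on Z, then regress y on the fitted values.
Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.lstsq(Xhat, y, rcond=None)[0]

# Sargan test: regress the 2SLS residuals on the instruments;
# n * R^2 is asymptotically chi-squared with
# (number of instruments - number of endogenous regressors) df.
e = y - X @ beta
ehat = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
r2 = 1.0 - np.sum((e - ehat) ** 2) / np.sum((e - e.mean()) ** 2)
stat = n * r2   # small (large p-value) when the instruments are valid
print(beta[1], stat)
```

Under valid instruments the statistic is typically small and the test fails to reject, which is the pattern the reported p-values (all well above 0.05) reflect.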

23 In both cases, we also fail to reject the overidentifying restriction for all quartile groups at the conventional levels of statistical significance.

24 Admittedly, even if we find no evidence of remaining unmodeled heterogeneity within SES groups, it can still be the case that our model identifies different LATEs across SES groups. That is, if the individuals in Q1 who are affected by the instrument (the compliers) differ from those affected by the instrument in the other quartile groups, this could help explain the group variation in the IV estimates. Nonetheless, as long as the complier characteristics that vary between groups are not causally prior to family background, the differences in IV estimates between groups retain a meaningful interpretation: they indicate how the returns to one additional year of vocational education differ across groups.

25 It is also not possible to study the electoral effects of the reform using aggregated election results because the individuals affected by the pilot scheme constitute only a very small share of the overall electorate.

REFERENCES

Angrist, Joshua D., and Pischke, Jörn-Steffen. 2008. Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton, NJ: Princeton University Press.
Åslund, Olof, Grönqvist, Hans, Hall, Caroline, and Vlachos, Jonas. 2018. “Education and Criminal Behavior: Insights from an Expansion of Upper Secondary School.” Labour Economics 52: 178–92.
Bechtel, Michael M., Hangartner, Dominik, and Schmid, Lukas. 2016. “Does Compulsory Voting Increase Support for Leftist Policy?” American Journal of Political Science 60 (3): 752–67.
Beck, Nathaniel, Katz, Jonathan N., and Tucker, Richard. 1998. “Taking Time Seriously: Time-Series-Cross-Section Analysis with a Binary Dependent Variable.” American Journal of Political Science 42 (4): 1260–88.
Berinsky, Adam J., and Lenz, Gabriel S. 2011. “Education and Political Participation: Exploring the Causal Link.” Political Behavior 33 (3): 357–73.
Blanden, Jo, and Machin, Stephen. 2004. “Educational Inequality and the Expansion of UK Higher Education.” Scottish Journal of Political Economy 51 (2): 230–49.
Brand, Jennie E., and Xie, Yu. 2010. “Who Benefits Most from College? Evidence for Negative Selection in Heterogeneous Economic Returns to Higher Education.” American Sociological Review 75 (2): 273–302.
Campbell, David E. 2008. “Voice in the Classroom: How an Open Classroom Climate Fosters Political Engagement Among Adolescents.” Political Behavior 30 (4): 437–54.
Campbell, David E., and Niemi, Richard G. 2016. “Testing Civics: State-Level Civic Education Requirements and Political Knowledge.” American Political Science Review 110 (3): 495–511.
Carneiro, Pedro, Heckman, James J., and Vytlacil, Edward J.. 2011. “Estimating Marginal Returns to Education.” American Economic Review 101 (6): 2754–81.
Cesarini, David, Johannesson, Magnus, and Oskarsson, Sven. 2014. “Pre-Birth Factors, Post-Birth Factors, and Voting: Evidence from Swedish Adoption Data.” American Political Science Review 108 (1): 71–87.
Converse, Philip. 1972. “Change in the American Electorate.” In The Human Meaning of Social Change, eds. Campbell, Angus and Converse, Philip E. New York: Russell Sage Foundation, 263–338.
de Tocqueville, Alexis. (1835) 2015. Democracy in America. Vols. I and II. Redditch: Read Books Limited.
Dieterle, Steven G., and Snell, Andy. 2016. “A Simple Diagnostic to Investigate Instrument Validity and Heterogeneous Effects When Using a Single Instrument.” Labour Economics 42: 76–86.
Dinesen, Peter T., Dawes, Christopher T., Johannesson, Magnus, Klemmensen, Robert, Magnusson, Patrik, Nørgaard, Asbjørn S., Petersen, Inge, and Oskarsson, Sven. 2016. “Estimating the Impact of Education on Political Participation.” Political Behavior 38 (3): 579–601.
Finseraas, Henning, and Vernby, Kåre. 2014. “A Mixed Blessing for the Left? Early Voting, Turnout and Election Outcomes in Norway.” Electoral Studies 33: 278–91.
Freedman, David A., and Sekhon, Jasjeet S. 2010. “Endogeneity in Probit Response Models.” Political Analysis 18 (2): 138–50.
Gidengil, Elisabeth, Wass, Hanna, and Valaste, Maria. 2016. “Political Socialization and Voting: The Parent–Child Link in Turnout.” Political Research Quarterly 69 (2): 373–83.
Grönqvist, Hans, and Hall, Caroline. 2013. “Education Policy and Early Fertility: Lessons from an Expansion of Upper Secondary Schooling.” Economics of Education Review 37: 13–33.
Hainmueller, Jens, Mummolo, Jonathan, and Xu, Yiqing. Forthcoming. “How Much Should We Trust Estimates from Multiplicative Interaction Models?” Political Analysis.
Hall, Caroline. 2012. “The Effects of Reducing Tracking in Upper Secondary School: Evidence from a Large-Scale Pilot Scheme.” Journal of Human Resources 47 (1): 237–69.
Hillygus, Sunshine D. 2005. “The Missing Link: Exploring the Relationship Between Higher Education and Political Engagement.” Political Behavior 27 (1): 25–47.
Kam, Cindy D., and Palmer, Carl L. 2008. “Reconsidering the Effects of Education on Political Participation.” The Journal of Politics 70 (3): 612–31.
Lechner, Michael. 2011. “The Estimation of Causal Effects by Difference-in-difference Methods.” Foundations and Trends in Econometrics 4 (3): 165–224.
Lijphart, Arend. 1997. “Unequal Participation: Democracy’s Unresolved Dilemma.” The American Political Science Review 91 (1): 1–14.
Lindgren, Karl-Oskar, Oskarsson, Sven, and Dawes, Christopher T. 2017. “Can Political Inequalities Be Educated Away? Evidence from a Large-Scale Reform.” American Journal of Political Science 61 (1): 222–36.
Mare, Robert D. 1980. “Social Background and School Continuation Decisions.” Journal of the American Statistical Association 75 (370): 295–305.
NCES. 2012. “Improving the Measurement of Socioeconomic Status for the National Assessment of Educational Progress.” https://nces.ed.gov/nationsreportcard/pdf/researchcenter/Socioeconomic_Factors.pdf.
Neundorf, Anja, and Smets, Kaat. 2017. “Political Socialization and the Making of Citizens.” Oxford Handbooks Online, 1–28. Published online February 2017.
Neundorf, Anja, Niemi, Richard G., and Smets, Kaat. 2016. “The Compensation Effect of Civic Education on Political Engagement: How Civics Classes Make Up for Missing Parental Socialization.” Political Behavior 38 (4): 921–49.
Nie, Norman H., Junn, Jane, and Stehlik-Barry, Kenneth. 1996. Education and Democratic Citizenship in America. Chicago: University of Chicago Press.
OECD. 2010. PISA 2009 Results: Overcoming Social Background. Paris: OECD Publishing.
Persson, Mikael. 2014. “Testing the Relationship Between Education and Political Participation Using the 1970 British Cohort Study.” Political Behavior 36 (4): 877–97.
Persson, Mikael. 2015. “Education and Political Participation.” British Journal of Political Science 45 (3): 689–703.
Persson, Mikael, and Oscarsson, Henrik. 2010. “Did the Egalitarian Reforms of the Swedish Educational System Equalise Levels of Democratic Citizenship?” Scandinavian Political Studies 33 (2): 135–63.
Putnam, Robert D. 2015. Our Kids: The American Dream in Crisis. New York: Simon & Schuster.
Schlozman, Kay L., Page, Benjamin I., Verba, Sidney, and Fiorina, Morris. 2004. “Inequalities of Political Voice.” Task Force on Inequality and American Democracy. Washington, D.C.: American Political Science Association.
Schlozman, Kay L., Verba, Sidney, and Brady, Henry E. 2012. The Unheavenly Chorus: Unequal Political Voice and the Broken Promise of American Democracy. Princeton, NJ: Princeton University Press.
Tenn, Steven. 2007. “The Effect of Education on Voter Turnout.” Political Analysis 15 (4): 446–64.
Verba, Sidney, Schlozman, Kay L., and Brady, Henry E. 1995. Voice and Equality: Civic Voluntarism in American Politics. Cambridge, MA: Harvard University Press.
Verba, Sidney, Burns, Nancy, and Schlozman, Kay Lehman. 2003. “Unequal at the Starting Line.” The American Sociologist 34 (1–2): 45–69.
Wolfinger, Raymond E., and Rosenstone, Steven J. 1980. Who Votes? New Haven: Yale University Press.