
Polling on a Budget: Implementing Telephone Surveys in Introductory and Advanced American Politics Courses

Published online by Cambridge University Press:  14 January 2011

Jonathan Williamson
Affiliation:
Lycoming College

Abstract

Research suggests that student learning is enhanced when students are engaged through active learning strategies. In studying public opinion and polling, a central challenge is providing meaningful active learning environments when resources are limited. In this article, I discuss the design and implementation of telephone surveys as a teaching tool for use in introductory American politics as well as upper-level public opinion courses. I emphasize settings with limited resources and no established survey research center. Topics include student involvement in survey design and implementation, logistics, and budgeting. Student satisfaction with participation in the survey process is also discussed.

Type
The Teacher
Copyright
Copyright © American Political Science Association 2011

Teaching undergraduates about public opinion and polling is one part theory and one part application. Student learning of an applied topic, such as polling or survey research, is seemingly best accomplished when classroom discussion of how polling works is combined with actual student participation in the polling process. Involving students in the design and implementation of survey research offers them practical experience in myriad subject areas covered in a typical course. However, the question underlying this article is: in the absence of an established (and funded) survey research center on campus, how can an instructor of a public opinion and polling course incorporate a survey research component into his or her course using only limited resources?

The limited prior literature on the use of polling in the classroom largely focuses on student participation in exit polls (Cole 2003; Evans and Lagergren 2007; Lelieveldt and Rossen 2009). It is easy to understand why. The technological challenges of conducting exit polls on a limited budget are more easily overcome than those involved in conducting telephone polling. While exit polling requires surmounting significant administrative hurdles—namely transportation—it can be executed with few more technological resources than those to which the typical faculty member has access in his or her office or in a campus computer lab. However, conducting exit polls is dependent on the election calendar and only exposes students to one less-common type of survey research.

Other survey modes also have limited use in the classroom setting. Mail surveys have well-documented and inherent limitations (Weisberg 2005). In addition, the timeline for implementing a mail survey does not generally fit within the allotted time of a semester when factoring in survey design, printing, mailing, efforts to improve response rates, data entry, and analysis. Similarly, the logistics of in-person survey design (excluding exit polling) require resources that are far beyond the reach of most class-based polling. While Internet polling is an emerging possibility, technological challenges continue to provide steep hurdles for one-off projects.

In light of these challenges, the remaining survey mode is telephone interviewing. As I will discuss later, the telephone interviewing timeline works well with the academic calendar and, most importantly, is the predominant mode used in the field. Students who are learning applied concepts of public opinion research must understand the issues related to telephone survey methodology. The question remains, however: how can a faculty member without access to an established campus call center lead students in designing and implementing a telephone survey?

Beyond research about exit polling as a teaching tool, other articles on polling in the classroom offer little in the way of suggestions on how to overcome the logistical problems associated with telephone surveys, either because the authors work at an institution with an established survey research center (Hauss 2001) or because the work's primary concern lies elsewhere (Jones and Meinhold 1999). Indeed, the sagest advice I could find on the subject came from a source (McBride 1994) that was so dated technologically as to be of limited practical use.

In each case, the authors argued that students could better learn about the electoral process (Cole 2003; Evans and Lagergren 2007; Lelieveldt and Rossen 2009), research methods (Hauss 2001; McBride 1994; Winn 1995), or civic engagement (Evans and Lagergren 2007) by participating in the survey research process. The lone dissenting study argued that participation in survey interviewing does not yield citizenship benefits, but even its authors did not question the value of student participation in polling when teaching students about survey research or, more broadly, matters of public opinion (Jones and Meinhold 1999).

Indeed, professors who use polling in their courses generally agree with a litany of scholars who suggest that learning can be enhanced through practical experience. Some simply identify their given classroom activities as “active learning,” a broad category that includes traditional in-class activities such as “instructor-led discussion” and role-playing (Powner and Allendoerfer 2008; Wilson, Pollock, and Hamann 2007), debate (Bell, Mattern, and Telin 2007; Omelicheva 2007; Omelicheva and Avdeyeva 2008; Oros 2007), and simulation (Shellman and Turan 2006). Others pursue more specific forms of active learning, such as experiential learning (Bennion 2006), community-action learning (Bell, Mattern, and Telin 2007), and service learning (Smith 2006; Van Assendelft 2008). In each case, they advocate supplementing or replacing traditional lecture-oriented, passive-learning models with active models that require students to put into practice what they are studying.

POLLING AT LYCOMING COLLEGE

I have conducted polling projects with students at Lycoming College since 2003. Most of the polling described here is carried out in conjunction with two courses I offer each fall: an upper-level course entitled “Public Opinion and Polling” and two sections of the introductory “U.S. Government and Politics” course (see note 1). These courses are scheduled each fall, so the polling that we conduct mostly pertains to elections at the congressional, state legislative, or local levels.

I, like many faculty members teaching such courses, do not have access to a computer-assisted telephone interviewing (CATI) lab for conducting surveys. In the long run, I hope to set up such a lab, but in the meantime, I have developed a process that enables my students to complete quality polling without the use of significant new-technology resources, which would require an up-front investment approaching $100,000 for the software, hardware, furniture, and infrastructure (see note 2).

Absent such an investment, the most critical need for this type of endeavor is access to a phone bank. At my institution, two phone banks existed when I began the project. One was set up in the conference room adjacent to the college's development office and was used for fundraising phone-a-thons; the other was located adjacent to the career services office and was used for periodic surveys of alumni. In the first few years of our polling, we relied on access to these rooms to conduct our polls, which worked adequately, except for two significant problems. Because these rooms were multi-purpose, we were required to break down our phones and survey materials at the completion of each night to make the room available for the next day's use. An even larger inconvenience was the challenge of reserving the rooms for five evenings at a time for each survey. The best time to call people to gauge public opinion is also the best time to call to ask for money. Keeping money rolling into the college proved a higher priority than our polling, so access to the development office's phone bank was limited. As well, the room near career services was frequently reserved in the evenings by student organizations, and there was again some reluctance to allow us to tie up the room for significant periods of time.

Fortunately, with the help of a supportive dean and the convenient timing of new construction, vacant space became available that could be allocated to us on a full-time basis. The dean's office paid the limited costs required to establish a phone board that would support 24 lines, and we had access to as many phones as needed because the college still had a stock from when it had once supplied phones in each dormitory room. While temporary opportunities may exist on most campuses for using existing facilities for calling purposes, a permanent home is helpful for logistical purposes. Even if that location is simply outfitted with tables, chairs, and telephones, such a space is enough to field a survey.

On my campus, departments are charged for phone calls on a minimal per-call basis. This procedure serves to spread the campus's telephone costs across the various campus constituents. This per-call costing model usually works well, but given the very large number of calls required to complete even a modestly sized survey, it places a large burden on the political science department's budget. Thus far, we have cobbled together a series of solutions whereby the costs are covered by the dean's budget or by funding generated from outside commissioned research projects. Eventually, our goal is to build the annual average charges into the department's telephone budget, or, preferably, to eliminate the per-call costing model altogether, as it is simply an accounting shell game.

The manpower for fielding the survey comes from the students in both my polling course and my introductory “U.S. Government and Politics” course. Enrollment for the polling course typically runs from five to 15 students, and each section of the introductory course generally enrolls 20 to 35 students. We usually arrange for 15 total hours of calling time for each survey and generally schedule these hours between 5:30 and 8:30 PM, Sunday through Thursday (see note 3). Upper-level students are required to complete eight hours of calling for each of two surveys for the semester, and completion of these hours makes up 15% of their course grades. Introductory-level students are required to complete four hours of calling in one of the two surveys, the completion of which accounts for 5% of their grade. (With two introductory-level sections, I generally assign the students from each section to one or the other survey for simplicity's sake.) Students who do not complete the assigned time receive a proportional reduction in that part of their overall grade.

In the absence of CATI capabilities, the students use old-fashioned written surveys. Prospective respondent names, phone numbers, genders (to help callers overcome the ubiquitous “Pat”), and other identifying information are printed on adhesive labels. Non-respondents are coded on these labels as refusals, non-answers, answering machines, bad numbers, and so on. When a respondent begins answering questions, the student transfers the relevant label to the written survey and begins to mark responses.
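Although the labels themselves are processed by hand, the same disposition scheme can be tallied electronically at the end of a shift. The sketch below is only illustrative; the specific codes, labels, and call log are assumptions, not the scheme used in these courses.

```python
from collections import Counter

# Illustrative disposition codes an interviewer might mark on a label;
# any real project will define its own scheme.
DISPOSITIONS = {
    "C": "completed interview",
    "R": "refusal",
    "NA": "no answer",
    "AM": "answering machine",
    "BN": "bad or disconnected number",
    "CB": "callback scheduled",
}

def summarize(call_log):
    """Tally the outcome codes recorded during a calling shift."""
    for code, n in Counter(call_log).most_common():
        print(f"{DISPOSITIONS.get(code, 'unknown code')}: {n}")

# Example: codes transcribed from one night's label sheets (hypothetical data).
summarize(["C", "R", "NA", "AM", "AM", "BN", "C", "CB", "NA", "R"])
```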

There are three major problems with printed surveys compared to computer-assisted interviewing. First, with written surveys, there is simply more opportunity for error. Because interviewers must mark down responses and those doing data entry must code those entries into statistical software (SPSS), there is added opportunity for error (in comparison to a single-step process), not to mention significant extra time expended. In addition, interviewers can introduce error by not accurately following the flow of conditional responses through the survey. Finally, student interviewers often have difficulty understanding that with closed-ended questions, only a single response option may be marked. In each of these cases, computer-assisted interviewing is superior. Even given funding realities, each of these problems can be overcome with careful attention to training and supervision.
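Several of these transcription errors can also be caught with a short validation pass before analysis. The sketch below assumes a hypothetical codebook in which Q3 is closed-ended with valid codes 1 through 4 and Q4 is asked only of respondents who answered 1 on Q3; none of these variable names come from the actual instrument.

```python
import pandas as pd

# Hypothetical codebook: Q3 is closed-ended (valid codes 1-4); Q4 is asked
# only when Q3 == 1. Variable names are illustrative.
VALID_Q3 = {1, 2, 3, 4}

def flag_entry_errors(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows with out-of-range codes or skip-pattern violations."""
    bad_range = ~df["Q3"].isin(VALID_Q3)
    bad_skip = df["Q4"].notna() & (df["Q3"] != 1)   # Q4 answered but not applicable
    return df[bad_range | bad_skip]

# Example with three hand-entered cases (hypothetical data); rows 1 and 2 are flagged.
entered = pd.DataFrame({"Q3": [1, 5, 2], "Q4": [2, None, 3]})
print(flag_entry_errors(entered))
```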

Each year, I select two trusted students from among those who have previously taken the course to serve as student supervisors. The supervisors' primary task is to oversee the call center during times I cannot be there. I am fortunate to live close to my institution, so I often get the students started in the evening, go home to my family, and return to wrap up the night's calling. During the calling shift, the student supervisor is charged with follow-up training of callers on any issues that emerge, answering any questions that interviewers or respondents might have, and generally ensuring that interviewers stay on task and do not become distracted by their peers. In addition, student supervisors are often paid to enter data as necessary.

Final costs for the survey process include printing, telephone charges, the call list, the adhesive labels, and the student supervisors' wages, as well as other miscellaneous expenses. Printing charges primarily consist of the costs of printing the survey scripts. On my campus, the printing department will print 600 copies of a four-page script for about $60. Telephone charges can run $600 to $900 per survey (the largest single cost we have encountered), but as mentioned, this is an internal campus charge; the actual marginal cost to the college is much lower. The list we purchase for registered voters or a subset of voters in a congressional district costs us $190 for 14,000 names, enough to give us a fresh sample for two surveys. While adhesive labels may sound like a miscellaneous cost, a box of 6,000 labels costs about $55. Finally, if the student supervisors total 15 hours in call center work and 10 hours in data entry work, their wages would total approximately $180 at the campus student employee pay rate of $7.25 per hour. Depending on how a campus calculates telephone charges, the remaining costs can often be borne by a department's budget. In our case, I have sought supplemental funding directly from the dean of the college and have also used money from outside commissioned research.
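Putting those line items together suggests a per-survey budget on the order of $1,100 to $1,200, most of it internal telephone charges. The tally below is my own back-of-the-envelope arithmetic using the figures above (telephone charges at the midpoint of the reported range; half of the voter-list cost attributed to each survey):

```python
# Rough per-survey budget built from the figures reported in the text.
# Telephone charges use the midpoint of the reported $600-$900 range;
# the $190 voter list covers two surveys, so half is attributed here.
costs = {
    "printing (600 four-page scripts)": 60.00,
    "telephone charges (internal, midpoint)": 750.00,
    "voter list (half of $190)": 95.00,
    "adhesive labels (box of 6,000)": 55.00,
    "student supervisor wages (25 hrs @ $7.25)": 25 * 7.25,
}

for item, dollars in costs.items():
    print(f"{item}: ${dollars:,.2f}")
print(f"approximate total per survey: ${sum(costs.values()):,.2f}")
```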

As for the process of designing and implementing the surveys, each fall, we conduct two surveys timed to the academic and election calendar. The first survey usually falls around the fifth week of the semester. For my public opinion and polling class, I schedule the first poll after we have addressed the definitions for and the sources and foundations of public opinion on a theoretical level and have begun to discuss methods for measuring public opinion. Immediately following the implementation of the first survey, we delve into the technical aspects of survey design, including topics such as sampling and questionnaire design. The first survey gives upper-level students a foundation upon which to build these more technical concepts.

Timing of the first survey around the fifth week of the semester also works well for the introductory students. By the time of the first survey, these students have dealt with a range of subjects including elections, voter turnout, political parties, and public opinion. The lessons they learn in the classroom are then reinforced with the practical experience afforded by the telephone interviews.

The timing of the second survey falls approximately one month after the first survey, usually in mid-October, about three weeks prior to election week. The gap between the two surveys allows time for the upper-level students to design the second poll. Although I write the first survey, the second survey is a collaborative effort. We begin with an assignment in which students are required to submit ten questions they have written and think should be included in the survey. The questions should be different from those in the first survey; having previously administered that instrument, the students generally have a decent idea of how the questions should read. I ask students to turn in their questions electronically and then create a file of all the questions, organized by subject area. The next two weeks of class are spent narrowing down and rewriting the questions. In a class of 15 students, we start with 150 questions and whittle the list down to 15 to 20 questions—this length of poll works well for the type of survey we usually administer, given that we always include a base set of questions from the prior survey. Although writing a survey “by committee” is generally not advisable, I have found that working with the class to determine the wording of each question, establish question ordering, and set up exhaustive and exclusive response categories conveys the multitude of concepts involved in survey design much better than traditional lecture or discussion techniques.

I also use class time for training the students in both courses. The training becomes an opportunity to discuss such topics as interviewer-introduced measurement error (Weisberg 2005) in the upper-level course or question wording (which has already been covered in more depth in the upper-level course) in the introductory course. As a class, we read through the survey script, role play, discuss appropriate procedures and expectations, and spend time helping students overcome their reservations about calling strangers.

On the issue of sampling, I initiate a discussion in the public opinion class about the strengths and weaknesses of choosing random digit dialing (RDD) samples versus registration-based sampling (RBS). I have typically chosen RBS for our polls to extend our limited resources further. With RBS, incidence rates (Green and Gerber 2006) are higher and fewer questions are needed, because the data come with not only names and phone numbers, but also location information, party registration (when available), voting history, and other demographic information. Such lists are readily available from professional organizations and are surprisingly inexpensive, especially if an academic rate is available.
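Once the list arrives, preparing it for the label sheets takes only a few lines. The sketch below assumes a hypothetical CSV layout for the purchased file of roughly 14,000 registered voters; the column names and file names are illustrative, not the vendor's actual format.

```python
import pandas as pd

# Hypothetical layout for the purchased registration-based sample;
# a real vendor file will have different column names.
voters = pd.read_csv("purchased_voter_list.csv")   # name, phone, gender, party, precinct, ...

# Shuffle so the call order is random, then split the ~14,000 records into
# two fresh waves of ~7,000 numbers, one for each of the semester's surveys.
shuffled = voters.sample(frac=1, random_state=42).reset_index(drop=True)
first_wave = shuffled.iloc[:7000]
second_wave = shuffled.iloc[7000:14000]

# Export each wave for printing onto the adhesive label sheets.
first_wave.to_csv("wave1_labels.csv", index=False)
second_wave.to_csv("wave2_labels.csv", index=False)
```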

Once the calling is complete, we spend time in both courses reviewing how the process went. Students enjoy focusing on the “war stories” of their time on the phones: how some respondents were rude and others lonely, what they thought about respondents' patterns of answers, or how many respondents they were able to interview. I encourage those conversations in class so that I can draw out several broader lessons that are relevant to the course, such as what factors affect response rates, the implications of different types of people responding at different rates, the level of sophistication or consistency in responses, and the relationship between core beliefs and opinions.
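Those debrief conversations are also a natural place to put a number on response rates. The sketch below uses a simple minimum response rate (in the spirit of AAPOR's RR1) computed from hypothetical disposition tallies; neither the formula choice nor the counts come from the courses described here.

```python
# Hypothetical disposition totals from one survey wave (illustrative numbers only).
completed    = 410
refusals     = 290
non_contacts = 1800   # no answers, answering machines, busy signals
unknown      = 350    # numbers never resolved as working or eligible
bad_numbers  = 650    # disconnected or out-of-frame; treated as ineligible

# Minimum response rate: completes divided by all potentially eligible cases.
potentially_eligible = completed + refusals + non_contacts + unknown
response_rate = completed / potentially_eligible
print(f"minimum response rate: {response_rate:.1%}")   # about 14.4% for these numbers
```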

The next step is data entry. For the first survey, my student supervisors and I enter the data as quickly as possible. Generally, we can complete data entry by the Monday following the last night of calling on Thursday. I then clean the data and establish weights in time for a discussion of the results in class on Tuesday or Wednesday, a process that is by no means quick by professional standards but is doable, even with limited resources. By the time of the second survey, I have exposed the upper-level students to topics such as data management and coding, so they participate in the data entry process during class time to reinforce those lessons.
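The text does not detail how the weights are established; as one minimal illustration of how such a step might look, simple cell weighting of the completed interviews to a known margin from the sample frame (a hypothetical gender split here) takes only a few lines:

```python
import pandas as pd

# Completed interviews after cleaning (hypothetical data and variable names).
resp = pd.DataFrame({"gender": ["F", "F", "F", "M", "M"]})

# Known population shares, e.g., taken from the registration-based sample frame.
population_share = {"F": 0.53, "M": 0.47}

# Cell weight = population share / sample share for each group.
sample_share = resp["gender"].value_counts(normalize=True)
resp["weight"] = resp["gender"].map(lambda g: population_share[g] / sample_share[g])

# Weighted estimates now match the frame's gender split.
weighted_female = resp.loc[resp["gender"] == "F", "weight"].sum() / resp["weight"].sum()
print(resp)
print(f"weighted share female: {weighted_female:.2f}")   # 0.53
```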

Finally, in both courses, we spend time analyzing the results at a level appropriate to the particular course. In the introductory course, I generally must walk through frequencies for each question and demonstrate the usefulness of cross-tabulations on key results. In the upper-level course, we spend significantly more time looking at the results. These students have a vested interest in the outcome and usually drive the analysis. Because they generally have no background in statistical software packages beyond the data entry process we have just completed, I conduct data analysis in class to demonstrate how their questions about the results can be translated into data analysis and how to interpret the results. Once again, we rarely go beyond descriptive statistics and cross-tabulations, but for some students, that process sparks an interest in pursuing additional training in quantitative analysis. When the survey results have broad public interest, I like to produce a press release, which the college's public relations office distributes to the appropriate media (see note 4). I keep the students informed about where their work appears in the media to reinforce the lesson that polling can be a powerful tool and plays a critical role in the interplay between candidates, elected officials, the public, and the media in modern American political society.
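For instructors who prefer an open-source alternative to SPSS for this in-class demonstration, the same frequencies and cross-tabulations can be produced in a few lines of pandas; the file and variable names below are hypothetical stand-ins for items on the actual instrument.

```python
import pandas as pd

# Hypothetical cleaned data set with a weight column; variable names are
# stand-ins for whatever the actual questionnaire asked.
df = pd.read_csv("survey_wave2_clean.csv")   # columns: vote_choice, party_id, weight, ...

# Unweighted frequencies for a single question.
print(df["vote_choice"].value_counts(normalize=True).round(3))

# Weighted cross-tabulation of vote choice by party registration (row proportions).
xtab = pd.crosstab(df["party_id"], df["vote_choice"],
                   values=df["weight"], aggfunc="sum")
print(xtab.div(xtab.sum(axis=1), axis=0).round(3))
```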

STUDENT SATISFACTION

What do the students think of their polling experience? At the end of the fall 2008 semester, both groups of students were given short surveys that asked them to reflect on the polling section of the course. In addition to asking students what aspects of the polling and public opinion section of the course were especially good and what they would change, students were asked to rate the polling in terms of whether it enhanced their understanding of public opinion, their understanding of the 2008 election, and their interest in the election. They were also asked whether they enjoyed participating in the polling and whether polling should be a component of the course in the future.

Among the introductory students, the majority of responses to the question about what they saw as particularly good about the polling section of the course fell into three broad categories. The most common response indicated an appreciation for the opportunity to talk to respondents and hear the range of opinions existing on the polled topics. A typical response was, “I enjoyed understanding the differences between people and having the ability to learn about various people's beliefs” or “[I enjoyed] interacting with the community about politics.” Other responses indicated that the polling process helped them to understand the election more broadly: “The section helps supplement understanding of the whole process, especially during election time.” A few indicated that they enjoyed the entire process: “The polling was a lot of fun and the results were interesting to analyze afterwards.” A few students did not like the polling at all (“[I enjoyed] nothing”; “[I] didn't really enjoy it”), while others pointed out less academic but still positive impressions of the involvement: “[It was] fun listening to rude people” and “[It was good to] learn to deal with crazy people on the phone. Also to know how it feels to be yelled at, and to not yell at telemarketers.”

Students in the public opinion course reiterated what the introductory students said about finding the process interesting and their enjoyment of learning directly from the public about the range of opinions that exist. Because the upper-level students were also involved in the design and analysis of the survey, they also had comments about those areas. Typical responses included, “I liked the analysis of the polls. It was interesting to look at people's opinions and really look at the crosstabs,” and “[I enjoyed] the making of the second poll and the people actually answer[ing] the polls we wrote.”

When asked what changes could be made in the polling section of each course, students' most common response, interestingly, was “nothing.” Both groups of students groused about the number of calling hours required. Among the introductory students, other less-common suggestions included eliminating polling in lieu of some other subject or activity, providing more time to analyze the data, and offering different questions or changed wording on the survey. Students also expressed concerns about the level of cooperation from prospective respondents. Upper-level students frequently expressed a desire to do more polling work and less theoretical work, or to make changes in particular procedural areas.

Table 1 indicates the results of the quantitative questions. Generally, both groups were positive about the experience, with the upper-level students predictably more consistently enthusiastic than the introductory students. More specifically, majorities of each class gave high ratings to the polling for its ability to enhance their understanding of public opinion, their understanding of the election, and their interest in the election. Both classes were more evenly divided on whether students enjoyed the polling section of the course, but an overwhelming majority of students indicated that the polling should continue to be a part of the course in the future.

Table 1 Student Survey Responses, by Percent

Note. For Introduction class, n = 40; for public opinion class, n = 7. For survey responses, 1 = low, 5 = high.

CONCLUSION

Having taught each of these courses without this active learning component, I can attest to the difference it makes. Students exposed to the polling process not only gain a much better grasp of the concepts underlying it, but they also develop a better feel for public opinion from participating in survey research. When I discuss ideas like ideological inconsistency in public opinion, they now have an opportunity to connect that concept to “that little old lady that wanted to talk my ear off, but just didn't get it.” In sum, I have found that students required to participate in the survey research process have (1) a better understanding of the polling process, (2) a better understanding of how the opinions of a limited number of respondents can reflect the opinions of a related population, and (3) a richer understanding of the role of public opinion in the American political system.

Resources are tight at every institution. I have demonstrated here that it is possible to translate what a professional or otherwise well-funded organization would be able to accomplish into an academic setting on a shoestring budget. Doing so not only improves student learning and engagement but also carries broader benefits for the institution. To the degree possible, it is a good idea to choose issues or campaigns that might garner media attention for the institution through press releases or interviews. The appeal of that institutional attention just may be the argument that is required to secure the shoestring budget in the first place.

Footnotes

1 In addition to this class-based polling, I have periodically conducted polls as part of commissioned projects through the college's Center for the Study of Community and the Economy, which I founded with a colleague from the business department. We complete commissioned research projects on behalf of local governments or nonprofit agencies devoted to community and economic development. When this research requires the use of telephone surveys, we use the polling facilities described in this article. Most of these projects include hiring students, who gain further research experience, and the funding from these projects helps to supplement the limited funding I receive to conduct the class-based polling.

2 While open-source software packages such as queXS are available as a substitute for otherwise expensive closed-source options, the costs of hardware and other infrastructure have kept the call center out of budgetary reach. Use of an established campus computer lab for a call center is also impractical, because the lab would be unavailable for other uses for extended periods of time. While online survey tools, including open-source options like LimeSurvey or low-cost options like SurveyMonkey, could serve as a substitute for some parts of the learning experience, students benefit greatly from directly interacting with respondents by telephone.

3 Ideally, for the purposes of sampling, a survey organization would staff the call center throughout the day and evening to facilitate connection with a broad array of respondents who might not be available during the traditional evening calling hours. However, given our limited resources and the restrictions on how much time callers can be supervised, we only call during the evening hours. Many professional survey organizations also deviate from the ideal with time-sensitive polling.

4 We have benefited from two sequential, highly contentious races for the U.S. House of Representatives for which our poll was one of only two publicly available surveys. Students were enthused by the media attention that our results received not only locally, but also nationally.

References

Bell, Sarah, Mattern, Mark, and Telin, Mike. 2007. “Community-Action Learning.” Journal of Political Science Education 3 (1): 61–78.
Bennion, Elizabeth A. 2006. “Civic Education and Citizen Engagement: Mobilizing Voters as a Required Field Experiment.” Journal of Political Science Education 2 (2): 205–27.
Cole, Alexandra. 2003. “To Survey or Not to Survey: The Use of Exit Polling as a Teaching Tool.” PS: Political Science & Politics 36 (2): 245–52.
Evans, Jocelyn, and Lagergren, Olivia. 2007. “See You at the Polls: Exit Polling as a Tool for Teaching Research Methods and Promoting Civic Engagement.” Paper presented at the APSA Teaching and Learning Conference, Charlotte, NC, February 9–11.
Green, Donald P., and Gerber, Alan S. 2006. “Can Registration-Based Sampling Improve the Accuracy of Midterm Election Forecasts?” Public Opinion Quarterly 70 (2): 197–223.
Hauss, Charles. 2001. “Freshman Conduct a National Survey.” PS: Political Science & Politics 34 (2): 306–07.
Jones, Lloyd P., and Meinhold, Stephen S. 1999. “The Secondary Consequences of Conducting Polls in Political Science Classes: A Quasi-Experimental Test.” PS: Political Science & Politics 32 (3): 603–06.
Lelieveldt, Herman, and Rossen, Gregor. 2009. “Why Exit Polls Make Good Teaching Tools.” European Political Science 8 (1): 113–22.
McBride, Allan. 1994. “Teaching Research Methods Using Appropriate Technology.” PS: Political Science & Politics 27 (3): 553–57.
Omelicheva, Mariya Y. 2007. “Resolved: Academic Debate Should Be a Part of Political Science Curricula.” Journal of Political Science Education 3 (2): 161–75.
Omelicheva, Mariya Y., and Avdeyeva, Olga. 2008. “Teaching with Lectures or Debate? Testing the Effectiveness of Traditional versus Active Learning Methods of Instruction.” PS: Political Science & Politics 41 (3): 603–07.
Oros, Andrew L. 2007. “Let's Debate: Active Learning Encourages Student Participation and Critical Thinking.” Journal of Political Science Education 3 (3): 293–311.
Powner, Leanne C., and Allendoerfer, Michelle G. 2008. “Evaluating Hypotheses about Active Learning.” International Studies Perspectives 9 (1): 75–89.
Shellman, Stephen M., and Turan, Kursad. 2006. “Do Simulations Enhance Student Learning? An Empirical Evaluation of an IR Simulation.” Journal of Political Science Education 2 (1): 19–32.
Smith, Elizabeth S. 2006. “Learning about Power through Service: Qualitative and Quantitative Assessments of a Service-Learning Approach to American Government.” Journal of Political Science Education 2 (2): 146–70.
Van Assendelft, Laura. 2008. “‘City Council Meetings Are Cool’: Increasing Student Civic Engagement through Service Learning.” Journal of Political Science Education 4 (1): 86–97.
Weisberg, Herbert F. 2005. The Total Survey Error Approach: A Guide to the New Science of Survey Research. Chicago: University of Chicago Press.
Wilson, Bruce M., Pollock, Philip H., and Hamann, Kerstin. 2007. “Does Active Learning Enhance Learner Outcomes? Evidence from Discussion Participation in Online Classes.” Journal of Political Science Education 3 (2): 131–42.
Winn, Sandra. 1995. “Learning by Doing: Teaching Research Methods through Student Participation in a Commissioned Research Project.” Studies in Higher Education 20 (2): 203–14.