
Peer Evaluation in the Political Science Classroom

Published online by Cambridge University Press:  18 October 2011

Michael Baranowski, Northern Kentucky University
Kimberly Weir, Northern Kentucky University

Abstract

When students give presentations in the classroom, instructors are likely to be concerned not only with what students learn from giving a presentation, but also with the extent to which the student audience pays attention and learns from the experience. After devising a peer-evaluation instrument for students to evaluate their classmates' presentations, we surveyed students over the course of two years and 10 courses to determine the effectiveness and usefulness of peer evaluations. We found that students are more likely to pay attention, gain a different perspective on the presentation experience, and be more engaged in presentations when they evaluate one another.

Type: The Teacher
Copyright © American Political Science Association 2011

Why have students do presentations? What is the pedagogical objective? Some instructors would say that presentations help students learn to explain concepts concisely before an audience, a skill that may be useful after they leave school. Beyond this, preparing and delivering a presentation may also help students consider their work anew or, for members of a group, gain a more holistic view of their project. These goals are worthwhile; so worthwhile, in fact, that we have chosen not to question the potential value of presentations to the presenter. Instead, our focus is on the value of presentations to the student audience.

In assigning student presentations, instructors have likely witnessed some poor soul at the front of the class vainly attempting to engage a semi-comatose student audience. After seeing this more times than we care to admit, we decided that something more than encouraging students to “pay attention” might be helpful. We tried various approaches. Initially, we rewarded students who asked questions of presenters in a post-presentation Q & A with participation credit, but this seemed ineffective: the bulk of the questions came, as always, from the top few students, who are likely to pay attention in the first place. Another shortcoming of the Q & A session was that many of the questions were pedestrian, indicating that students had not really been listening and were asking whatever question happened to pop into their heads just to score participation points.

What we needed, we thought, was some way to help focus the students' attention throughout the bulk of the presentations and to get them to think about the material being presented in a systematic way. Of course, as instructors, we already did this, because we had to evaluate presentations. This was our “aha moment”—if evaluating presentations kept us focused, this task might keep our students focused, too.

After this discovery, the next step was to create an evaluation instrument. We felt that giving students a systematic way to evaluate presentations was critical, as students generally have little experience in critiquing others' work; expecting them to evaluate live presentations without direction would be unrealistic. The evaluation form we developed, which also appears as part of the course syllabus, is included in appendix A. Thus, students know from the start of the semester how their presentations will be evaluated.

The criteria listed in the evaluation form correspond with the criteria we use to grade presentations. We hoped that by using this approach, student evaluators would not only remain focused while grading the work of others but would also develop a clear understanding of how their own work would be evaluated. Unfortunately, the first student presenters do not have the advantage of having evaluated others' work before giving their own presentations. Allowing students to choose when to schedule their presentations levels the playing field: those who present first are evaluated a bit more leniently, while those who present later have the advantage of having seen and evaluated other presentations and of having more time to prepare their own.

Providing an incentive for students to carefully evaluate their peers' presentations is important, so we make the evaluations part of an overall course assignment grade. In the peer evaluation included in appendix A, the evaluation is worth 10 assignment points, although this number varies depending on the grading structure of the class. As a general rule, we structure the evaluations to constitute 3% to 5% of the total course grade, an amount we believe is just enough to make students take the assignment seriously. Basically, as long as students attend and evaluate all of the presentations, they earn full credit for the assignment.

We also considered using peer evaluations as part of the students' presentation grade, but we decided against this for several reasons. First, and perhaps most important, we had never done this before. Although we regularly try new things that affect student grades, a new assignment involving peer grading should, we felt, be approached with caution. Second, the research on peer evaluations indicated that there might be some systematic biases, especially with untrained evaluators (see, for example, Omelicheva 2005; Sellnow and Treinen 2004; Campbell et al. 2001). Finally, we sensed that many students might take issue (and perhaps rightly so) with peer assessment being a part of their grade.

FINDINGS

We conducted peer evaluations of student presentations in ten 300-level political science classes over four semesters (spring 2007, fall 2007, spring 2008, and fall 2008). In each class, students completed a presentation evaluation survey after the final student presentation of the semester. We received 193 evaluation surveys. The instrument, along with summary data for each question, is included in appendix B.

Given that this analysis included multiple classes, we first determined whether there were any significant differences in responses between classes. A one-way analysis of variance indicated no significant differences between mean responses (at the .05 level of significance).
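To make the check concrete, here is a minimal sketch of a one-way ANOVA in Python. The per-class response vectors are hypothetical stand-ins for our survey data (we assume the Likert coding runs 1 = strongly agree through 5 = strongly disagree), and scipy's f_oneway is simply one reasonable implementation.

```python
# Minimal sketch of the between-class check: one-way ANOVA on
# hypothetical Likert responses (1 = strongly agree ... 5 = strongly disagree).
from scipy import stats

# Hypothetical response vectors for three of the ten classes.
class_a = [2, 1, 3, 2, 2, 4, 1, 2]
class_b = [2, 3, 2, 1, 2, 2, 3]
class_c = [3, 2, 2, 2, 1, 3, 2, 2]

f_stat, p_value = stats.f_oneway(class_a, class_b, class_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# At the .05 level, p >= .05 means the class means do not differ
# significantly, so responses can be pooled across classes.
if p_value >= 0.05:
    print("No significant differences between class means; pooling responses.")
```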

Question two on the survey is the most direct measure of student engagement, and the result—73% of students agreed that completing evaluations made them pay more attention—was very encouraging. Although 27% reported no gain in attention due to peer evaluation, we suspect that the better students would likely have paid reasonably close attention anyway. To get a better sense of whether this may be the case, we examined a subset of the data for which we had complete instructor and student grades (N = 63). As we expected, instructor and student grades were highly correlated (r = .740). Comparing the two, we found that student-assigned grades were only slightly higher than instructor-assigned grades (instructor mean/median = 84.3/85; student mean/median = 87.5/88.2). The data also indicate less variability in student-assigned grades than in instructor-assigned grades (standard deviation: instructor = 7.27; student = 5.3). Although the difference between instructor and student grades is not substantively large, it is statistically significant (p = .001).
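This grade comparison can be reproduced with standard tools. The sketch below is illustrative only: the paired grade arrays are simulated stand-ins for our data, and a paired-samples t-test is assumed as the significance test (a natural choice for paired instructor/student grades).

```python
# Sketch of the instructor/student grade comparison on simulated,
# hypothetical paired data (one instructor grade and one averaged
# peer grade per presentation).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
instructor = rng.normal(84.3, 7.3, size=63)  # hypothetical instructor grades
# Peers track the instructor but grade higher, with less spread.
student = 0.6 * instructor + rng.normal(37.0, 3.0, size=63)

r, _ = stats.pearsonr(instructor, student)   # correlation between the two sets of grades
t, p = stats.ttest_rel(instructor, student)  # paired-samples t-test on the mean difference
print(f"r = {r:.3f}")
print(f"instructor mean = {instructor.mean():.1f}, student mean = {student.mean():.1f}")
print(f"paired t = {t:.2f}, p = {p:.4f}")
```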

We also asked students whether they felt they got “a different perspective” on presentations from the evaluation process (question three). A majority of students (59.1%) responded that it did: rather than just sitting through presentations without any objective or direction as audience members, they gained insight into the presentation process. In future versions of the survey, it would be worthwhile to include a box beneath this question in which students could explain what different perspective they gained.¹

Question four examines the extent to which evaluating the presentations of others helped students understand presentation expectations. We had initially thought to phrase the question, “Grading other student presentations helped me better prepare my own presentation,” but realized that this would not apply to first-day presenters. Again, a sizable majority of students responded affirmatively, with 73.6% agreeing that the evaluation process gave them a better understanding of expectations.

Although we chose not to make peer evaluations part of the grading process, we were curious to know what students would think about this idea. As the responses to question five indicate, students were not at all interested in their peers' evaluations determining their grades. Even when put in a fairly mild form (“student critiques should be considered by the instructor” as opposed to “student critiques should be part of the presentation grade”), only 46.1% of students agreed, and this item had the lowest level of strong agreement (9.8%) on the survey. A sizable 28% were neutral, possibly due to ambiguity concerning what instructor “consideration” of peer evaluations entails. Ironically, because student-assigned grades ran significantly higher than instructor-assigned grades, students would actually benefit from peer evaluations counting toward their grades.

We suspected that students would want to know how their peers evaluated them, a suspicion supported by responses to question six. More than 80% of students thought that this feedback would be useful to them, and only 5% did not see any value in it. We nevertheless did not provide this feedback, for several reasons. First, logistics: student presentations were the finale of our courses, and we could not determine a simple and secure method of getting the feedback to students afterward. Second, after reviewing the comment sheets, our admittedly subjective judgment was that the peer comments would not be very helpful to students (especially considering the effort involved in compiling the data and ensuring the anonymity of the evaluators), however much interest students might have in seeing them. Finally, given the strong correlation between the grades students assigned one another and the grades the instructor assigned, the peer-assigned grades would have told students little that the instructor's grade did not.

Question seven asks students to assess the peer evaluation as a learning experience. As instructors, we naturally seek out ways to improve student learning. With 63.3% of students indicating that the peer evaluation was worthwhile, peer evaluations appear to be a useful exercise for increasing students' attention during presentations. Although we would have liked this percentage to be higher, we suspect that many students were uncomfortable judging and being judged by their peers, which dampened their enthusiasm for the project. (This is a common theme in the responses to the open-ended comment question that concluded the survey.) Note that many students did not have an opinion either way: 29.5% were neutral.

Our final question gave students the opportunity to make any comments about the whole evaluation process. Responding was not mandatory, and considering that the survey was the final activity of the semester, we were fairly pleased that nearly 33% of the students commented. The full text of all comments is included in appendix C.

Many of the students' comments confirm findings already discussed, especially their belief that the exercise caused them to pay more attention during presentations. A few students expressed concern with the fairness of peer evaluations, although they were well aware that these evaluations would not be part of their presentation grade. This concern reinforces our reluctance to incorporate student evaluations into the formal grading process.

CONCLUSION

The data we gathered, covering multiple courses, multiple professors, and multiple semesters, largely support the use of peer evaluations of student presentations. Having used student presentations as a pedagogical tool, we were concerned that learning began with the student preparing the presentation and ended with the student delivering it. Peer evaluation, however, provides a simple and effective way of increasing student involvement in others' presentations.

APPENDIX A: Presentation Evaluation Sheet

Presentation Evaluations

When evaluating presentations, consider the presenter's:

  • command of the material/knowledge of the subject

  • clarity in presenting material

  • ability to engage the audience

  • handling of questions

  • ability to stay within the allotted timeframe (10 minutes)

Comments and an overall score are required for each presentation you evaluate.

Evaluations will count as a 10-point assignment, with points deducted for each presentation evaluation you miss or don't fully complete, not counting your own. For instance, if there are 18 presentations and you fully evaluate 15 of them, your grade will be 15/18, which translates into an 8.4 assignment grade (really it's 8.33, but I'm rounding up because I'm generous).
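(For the curious, the arithmetic above reduces to a one-liner; this sketch and its function name are ours, not part of the syllabus.)

```python
# Sketch of the evaluation-score arithmetic: completed evaluations over
# total presentations (excluding your own), scaled to a 10-point assignment.
def evaluation_score(completed: int, total: int, points: float = 10.0) -> float:
    return round(points * completed / total, 2)

print(evaluation_score(15, 18))  # 8.33, rounded up to 8.4 in the syllabus example
```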

Note: As you probably guessed, this isn't the whole form—we added as many rows as needed to accommodate the number of student presenters.

APPENDIX B: Peer Presentation Evaluation Assignment Survey (with summary data)

  1. Course in which you evaluated presentations

  2. Because I was required to evaluate presentations, I probably paid more attention to student presentations than I would have otherwise.

    mean = 2.21 stdev = .957 n = 193

  3. Evaluating other student presentations gave me a different perspective on presentations.

    mean = 2.42 stdev = .933 n = 193

  4. Grading other student presentations helped me better understand what's expected in presentations.

    mean = 2.16 stdev = .935 n = 193

  5. Student critiques should be considered by an instructor when assigning a student's presentation grade.

    mean = 2.76 stdev = 1.096 n = 191

  6. Receiving multiple critiques of my presentation from other students would be useful to me in improving my future presentations.

    mean = 1.98 stdev = .763 n = 193

  7. The presentation critique was a useful learning experience.

    mean = 2.31 stdev = .887 n = 193

  8. Please use the space below for any comments you have about the peer presentation evaluation assignment. (see appendix C for complete list of comments)

APPENDIX C: Student Comments

  1. It's more nerve wracking to know peers are grading you.

  2. It forced me to pay critical attention and I would be interested to see my peer evaluations.

  3. I was first so I didn't learn from other presentations.

  4. The critique could have allowed for more feedback on the actual information.

  5. I enjoyed the papers and presentations over a exam!

  6. It helped to see how others went about their presentation to know what exactly was expected of me. I made a PowerPoint because of seeing others.

  7. Overall good idea, I paid a lot more attention because of them.

  8. I wish you would have made clear what you felt were the most important criteria or if we should treat all five equally. I also wasn't sure if I was just trying to grade the presentation, the info, or also their effort and paper content. I was really stressed by doing this. Some people worked really hard on their papers but their presentation style wasn't great. I felt torn. I know how hard they worked. I don't want to give them a bad grade. In the future, I would ask for students' comments, but not ask them to give a grade.

  9. It made it (grading) a little tougher, but it's okay. It's good exercise for learning to grade/evaluate fairly work that is opposite of my own opinion.

  10. Students are more cruel than teachers.

  11. Sweet.

  12. Made me pay attention more.

  13. I think students tend to highball peer evaluations; in other cases, students may pay less attention than they should to the presentation and give an inaccurate grade. Because of this I don't think professors should use student evaluations to determine grades.

  14. I do think individual evaluations help us to pay closer attention, but it also may make us feel more inclined to criticize and nitpick a presentation instead of simply listen and enjoy.

  15. I feel that some people may judge the presentations based on their preconceived notions about the presenter. Based on this, I hope the student critiques will not be used in the presenter's overall grade.

  16. Nervousness in the first two minutes should not count. Good exercise; should not do on final day.

  17. I think it's unfair that student's get to critique presentations because it should be solely the teacher to critique us since they're the ones giving the students the grade.

  18. Public speaking is hard to do, it is even hard to judge, so, overall, I think peer assessment should speak to organization, clarity in presentation, and understanding of the topic.

  19. Peer evaluation helps students learn and evaluate better.

  20. I enjoyed researching and getting a better understanding of my topic.

  21. I liked the format of the evaluation sheet, it allowed me to make notes to myself, it made it much easier to decide on the overall score.

  22. Sometimes it's really useful, but students don't want to contribute to their friends' bad grade so they don't always honestly critically evaluate.

  23. As to #5, there is always karma to keep in mind, and the hope for reciprocity that has the potential to inflate scores.

  24. I think that the presentation attendance should be split. Only the people presenting that day should have to be present.

  25. I was here for more presentations because of evaluations but I don't know if I paid more attention.

  26. The instructor should use student comments as a guide, but should not base the final grade on them.

  27. I like to be in the audience …

  28. Puts additional incentive to do well.

  29. Getting peer comments back would be helpful.

  30. I think it's very useful.

  31. I think that it was a good exercise to make everyone pay attention and give presenters the respect of listening.

  32. More complete evaluation standards would be useful to eliminate vagaries in the numerical grading process.

  33. Although I value evaluating presentations and having others evaluate mine, at times it is difficult to pay attention to the content of the presentation versus the presentation style/performance.

  34. I thought the idea for criticizing presentations was great. It allowed personal gain for me in seeing what was good and what was not. I believe students should get a say in the grade. Who knows the effectiveness of the presentation more than the audience?

  35. I would rather have the professor give the grade rather than students, for the students do not really understand what is always going on in the presentation.

  36. I think it really helps to make students pay attention more and also helps those giving the presentations to know what interests their peers and makes for an interesting presentation.

  37. I thought the peer review was great because it gave you a good perspective on what makes a good presentation and what people think about it.

  38. When you have to grade people, you do pay more attention.

  39. Student critiques can be biased and are subjective and should not be used to lower a student's grade.

  40. Tell people what to look for if considered in grade.

  41. Great time evaluating, critiquing, presenting.

  42. Peer critiques would be better if outline was provided for each category.

  43. I felt more stressed knowing that my peers were evaluating me. It overshadowed my PowerPoint.

  44. This is a good way to encourage participation. However, I don't want other students determining my grade. Only the instructor.

  45. Comments from other students would be helpful to know.

  46. I hope that the student evaluations were greatly considered with the grading of the presentations.

  47. The peer evaluations helped, but it would be important for the instructor to take into account what the students have said.

  48. It's hard to give someone a bad grade. So even if they were really bad I couldn't give them a low grade. Sometimes the grade given does not accurately represent grade earned because of that.

  49. The student critiques would be great to improve presentations but should be of little to no consideration when the grade is being given by the professor.

  50. Student critiques should be considered by the instructor but should not be given a significant amount of weight. Critiques will be helpful as far as knowing how to improve future presentations.

  51. I do think presenting and evaluating is an important tool needed for students. This is something that can prepare you for numerous things in the future.

  52. I could better answer question six after getting critiques back.

  53. Great way to interact and get valuable feedback.

  54. At first I thought it would be tedious, but I gained a lot of insight from the assignment.

  55. It was useless really. I graded poorly if bored and it made me aggravated.

  56. I don't think people, for the majority, write useful comments.

  57. I thought it was very helpful.

  58. Seriously, who are some of these people today? Are they even in our class?

  59. My brain has run out of senseless survey comments, my apologies.

Footnotes

1 Actually, it might be a good idea to include such a comment box after all of the evaluation scale questions, especially if we move the instrument online, which we are considering.

References

Campbell, Kim Sydow, Mothersbaugh, David L., Brammer, Charlotte, and Taylor, Timothy. 2001. “Peer versus Self Assessment of Oral Business Presentation Performance.” Business Communication Quarterly 64 (3): 23–42.
Omelicheva, Mariya Y. 2005. “Self and Peer Evaluation in Undergraduate Education: Structuring Conditions That Maximize Its Promises and Minimize the Perils.” Journal of Political Science Education 1 (2): 191–205.
Sellnow, Deanna D., and Treinen, Kristen P. 2004. “The Role of Gender in Perceived Speaker Competence: An Analysis of Student Peer Critiques.” Communication Education 53 (3): 286–96.