How do I make the most of student evaluations?
Prepared by Micah Meixner Logan
Teaching, Learning, and Professional Development Center
Instructors often struggle with how to handle and respond to student evaluations or student ratings of instruction. What should I make of that one seemingly random comment? Will students punish me for a difficult course? Are student evaluations reliable? Do they even matter?
These are just a few of many questions that can run through your head as you prepare to read student evaluations. While effectively understanding and dealing with student feedback is not always an easy task, this paper attempts to provide some practical strategies for making the student evaluation process more meaningful and beneficial to both you and your students.
Common Concerns or Misconceptions
One of the most common concerns about the student evaluation process is whether student evaluations are reliable and valid measures for evaluating teaching. Despite these concerns, the literature on the student evaluation process predominantly finds that well-designed and well-implemented student evaluations are in fact reliable and valid measures of teaching effectiveness (Gallagher, 2000; Ory, 2001; Theall and Franklin, 2001). Issues at the heart of this debate include the difficulty of quantifying teaching effectiveness, the impact of factors unrelated to the teacher on student ratings (such as motivation, course level, class size, and grades), and whether students are qualified to assess teaching effectiveness or accurately judge their own learning. The majority of these issues are addressed in Theall and Franklin's 2001 article "Looking for Bias in All the Wrong Places: A Search for Truth or a Witch Hunt in Student Ratings of Instruction?", in which the authors break down common misconceptions and discuss the significance of the field of student evaluation studies.
Another common source of apprehension is the purpose of student feedback and its potential impact on performance reviews. Student evaluations can be either formative (used by the instructor for personal improvement and/or development) or summative (used by superiors for job evaluation or personnel decisions), and many instructors are troubled by the thought that student evaluations could be the only means by which they are evaluated. Fortunately, universities typically draw on a variety of sources and types of data to evaluate teaching, most frequently relying on the input of students, colleagues, and administrators, as well as the teachers themselves (Paulsen, 2002). Given the implications of summative evaluation, and given the ongoing discussion about which measures should be included in the summative assessment of teaching effectiveness, it is essential that instructors make themselves aware of any departmental and university policies related to teaching evaluation. Franklin (2001) suggests supplementing evaluative data with an explanatory narrative: “Submitting a well-written, well-reasoned narrative discussing your students' evaluations of your teaching (that is, ratings) is an opportunity to improve the odds that your reviewers will consider your students' opinions in the full context of the complex factors that shaped them” (p. 85). Such a narrative offers faculty members the opportunity to provide administrators with additional insight and reflection on elements such as the changes that have been or will be made in response to student feedback, thereby demonstrating an interest in achieving and maintaining optimal teaching evaluations.
For additional reading on the reliability and validity of student evaluations, please take a look at the following articles:
Greenwald, A. G. (1997). Validity concerns and usefulness of student ratings of instruction. American Psychologist, 52(11), 1182-1186.
Theall, M., & Franklin, J. (2001). Looking for bias in all the wrong places: A search for truth or a witch hunt in student ratings of instruction? New Directions for Institutional Research, 109, 45-56.
How to Approach or Administer Evaluations
For additional information on different forms of mid-semester student evaluations, see Lewis, K. G. (2001b). Using Midsemester Student Feedback and Responding to It. New Directions for Teaching and Learning, 87, 33-44.
Keeping in mind that student evaluations will most likely continue to be used for summative purposes, it is in the best interest of instructors at all levels to optimize the formative purpose of student evaluations, embracing student feedback as a tool rather than as a punishment or necessary evil. One way to avoid unexpected comments on end-of-semester evaluations is to collect feedback throughout the semester and address students' complaints or concerns before it is too late to make changes. Mid-semester evaluations also allow instructors to demonstrate to students that their opinions are valued, thereby encouraging students to be more thoughtful and constructive in their feedback. As Svinicki (2001) notes, “students are not inclined to give extensive feedback because they believe it will have no effect on the ultimate target of teaching” (p. 18). Another benefit of gathering feedback throughout the semester is that giving a productive and constructive evaluation is a learned skill unfamiliar to many students. By engaging students in the evaluative process before the end-of-semester evaluations that are more likely to be publicized or used for summative purposes, instructors are in a better position to effect positive change or demonstrate improvements in their teaching (Spence & Lenze, 2001; Svinicki, 2001).
There are many ways to collect student feedback, ranging from informal personal conversations to formalized, directed surveys. By administering personal teaching assessments throughout the semester, instructors can tailor the evaluation process to the needs of specific classes and strengthen the overall process of their teaching evaluation (Lewis, 2001b; Light, Cox, & Calkins, 2009). One way to collect informal feedback without taking too much class time is to use classroom assessment techniques such as the Muddiest Point or the one-minute paper, asking students to quickly write down what is helping (or not helping) them learn and what suggestions they have for improving the class (Lewis, 2001b). Student anonymity and the way the feedback process is approached also matter in the administration of mid-semester evaluations: if students perceive that their grades will be threatened by giving honest and potentially negative feedback, they are less likely to participate. As Lewis (2001b) says, “Prepare your students…let them know what you are going to do and why you are asking for information. Their responses need to be as anonymous as possible, and you need to assure them that this is only to help you improve the learning environment” (p. 38).
The manner in which you prepare students for and administer end-of-semester evaluations can also impact student feedback. To obtain more thoughtful responses to open-ended questions, Svinicki (2001) recommends telling students in advance what they will be asked to comment on so that they have time to think about the question and form an opinion.
It is very difficult to come up with coherent, thoughtful feedback with only five minutes' notice. Students will be able to provide much better information if the instructor tells them before class that he or she will be asking for their input at the following session. While it is naïve to think that all the students will take the opportunity to ruminate over their responses during that time, it is reasonable to think that enough of them will to make it worthwhile. Certainly nothing is lost as a result. (Svinicki, 2001, p. 22)
Therefore, by providing students with prompts to comment on, both for mid-semester and end-of-semester evaluations, instructors can help personalize what would otherwise be generic evaluations and increase the likelihood of receiving useful written feedback. Other suggestions for end-of-semester evaluations (Centra, 1993; Center for Teaching and Learning, 1994; Franklin, 2001; Svinicki, 2001) include:
- Giving students detailed instructions (including the assurance that feedback will not be reviewed until after grades have been entered),
- Appointing a proctor to oversee the evaluation so that the instructor is not present at the time of the evaluation,
- Allotting sufficient time for students to thoughtfully complete scaled and open-ended questions, and
- Administering the evaluation in advance of final examinations.
Throughout this process it is essential for instructors to sincerely communicate that students' comments are valued and will be considered in the future.
Understanding and Responding to Student Evaluations
For more suggestions on categorizing student feedback, take a look at Lewis, K. G. (2001a). Making sense of student written comments. New Directions for Teaching and Learning, 87, 25-32.
Perhaps the most difficult aspect of the student evaluation process is understanding and responding to students' written comments. As Lewis (2001a) notes, “They do not come to the instructor piled into a neat package that summarizes the positive and negative comments. Instead, they are usually read straight through from the top of the stack to the bottom, so they seem to be a series of random, unconnected statements about the teaching and the teacher. Under these circumstances, it is difficult for the human mind to make sense of the information” (p. 25). Sifting through responses that frequently range from “This is the best teacher I've ever had” to “This was the most boring class I've ever taken” is an arduous task, especially given the personal nature of the comments. Lewis (2001a) suggests approaching students' written feedback as one would approach qualitative data from a research study. The first step in this process is to sort comments into categories such as strengths/weaknesses, teaching components (organization, instructor/student interaction, lecture, PowerPoint, grading, etc.), or, if the data has been reported in a manner that lists student comments alongside the corresponding course ratings, by course rating (Groccia, 1997; Lewis, 2001a). By sorting student comments, instructors are better able to identify patterns and commonalities, giving the feedback a more structured and digestible form. Sorting can also help instructors avoid getting stuck on a single comment and can put seemingly contradictory comments into perspective. After considering students' feedback, it is up to the instructor to determine whether to make changes to their teaching or organizational practices.
Another way to make the most of student evaluations and consider potential changes is through the services of faculty development consultants. Universities across the country feature centers for teaching and learning designed to assist faculty with teaching-related issues such as the interpretation of student evaluations and strategies for effective teaching. As Fresko and Nasser (2001) discuss, even when faculty are able to understand students' feedback and identify problems that need to be addressed, it is often more difficult to know how to target a problem and make the necessary changes. Working with a faculty developer to identify trends in feedback and possible modifications can result in improvements in teaching evaluations as well as teacher satisfaction. The Teaching, Learning, and Professional Development Center (TLPD) offers graduate students, instructors, and faculty members at Texas Tech University the opportunity to work with trained faculty developers through classroom observations, mid-term student evaluations, workshops, and other consultation services, including the interpretation of student evaluations.
Interpreting and understanding student evaluations can be a difficult process, but it is well worth the effort. Taking students' feedback into consideration and using it as a vehicle to guide reflection encourages teachers to continually reassess their teaching strategies and keeps the lines of communication between students and teachers open. By thoughtfully considering student feedback and, where appropriate, implementing changes in the classroom, teachers empower themselves and demonstrate to administrators and students that they value teaching effectiveness and strive for excellence in the classroom.
The Center for Excellence in Learning and Teaching (CELT) discusses issues related to the reliability and validity of student evaluations of teaching, as well as best practices for the construction and implementation of student evaluations. In addition to providing a list of frequently asked questions regarding student evaluations of teaching, CELT provides links to resources from other centers for teaching and learning.
Center for Research on Learning and Teaching, University of Michigan. (2011). Teaching Strategies: Evaluation of Teaching Effectiveness. Available online: http://www.crlt.umich.edu/tstrategies/tseot.php.
In this resource, the Center for Research on Learning and Teaching has compiled an annotated bibliography of online resources related to the evaluation of teaching effectiveness. Resources focus on topics such as peer review of teaching, student ratings and midterm feedback, and general overviews of teaching evaluations.
Barbara Gross Davis's text Tools for Teaching is a renowned resource on best practices in college teaching, and her chapter on student rating forms touches on aspects of student evaluations ranging from the creation and administration of questionnaires to the summarization and interpretation of results. A reference list is also included.
Center for Teaching and Learning, University of North Carolina at Chapel Hill. (1994). Student Evaluation of Teaching. For Your Consideration…Suggestions and Reflections on Teaching and Learning, 16.
Centra, J.A. (1993). Reflective Faculty Evaluation: Enhancing Teaching and Determining Faculty Effectiveness. San Francisco: Jossey-Bass Publishers.
Franklin, J. (2001). Interpreting the Numbers: Using a Narrative to Help Others Read Student Evaluations of Your Teaching Accurately. New Directions for Teaching and Learning, 87, 85-100.
Gallagher, T. J. (2000). Embracing Student Evaluations of Teaching: A Case Study. Teaching Sociology, 28 (2), 140-147.
Greenwald, A. G. (1997). Validity Concerns and Usefulness of Student Ratings of Instruction. American Psychologist, 52(11), 1182-1186.
Groccia, J. E. (1997). Understanding and Using Student Evaluations to Improve Your Teaching. Biggio Center for the Enhancement of Teaching and Learning, Auburn University, White Paper. Retrieved from: http://www.auburn.edu/academic/other/biggio/links/studentevaluations.pdf.
Lewis, K. G. (2001a). Making Sense of Student Written Comments. New Directions for Teaching and Learning, 87, 25-32.
Lewis, K.G. (2001b). Using Midsemester Student Feedback and Responding to It. New Directions for Teaching and Learning, 87, 33-44.
Light, G., Cox, R., & Calkins, S. (2009). Learning and Teaching in Higher Education: The Reflective Professional (2nd ed.). London: SAGE Publications Ltd.
Ory, J.C. (2001). Faculty Thoughts and Concerns About Student Ratings. New Directions for Teaching and Learning, 87, 3-16.
Paulsen, M.B. (2002). Evaluating Teaching Performance. New Directions for Institutional Research, 114, 5-18.
Spence, L., & Lenze, L. F. (2001). Taking Student Criticism Seriously: Using Student Quality Teams to Guide Critical Reflection. New Directions for Teaching and Learning, 87, 45-54.
Svinicki, M. D. (2001). Encouraging Your Students to Give Feedback. New Directions for Teaching and Learning, 87, 17-24.
Theall, M., & Franklin, J. (2001). Looking for Bias in All the Wrong Places: A Search for Truth or a Witch Hunt in Student Ratings of Instruction? New Directions for Institutional Research, 109, 45-56.