I have been asked to review and possibly redesign the form we use to collect Module Feedback from our students. What moved this issue from the perennial “Yeah, wouldn’t it be nice to have a better form” to something to be actioned and delivered is that we academics compared notes and noticed that we all get students who say in their written comments that it’s “a great module” but then tick the low scores on the formal scales. Something is clearly wrong with the forms as they stand.
My first thought is that the Likert Scale (5 points from Strongly Agree (SA) to Strongly Disagree (SD)) is just not the right approach to getting an accurate response. Moreover, our forms in particular have SA to the left and SD to the right, strongly associated with numerical scores from 1 to 5, respectively. It seems so counter-intuitive to associate *high* satisfaction with a poor score! A better strategy is to use scores from -2 to +2, where the negative end (-2) is associated with SD and the positive end (+2) with SA. So far so good, and this is easy to incorporate.
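To make the rescoring concrete, here is a minimal sketch in Python. The label abbreviations, the mid-point handling, and the idea of recording “I prefer not to say” as a skipped response are my illustrative assumptions, not anything our actual forms currently do:

```python
# Map the five Likert labels onto a symmetric -2..+2 scale,
# with Strongly Agree at the positive end.
# Labels and abstention handling are illustrative assumptions.
LIKERT_SCORES = {
    "SD": -2,  # Strongly Disagree
    "D": -1,   # Disagree
    "N": 0,    # Neither agree nor disagree (if a mid-point is kept)
    "A": 1,    # Agree
    "SA": 2,   # Strongly Agree
}

def mean_score(responses):
    """Average score for one question.

    'I prefer not to say' is recorded as None and excluded from the
    average rather than being folded into the mid-point.
    """
    ticked = [LIKERT_SCORES[r] for r in responses if r is not None]
    return sum(ticked) / len(ticked) if ticked else None

# Example: three positive responses and one abstention
print(mean_score(["SA", "A", None, "A"]))  # 4/3, about 1.33
```

One design consequence worth noting: on this scale a cohort average of 0 reads as genuinely neutral, whereas on a 1-to-5 scale the neutral point sits at an unmemorable 3.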
But since we are looking at the Likert scale… I really wonder if we should have a mid-point, a “neither agree nor disagree”, or in other words, “I can’t be bothered”. Surely every student will have *some* opinion, whether positive or negative, if they have engaged at all with the teaching? Unless they have reached a Zen-like point of equanimity, but somehow I doubt it. A mid-point seems like a cop-out, a way for students to avoid saying anything at all — in that case they really should be offered an “I prefer not to say” option, outside the normal scale. And another thing: how balanced should a scale be? Would we be improperly influencing the students by offering more nuanced levels of satisfaction than of dissatisfaction? My naive perspective is that most students are generally satisfied with modules; and if they have serious issues with how a module is taught, surely they would take action before reaching the point, close to the end of the module, when feedback is requested? In which case, we should be looking for information on how to make a good experience even better.
In fact, do we need to focus so much on questions that are answered by a tick mark on a scale? My colleagues and I find the textual answers, where students put their opinions and suggestions in their own words, to be much more useful than a simple SA/A/0/D/SD answer to questions such as “The lectures were well organised”. However, though we give the students 10-15 minutes to fill in the forms, very few actually write something meaningful. I wonder if this is mostly a CS problem or if it happens in other disciplines too. Maybe it is difficult for them to articulate their thoughts spontaneously, particularly as many would be keen not to unduly offend any of the lecturers. Here we certainly need skill to phrase questions properly, to encourage students to respond honestly and clearly.
This review also raises an important point: what is the purpose of module evaluation by students, after all? And what should happen after the evaluations are sought? We tell students that it will “help improve the quality of modules” – but in what way? What is the follow-up? Some departments publish the results of module surveys along with formal responses by the instructors and departments, so this is certainly a possibility. But there is the unfortunate use of evaluations as an instrument to compare and assess the performance of instructors, which may not always be a very kind or effective way to improve teaching. It can be a very aggressive world out there, as the Rate My Professors site demonstrates. If there were some assurance that the results are accurate, objective and qualitative, there could be a good justification for publication; but when the evaluation is done, as often as not, by scores which can be ticked with little thought or, dare I say, responsibility, the dangers of misinterpretation are high. In fact, I believe that if evaluations are used as direct judgments on instructors individually, whether to reward or punish them formally, or even to informally set up a popularity contest, there is a real risk of compromising their use as an aid to improving teaching.
Finally, there are many, many publications showing that students, like the population at large, are influenced by a wide range of unconscious biases; in particular, female lecturers and lecturers from ethnic minorities receive more negative feedback, even when the results are normalized over learning outcomes and exam results. As one headline memorably put it: Best Way for Professors to Get Good Student Evaluations? Be Male.
I would really like to hear thoughts from students and colleagues, from here and other institutions. I will be designing the new feedback forms in time for the new academic year, but hopefully will test them out over the next few months.
Edited to add: I found this report very interesting and useful: Collecting and using student feedback: a guide to good practice.