On a lovely sunny day I headed to the leafy campus of the University of Sussex to attend their Teaching and Learning Conference.
Naomi Winstone from the University of Surrey gave the opening keynote on feedback. Feedback may be the most important enhancement to students’ learning. But… only if it’s acted on.
The “old paradigm” is one-way delivery of comments, using strategies such as the “feedback sandwich”, where negative feedback is served between positive pieces. The process is considered done once the feedback is uploaded to the VLE, with no further steps to engage students. This leads to a focus on the technical side of feedback – how it’s presented and delivered – and tinkering with feedback sheets and similar issues rather than addressing the actual aim.
Students get frustrated if they don’t have opportunities to put their feedback into practice, whether because of the timing of assessments or the design of the coursework. This may be where their dissatisfaction in surveys such as the NSS comes from. Likewise, academics feel they put great effort into providing detailed feedback, which is then out of their control: students may not engage with, or even open, their feedback. As a result of these opposing perspectives, feedback can become something of a “blame game”.
The “new paradigm” is feedback as dialogue. The emphasis shifts away from the academic and towards what the student does with the feedback and how it affects their actions. So why can this be so difficult? We need to develop these four areas:
- Awareness of what the feedback means and its importance
- Cognisance of strategies by which the feedback could be implemented
- Agency to implement strategies (for example, restructure assessment so it doesn’t all fall at the end of the module, leaving students no opportunity to use the feedback)
- Volition to scrutinise feedback and to implement those strategies
One way forward is to reframe how students see their role in feedback. Do they see themselves as passive recipients, or as responsible for drawing on all the available resources? Another is to ensure that students learn the skills needed to receive feedback, such as self-appraisal, assessment literacy, goal setting and motivation.
Some useful resources included a toolkit for developing engagement with feedback, developed with the Higher Education Academy. The University of Surrey also developed FEATS, a VLE tool that helps students track and synthesise their feedback across modules.
Professor Peter Thomas from the Mathematics and Physical Sciences faculty presented on his use of peer grading. Each student received a score calculated from various factors, including the quality of their feedback, their responses and more. This counted towards their module grade, as the module was aimed at developing general skills for physics. Students had to upload their answers and then grade three other students’ work against a rubric/survey and a copy of the model answers. PeerGrade have a booklet which explains how to develop a good rubric.
The use of learning technologies to create classroom dialogue was presented by Dr Joanna Richardson from Life Sciences. The context was two large modules (230 and 130 students) where students were too intimidated to ask questions, or indeed to respond when questions were put to them in the lecture. The starting idea was the “muddiest point”, which asks students to reflect on what they found most difficult in a lecture. They gave feedback via Poll Everywhere: some polls were multiple-choice questions to quickly check understanding, while others used the Q&A type, which lets students type in answers that other students can upvote or downvote. This allows the lecture to focus on the most pertinent points. The first time, responses were a bit too broad (focusing on exams), so the questions were made slightly more focused for later polls. The positive result was that students became more emboldened to raise their hands, ask questions and respond when questions were put to them.

Another technique was using Padlet to engage students with case studies: students could anonymously post answers or questions on the Padlet, and engagement seemed to rise when the answers were gone through in class. Finally, online multiple-choice quizzes were used, with formative quizzes that included feedback on every question as well as the overall grade. This led to significant improvement in students’ performance on the summative tests.
Tab Betts and the TEL team discussed enhancing feedback with Canvas, the new VLE to be launched at the university. We broke into groups to discuss ideas and ask questions about three methods for feedback: quizzes, peer assessment and collaboration. The latter used either a built-in Canvas tool or Office 365. In practice quizzes are the most common, while the other methods raised more questions about how to manage delivery and monitor the quality of students’ work and comments.
Dr Wendy Garnham presented on the influence of the mode of feedback on its perceived value. Zoom was used to deliver video and audio feedback on two out of three assessments, with traditional written feedback on the third. Students felt the video and audio feedback was more personal, so they liked it more, though they found negative feedback harder to hear than to read. However, they didn’t see a significant difference in the usefulness of each mode. Three students joined the presentation: two confirmed they preferred the video feedback and one preferred written, so it remains an area where more research is needed.
After the final panel discussion, my main takeaway was that if the feedback is useful, then issues such as engagement, turnaround time and anonymity take care of themselves. There is a need to change the focus of assignments so that feedback is more central to the learning.