ERIC Number: EJ1470146
Record Type: Journal
Publication Date: 2024
Pages: 9
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: 2051-3615
Available Date: N/A
Teaching and Assessing at Scale: The Use of Objective Rubrics and Structured Feedback
Simon Grey; Neil Gordon
New Directions in the Teaching of Natural Sciences, v19 n1 2024
It is widely recognised that feedback is an important part of learning: effective feedback should result in a meaningful change in student behaviour (Morris et al., 2021). However, individual feedback takes time to produce, and for large cohorts, typified by the 'North of 300' challenge in computing (CPHC, 2019), it can be difficult to provide in a timely manner. On occasion, academics appear to lose sight of the purpose of feedback, viewing it as a justification for a mark rather than as an opportunity to provide meaningful tuition. One strategy for providing feedback at scale is to share the workload across multiple staff, but this introduces the additional problem of ensuring that the feedback and marking are equitable and consistent. In this paper we present a case study from teaching programming that attempts to address two distinct but related issues. The first issue is making feedback more meaningful. We attempt to achieve this by providing detailed feedback on a draft submission of programming coursework, allowing students time to make changes to their work before the final submission date. We present an analysis of the data generated from this approach and its potential impact on student behaviour. The second issue is scalability. This feedforward approach places significant pressure on marking, since feedback on draft submissions must reach large numbers of students quickly enough for them to act upon it. To achieve this, we consider an approach based on an objective, reusable marking rubric, so that the work can reasonably be spread across multiple members of staff. We present an analysis of the data generated from this approach to determine whether the rubric is objective enough to remove individual interpretations and biases, and, where discrepancies exist, to determine where they arise. This work was carried out through an analysis of the impact on student assessment, together with feedback from the academic staff involved in using the rubrics. Preliminary results show that the more objective rubric, used by several markers, enabled a scalable solution for rapid feedback on submissions and indicated some improvement in student outcomes. However, the work also illustrated the problems of subjective interpretation and some variation in outcomes by marker.
Descriptors: Student Evaluation, Evaluation Methods, Scoring Rubrics, Feedback (Response), Error Correction, Student Behavior, Scaling
University of Leicester Open Journals. University of Leicester Library, University Road, Leicester LE1 7RH, UK. Tel: +44-116-252-2043; e-mail: openaccess@le.ac.uk; Web site: https://journals.le.ac.uk/index.php/new-directions
Publication Type: Journal Articles; Reports - Descriptive
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A