Teaching and Assessing at scale: the use of objective rubrics and structured feedback

Authors

Simon J Grey and Neil Andrew Gordon, University of Hull
DOI:

https://doi.org/10.29311/ndtns.vi19.4103

Keywords:

Computing Education, Feedback, Teaching Programming.

Abstract

It is widely recognised that feedback is an important part of learning: effective feedback should result in a meaningful change in student behaviour (Morris et al., 2021). However, individual feedback takes time to produce, and for large cohorts – typified by the North of 300 challenge in computing (CPHC, 2019) – it can be difficult to provide in a timely manner. At times, academics lose sight of the purpose of feedback, viewing it as a justification for a mark rather than an opportunity to provide meaningful tuition. One strategy for providing feedback at scale is to share the workload across multiple staff, but this introduces the additional problem of ensuring that feedback and marking are equitable and consistent. In this paper we present a case study from teaching programming that attempts to address two distinct but related issues.

The first issue is to make feedback more meaningful. We attempt to achieve this by providing detailed feedback on a draft submission of programming coursework, allowing students time to make changes to their work prior to the final submission date. We present an analysis of the data generated from this approach, and its potential impact on student behaviour.

The second issue is that of scalability. This feedforward approach places significant pressure on marking, since feedback on draft submissions must be provided to large numbers of students quickly enough for them to act upon it. To achieve this, we consider an approach based on creating an objective, reusable marking rubric so that the work can reasonably be spread across multiple members of staff. We present an analysis of the data generated from this approach to determine whether the rubric is objective enough to remove individual interpretations and biases, and, where discrepancies exist, attempt to determine how they arise.
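The paper itself does not present code; purely as an illustrative sketch of the kind of marker-agreement analysis described above (not the authors' method, and using hypothetical data), the fragment below compares the rubric bands awarded by two markers on the same draft submissions, using exact agreement and Cohen's kappa.

```python
# Illustrative sketch only: quantify how consistently two markers apply a
# shared rubric criterion. The bands (0-4) below are hypothetical.
from sklearn.metrics import cohen_kappa_score

# Band awarded for one rubric criterion, per student, by two markers.
marker_a = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
marker_b = [4, 3, 2, 2, 4, 2, 3, 2, 3, 3]

# Exact agreement: proportion of submissions given the same band.
exact = sum(a == b for a, b in zip(marker_a, marker_b)) / len(marker_a)

# Cohen's kappa corrects for agreement expected by chance; values near 1
# suggest consistent interpretation of the rubric, values near 0 suggest
# marker-dependent interpretation.
kappa = cohen_kappa_score(marker_a, marker_b)

print(f"Exact agreement: {exact:.2f}, Cohen's kappa: {kappa:.2f}")
```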

This work was carried out through an analysis of the impact on student assessment, together with feedback from the academic staff who used the rubrics. Preliminary results show that the more objective rubric, used by several markers, did enable a scalable solution for rapid feedback on submissions, and indicated some improvement in student outcomes. However, the work also illustrated the problems of subjective interpretation and some variation in outcomes by marker.

Author Biographies

Simon J Grey, University of Hull

Department of Computer Science and Technology

Lecturer in Computer Science

Neil Andrew Gordon, University of Hull

Department of Computer Science and Technology

Reader in Computer Science

References

Ahoniemi, T. and Karavirta, V., 2009. Analyzing the use of a rubric-based grading tool. ACM SIGCSE Bulletin, 41(3), pp.333-337. https://doi.org/10.1145/1595496.1562977

Becker, K., 2003, June. Grading programming assignments using rubrics. In Proceedings of the 8th annual conference on Innovation and technology in computer science education (pp. 253-253).

Burgess, G.A. and Hanshaw, C., 2006. Application of learning styles and approaches in computing sciences classes. Journal of Computing Sciences in Colleges, 21(3), pp.60-68.

CPHC (Council of Professors and Heads of Computing), 2019. North of 300: Dealing with Significant Growth. https://cphc.ac.uk/2019/01/08/north-of-300-dealing-with-significant-growth/

Csikszentmihalyi, M., 2014. Toward a psychology of optimal experience. In Flow and the foundations of positive psychology (pp. 209-226). Springer, Dordrecht. https://doi.org/10.1007/978-94-017-9088-8_14

Dweck, C., 2016. What having a “growth mindset” actually means. Harvard Business Review, 13, pp.213-226.

Ericsson, K.A., Krampe, R.T. and Tesch-Römer, C., 1993. The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), p.363.

Gordon, N.A., 2016. Issues in retention and attainment in Computer Science. York: Higher Education Academy.

Hattie, J. and Timperley, H., 2007. The power of feedback. Review of Educational Research, 77(1), pp.81-112. https://doi.org/10.3102/003465430298487

MacKay, J.R., Hughes, K., Marzetti, H., Lent, N. and Rhind, S.M., 2019. Using National Student Survey (NSS) qualitative data and social identity theory to explore students’ experiences of assessment and feedback. Higher Education Pedagogies, 4(1), pp.315-330. https://doi.org/10.1080/23752696.2019.1601500

Morris, R., Perry, T. and Wardle, L., 2021. Formative assessment and feedback for learning in higher education: A systematic review. Review of Education, 9(3), p.e3292. https://doi.org/10.1002/rev3.3292

Pembridge, J.J. and Rodgers, K.J., 2018, October. Examining self-efficacy and growth mindset in an introductory computing course. In 2018 IEEE Frontiers in Education Conference (FIE) (pp. 1-5). IEEE. https://doi.org/10.1109/FIE.2018.8658728

Published

18-12-2024

How to Cite

Grey, S. J., & Gordon, N. A. (2024). Teaching and Assessing at scale: the use of objective rubrics and structured feedback. New Directions in the Teaching of Natural Sciences, (19). https://doi.org/10.29311/ndtns.vi19.4103