From Understanding Learning Difficulties Among Students To Providing High-Quality Automated Feedback


Date

2024-09-25

Advisor

Ward, Paul

Publisher

University of Waterloo

Abstract

Students face various difficulties during their learning journeys, yet providing timely feedback often poses a challenge for educators due to availability constraints. Automated feedback systems have been introduced to help address this gap. To equip instructors with a general understanding of their students in computing education, we analyzed students' learning analytics. In this study, we applied clustering techniques to behavior data naturally collected within an automated feedback system and discovered that although students spent a significant amount of time using the system, the learning outcomes were often limited. A predictive model was derived from these observations.

To assist students in their learning, we explored whether offering time extensions with trivial penalties could be beneficial and why students use them. Implementing flexible late policies was straightforward and placed minimal burden on instructors. We analyzed a fourth-year course that used flexible late policies and found that time conflicts and underestimation of coursework were the top two reasons for using time extensions. In addition, our findings revealed a correlation between students' abilities and their usage of time extensions. This latter result was re-examined in a replication study and a reproduction study. Although the automated feedback system was not considered in the main study, the reproduction study showed that even with both time extensions and automated feedback, low- and middle-performing students still could not match the performance of high-performing students. This suggests a fundamental issue: feedback from automated feedback systems, which plays an essential role in assisting students' learning at scale, may not be as effective as anticipated.

Consequently, a critical question arises: how can automated feedback systems provide effective feedback? We identified two main issues in current automated feedback systems: incorrect components marked as correct and correct components marked as incorrect. To address these issues, we argue that the unit-testing philosophy widely adopted in the software industry should not be naively applied to automated feedback systems in an educational context. We completely redesigned the procedure and proposed a novel guideline for composing automated assessments. Following this guideline, we developed an automated assessment for an entity-relationship question in a database course. Our evaluation showed that students significantly improved their understanding of the topic.

Keywords

automated feedback, automated assessment, learning analytics, computing education, time extension, student, learning
