Assessing student perceptions and use of instructor versus AI-generated feedback
Date
2024-12-27
Author
Er, Erkan
Akçapınar, Gökhan
Bayazıt, Alper
Noroozi, Omid
Banihashem, Seyyed Kazem
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Despite the growing research interest in the use of large language models for feedback provision, it remains unknown how students perceive and use AI-generated feedback compared to instructor feedback in authentic settings. To address this gap, this study compared instructor and AI-generated feedback in a Java programming course through an experimental research design in which students were randomly assigned to either condition. Both feedback providers used the same assessment rubric, and students were asked to improve their work based on the feedback. The feedback perceptions scale and students' laboratory assignment scores were compared across conditions. Results showed that students perceived instructor feedback as significantly more useful than AI feedback. While instructor feedback was also perceived as fairer, more developmental, and more encouraging, these differences were not statistically significant. Importantly, students receiving instructor feedback showed significantly greater improvements in their lab scores than those receiving AI feedback, even after controlling for their initial knowledge levels. Based on the findings, we posit that AI models may need to be trained on data specific to educational contexts, and that hybrid feedback models combining the strengths of AI and instructors should be considered for effective feedback practices.

Practitioner notes

What is already known about this topic
Feedback is crucial for student learning in programming education.
Providing detailed personalised feedback is challenging for instructors.
AI-powered solutions like ChatGPT can be effective in feedback provision.
Existing research is limited and shows mixed results about AI-generated feedback.

What this paper adds
The effectiveness of AI-generated feedback was compared to instructor feedback.
Both feedback types received positive perceptions, but instructor feedback was seen as more useful.
Instructor feedback led to greater score improvements in the programming task.

Implications for practice and/or policy
AI should not be the sole source of feedback, as human expertise is crucial.
AI models should be trained on context-specific data to improve feedback actionability.
Hybrid feedback models should be considered for a scalable and effective approach.
URI
https://hdl.handle.net/11511/113605
Journal
BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY
DOI
https://doi.org/10.1111/bjet.13558
Collections
Graduate School of Informatics, Article
Citation Formats
IEEE
E. Er, G. Akçapınar, A. Bayazıt, O. Noroozi, and S. K. Banihashem, “Assessing student perceptions and use of instructor versus AI-generated feedback,” BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY, pp. 0–0, 2024, Accessed: 00, 2025. [Online]. Available: https://hdl.handle.net/11511/113605.