Enhancing Automated Grading with Capabilities of LLMs: Using Prompt Engineering and RAG Techniques
Published in: 2025 5th International Conference on Advanced Research in Computing (ICARC), pp. 1-6
Format: Conference Proceeding
Language: English
Published: IEEE, 19.02.2025
Summary: This research explores the potential of Large Language Models (LLMs) to automate grading in education by harnessing their sophisticated language understanding and instruction-following capabilities. We examine the effectiveness of providing subject knowledge and applying prompt engineering techniques to grade students' written answers across different question types and various theoretical subjects. A grading rubric was employed to ensure consistency and fairness in the assessment process. The results highlight the importance of providing external knowledge within the prompt to improve LLM-based grading of student answers. Including grading rubrics, model answers, and course content significantly improved the accuracy of the scores assigned by the LLM, reducing deviations from human evaluators' scores. Providing course content or model answers also helped define the expected answer scope and guided the LLM in recognizing other possible correct answers. Advanced prompt engineering techniques failed to outperform the basic prompt, suggesting the need for further exploration and refinement of prompt design strategies.
DOI: 10.1109/ICARC64760.2025.10962827
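To make the approach in the summary concrete, the following is a minimal sketch, not the authors' implementation, of how a grading prompt might be assembled from the three kinds of external knowledge the paper found helpful: a grading rubric, a model answer, and retrieved course content. All field names, the prompt wording, and the example data are illustrative assumptions; the resulting string would be sent to any chat-completion LLM.

```python
# Illustrative sketch only: assembles a grounded grading prompt in the
# spirit of the paper (rubric + model answer + retrieved course content).
from dataclasses import dataclass


@dataclass
class GradingContext:
    question: str
    student_answer: str
    rubric: str           # criteria and point allocations
    model_answer: str     # reference answer defining the expected scope
    course_content: str   # e.g. passages retrieved from lecture notes (RAG)


def build_grading_prompt(ctx: GradingContext, max_score: int = 10) -> str:
    """Compose a single prompt that grounds the LLM in subject knowledge."""
    return (
        "You are grading a student's written answer for a theoretical subject.\n"
        f"Assign an integer score from 0 to {max_score} and justify it briefly.\n\n"
        f"Question:\n{ctx.question}\n\n"
        f"Grading rubric:\n{ctx.rubric}\n\n"
        "Model answer (defines the expected scope; accept equivalent "
        f"correct answers):\n{ctx.model_answer}\n\n"
        f"Relevant course content:\n{ctx.course_content}\n\n"
        f"Student answer:\n{ctx.student_answer}\n\n"
        "Score and justification:"
    )


if __name__ == "__main__":
    # Hypothetical example data for demonstration purposes.
    ctx = GradingContext(
        question="Explain why TCP uses a three-way handshake.",
        student_answer="So both sides agree on starting sequence numbers "
                       "and confirm each other is reachable.",
        rubric="3 pts: sequence-number synchronization; 2 pts: bidirectional "
               "reachability; up to 5 pts: clarity and completeness.",
        model_answer="The handshake synchronizes initial sequence numbers and "
                     "verifies that both endpoints can send and receive.",
        course_content="Lecture 4: connection establishment, SYN/SYN-ACK/ACK.",
    )
    print(build_grading_prompt(ctx))  # pass this to the LLM of your choice
```

Note the design choice this sketch mirrors: the model answer and course content are framed as defining the expected scope rather than as the only acceptable wording, which is how the paper reports the LLM was guided toward recognizing other possible correct answers.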