Identifying typical approaches and errors in Prolog programming with argument-based machine learning


Bibliographic Details
Published in: Expert Systems with Applications, Vol. 112, pp. 110-124
Main Authors: Možina, Martin; Lazar, Timotej; Bratko, Ivan
Format: Journal Article
Language: English
Published: New York, Elsevier Ltd, 01.12.2018

Summary:

Highlights:
- Abstract-syntax-tree (AST) patterns as attributes for classifying Prolog programs.
- Identification of AST patterns for detecting errors and programming approaches.
- An argument-based algorithm for learning rules suitable for tutoring.
- Evaluation of extracted patterns and rules on 42 Prolog exercises.

Abstract: Students learn programming much faster when they receive feedback. However, in programming courses with high student-teacher ratios, it is practically impossible to provide feedback on every homework submission. In this paper, we propose a data-driven tool for semi-automatic identification of typical approaches and errors in student solutions. Given a list of frequent errors, a teacher can prepare common feedback for all students that explains the difficult concepts. We cast the problem as supervised rule learning, where each rule corresponds to a specific approach or error. We use correct and incorrect submitted programs as learning examples, with patterns in abstract syntax trees serving as attributes. As the space of all possible patterns is immense, we needed the help of experts to select relevant ones. To elicit knowledge from the experts, we used the argument-based machine learning (ABML) method, in which an expert and ABML interactively exchange arguments until the model is good enough. We provide a step-by-step demonstration of the ABML process, present examples of ABML questions and the corresponding expert's answers, and interpret some of the induced rules. The evaluation on 42 Prolog exercises further shows the usefulness of the knowledge elicitation process: models constructed with ABML achieve significantly better accuracy than models learned from human-defined patterns or from automatically extracted patterns.
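The core idea of the abstract — treating the presence of an AST pattern as a binary attribute of a student program — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tuple-based AST encoding, the pattern shapes, and the toy `sum/2` clauses are all assumptions made for the example.

```python
def subtrees(node):
    """Yield every subtree of an AST encoded as (label, child, ...) tuples."""
    yield node
    if isinstance(node, tuple):
        for child in node[1:]:
            yield from subtrees(child)

def has_pattern(ast, pattern):
    """Binary attribute: does `pattern` occur as a subtree of `ast`?"""
    return any(t == pattern for t in subtrees(ast))

# Toy ASTs: a correct base case  sum(L, 0)  vs. a buggy one  sum(L, []),
# where the student returns an empty list instead of the number 0.
correct = ("clause", ("head", "sum", ("var", "L"), ("num", "0")))
buggy   = ("clause", ("head", "sum", ("var", "L"), ("list", "[]")))

# Two illustrative patterns: one characterising the correct approach,
# one flagging the error. Each becomes one binary attribute.
patterns = [("num", "0"), ("list", "[]")]
features = {name: [has_pattern(ast, p) for p in patterns]
            for name, ast in [("correct", correct), ("buggy", buggy)]}
print(features)  # {'correct': [True, False], 'buggy': [False, True]}
```

A rule learner can then induce rules over such attribute vectors, e.g. "IF the `('list', '[]')` pattern is present THEN the program is incorrect" — in the paper, the ABML loop is what steers the expert toward patterns that make such rules accurate and explainable.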
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2018.06.029