Exploring the Design Space of Cognitive Engagement Techniques with AI-Generated Code for Enhanced Learning
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 11.10.2024 |
Summary: | Novice programmers are increasingly relying on Large Language Models (LLMs) to generate code while learning programming concepts. However, this interaction can lead to superficial engagement, giving learners an illusion of learning and hindering skill development. To address this issue, we conducted a systematic design exploration to develop seven cognitive engagement techniques aimed at promoting deeper engagement with AI-generated code. In this paper, we describe our design process, the initial seven techniques, and results from a between-subjects study (N=82). We then iteratively refined the top techniques and further evaluated them through a within-subjects study (N=42). We evaluated the friction each technique introduces, its effectiveness in helping learners apply concepts to isomorphic tasks without AI assistance, and its success in aligning learners' perceived and actual coding abilities. Ultimately, our results highlight the most effective technique: guiding learners through the step-by-step problem-solving process, in which they engage in an interactive dialog with the AI and are prompted to state what needs to be done at each stage before the corresponding code is revealed. |
DOI: | 10.48550/arxiv.2410.08922 |
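The most effective technique described in the summary above, a step-by-step dialog in which the learner states what each stage should do before the corresponding code is revealed, lends itself to a brief illustration. The following is a minimal, hypothetical Python sketch and is not taken from the paper or its study materials; the hard-coded `STEPS` list stands in for AI-generated code, and `run_step_by_step_dialog` is an assumed name for the interaction loop.

```python
# Hypothetical sketch of a "step-by-step reveal" dialog: the learner must
# describe what the next step should do before the AI-generated code for
# that step is shown. STEPS is a hard-coded placeholder for LLM output.

STEPS = [
    ("Read a list of integers from the user",
     "numbers = [int(x) for x in input('Numbers: ').split()]"),
    ("Keep only the even numbers",
     "evens = [n for n in numbers if n % 2 == 0]"),
    ("Print the sum of the even numbers",
     "print(sum(evens))"),
]


def run_step_by_step_dialog(steps):
    """Prompt the learner for a plan at each stage, then reveal the code."""
    for i, (goal, code) in enumerate(steps, start=1):
        # The learner commits to a plan in their own words first.
        plan = input(f"Step {i}: what should the program do next? ")
        print(f"  Your plan:     {plan}")
        # A real system could have an LLM compare the plan against `goal`
        # and give feedback; here we simply show the intended goal.
        print(f"  Intended step: {goal}")
        # Only now is the corresponding code revealed.
        print(f"  Revealed code: {code}\n")


if __name__ == "__main__":
    run_step_by_step_dialog(STEPS)
```

The point of the sketch is the ordering: the code for each stage stays hidden until the learner has articulated their own plan, which is the kind of deeper engagement the summary attributes to this technique.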