GPT-3-Driven Pedagogical Agents to Train Children’s Curious Question-Asking Skills
Published in: International Journal of Artificial Intelligence in Education, Vol. 34, No. 2, pp. 483–518
Main Authors:
Format: Journal Article
Language: English
Published: Springer, New York, 01.06.2024
Summary: The ability of children to ask curiosity-driven questions is an important skill that helps improve their learning. For this reason, previous research has explored designing specific exercises to train this skill. Several of these studies relied on providing semantic and linguistic cues to train children to ask more of such questions (also called *divergent questions*). Despite showing pedagogical efficiency, this method remains limited because it relies on generating these cues by hand, which can be a long and costly process. In this context, we propose to leverage advances in the field of natural language processing (NLP) and investigate the efficiency of using a large language model (LLM) to automate the production of key parts of the pedagogical content within a curious question-asking (QA) training. We generate this content using a "prompt-based" method, which consists of explaining the task to the LLM in natural text. We evaluate the output using human expert annotations and comparisons with hand-generated content; the results suggest that this content is indeed relevant and useful. We then conduct a field study in a primary school (75 children aged 9–10), where we evaluate children's QA performance after receiving this training. We compare three types of content: 1) hand-generated content that proposes "closed" cues leading to predefined questions; 2) GPT-3-generated content that proposes the same type of cues; 3) GPT-3-generated content that proposes "open" cues leading to several possible questions. Children were assigned to one of these groups. Based on human annotations of the questions generated, we observe similar QA performance between the two "closed" trainings (showing the scalability of the approach using GPT-3), and better performance for participants who received the "open" training. These results suggest that LLMs can efficiently support children in generating more curious questions, through a natural-language prompting approach that affords usability by teachers and other users who are not specialists in AI techniques. Furthermore, the results also show that open-ended content may be more suitable for training curious question-asking skills.
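As an illustration of the "prompt-based" method described in the abstract, the sketch below shows how a task of this kind can be explained to a GPT-3-style model in natural text. It is a minimal example only: the prompt wording, passage, model name, and parameters are assumptions for illustration, not the content or configuration used in the study, and it assumes the pre-1.0 `openai` Python package that exposed the legacy completions endpoint used for GPT-3 models.

```python
# Illustrative sketch only: asking a GPT-3-style completion endpoint for an
# "open" question-asking cue about a short reading passage. The prompt text,
# model name, and parameters are assumed for this example.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

passage = (
    "The Sahara is the largest hot desert in the world. "
    "Despite the heat, many animals live there."
)

# The task is explained to the model in natural text ("prompt-based" method).
prompt = (
    "You are helping a 9-year-old child ask curious questions about a text.\n"
    f"Text: {passage}\n"
    "Give one short, open-ended cue (not a full question) that could lead the "
    "child to ask several different divergent questions about this text.\n"
    "Cue:"
)

response = openai.Completion.create(
    model="text-davinci-002",  # assumed GPT-3 model name for illustration
    prompt=prompt,
    max_tokens=40,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```

Varying the instruction in the prompt (for example, asking for a cue that points to one specific answerable question instead of several) is one plausible way to obtain "closed" rather than "open" cues of the kind compared in the study.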
ISSN: 1560-4292; 1560-4306
DOI: 10.1007/s40593-023-00340-7