Enhancing questioning skills through child avatar chatbot training with feedback

Bibliographic Details
Published in Frontiers in Psychology Vol. 14; p. 1198235
Main Authors Røed, Ragnhild Klingenberg; Baugerud, Gunn Astrid; Hassan, Syed Zohaib; Sabet, Saeed S.; Salehi, Pegah; Powell, Martine B.; Riegler, Michael A.; Halvorsen, Pål; Johnson, Miriam S.
Format Journal Article
Language English
Published Switzerland: Frontiers Media S.A., 2023

Summary: Training child investigative interviewing skills is a specialized task. Those being trained need opportunities to practice their skills in realistic settings and receive immediate feedback. A key step in ensuring the availability of such opportunities is to develop a dynamic, conversational avatar, using artificial intelligence (AI) technology that can provide implicit and explicit feedback to trainees. In the iterative process, use of a chatbot avatar to test the language and conversation model is crucial. The model is fine-tuned with interview data and realistic scenarios. This study used a pre-post training design to assess the learning effects on questioning skills across four child interview sessions that involved training with a child avatar chatbot fine-tuned with interview data and realistic scenarios. Thirty university students from the areas of child welfare, social work, and psychology were divided into two groups; one group received direct feedback (n = 12), whereas the other received no feedback (n = 18). An automatic coding function in the language model identified the question types. Information on question types was provided as feedback in the direct feedback group only. The scenario included a 6-year-old girl being interviewed about alleged physical abuse. After the first interview session (baseline), all participants watched a video lecture on memory, witness psychology, and questioning before they conducted two additional interview sessions and completed a post-experience survey. One week later, they conducted a fourth interview and completed another post-experience survey. All chatbot transcripts were coded for interview quality. The language model's automatic feedback function was found to be highly reliable in classifying question types, reflecting the substantial agreement among the raters [Cohen's kappa (κ) = 0.80] in coding open-ended, cued recall, and closed questions. Participants who received direct feedback showed a significantly higher improvement in open-ended questioning than those in the non-feedback group, with a significant increase in the number of open-ended questions used between the baseline and each of the other three chat sessions. This study demonstrates that child avatar chatbot training improves interview quality with regard to recommended questioning, especially when combined with direct feedback on questioning.
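
The agreement statistic cited in the summary, Cohen's kappa (κ = 0.80), compares the model's automatic question-type labels against a human coder's labels. The following is a minimal Python sketch of that calculation, κ = (p_o − p_e) / (1 − p_e); the label sequences below are hypothetical illustrations, not data or code from the study's coding pipeline.

# Cohen's kappa for two label sequences over the three question categories
# used in the study (open-ended, cued recall, closed). Example labels are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from the two raters' label frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for ten interviewer questions: human rater vs. model output.
human = ["open", "closed", "cued", "open", "closed", "open", "cued", "closed", "open", "cued"]
model = ["open", "closed", "cued", "open", "closed", "cued", "cued", "closed", "open", "cued"]
print(f"kappa = {cohens_kappa(human, model):.2f}")  # kappa = 0.85 for this toy example

Values above roughly 0.60 are conventionally read as substantial agreement, which is why the reported κ = 0.80 supports using the model's automatic classification as reliable feedback.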
Edited by: Pekka Santtila, New York University Shanghai, China
Reviewed by: Che-Wei Hsu, University of Otago, New Zealand; Shumpei Haginoya, Meiji Gakuin University, Japan
ISSN:1664-1078
DOI:10.3389/fpsyg.2023.1198235