Exploring new depths: Applying machine learning for the analysis of student argumentation in chemistry

Bibliographic Details
Published in: Journal of Research in Science Teaching, Vol. 61, No. 8, pp. 1757-1792
Main Authors: Martin, Paul P.; Kranz, David; Wulff, Peter; Graulich, Nicole
Format: Journal Article
Language: English
Published: Hoboken, USA: John Wiley & Sons, Inc., 01.10.2024 (Wiley Subscription Services, Inc.)

Summary: Constructing arguments is essential in science subjects like chemistry. For example, students in organic chemistry should learn to argue about the plausibility of competing chemical reactions by including various sources of evidence and justifying the derived information with reasoning. While doing so, students face significant challenges in coherently structuring their arguments and integrating chemical concepts. For this reason, a reliable assessment of students' argumentation is critical. However, as arguments are usually presented in open-ended tasks, scoring assessments manually is resource-consuming and conceptually difficult. To augment human diagnostic capabilities, artificial intelligence techniques such as machine learning or natural language processing offer novel possibilities for an in-depth analysis of students' argumentation. In this study, we extensively evaluated students' written arguments about the plausibility of competing chemical reactions based on a methodological approach called computational grounded theory. By using an unsupervised clustering technique, we sought to evaluate students' argumentation patterns in detail, providing new insights into the modes of reasoning and levels of granularity applied in students' written accounts. Based on this analysis, we developed a holistic 20-category rubric by combining the data-driven clusters with a theory-driven framework to automate the analysis of the identified argumentation patterns. Pre-trained large language models in conjunction with deep neural networks provided almost perfect machine-human score agreement and well-interpretable results, which underpins the potential of the applied state-of-the-art deep learning techniques in analyzing students' argument complexity. The findings demonstrate an approach to combining human and computer-based analysis in uncovering written argumentation.
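
The unsupervised clustering step described in the summary can be illustrated with a minimal Python sketch: written arguments are encoded as sentence embeddings and grouped into data-driven clusters. The embedding model (all-MiniLM-L6-v2), the cluster count, and the sample arguments are illustrative assumptions, not the authors' actual pipeline.

# A minimal sketch of clustering students' written arguments, assuming a
# sentence-embedding model and k-means; not the paper's actual method.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Hypothetical sample of student arguments about reaction plausibility.
arguments = [
    "Reaction A is more plausible because the carbocation is stabilized by resonance.",
    "I think B happens faster since the leaving group is better.",
    "The tertiary substrate favors an SN1 mechanism, so pathway A dominates.",
]

# Encode each argument as a dense sentence embedding.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(arguments)

# Group the embeddings into k data-driven clusters (k chosen for illustration).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)

for text, label in zip(arguments, labels):
    print(label, text[:60])

In the spirit of computational grounded theory, the cluster count and the interpretation of each cluster would be refined by reading representative texts rather than fixed in advance.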
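The automated rubric scoring can likewise be sketched as a pre-trained language model supplying contextual embeddings to a small feed-forward classification head. The encoder choice (bert-base-uncased), the layer sizes, and the untrained head are assumptions for illustration; the paper's actual architecture and training setup may differ.

# A minimal sketch of rubric-category prediction: a pre-trained language model
# provides a contextual embedding, and a small neural network classifies it
# into one of the rubric categories. Untrained here; in practice the head (and
# possibly the encoder) would be fit on human-scored arguments first.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

NUM_CATEGORIES = 20  # matches the 20-category rubric mentioned in the summary

# Simple classification head on top of the [CLS] token embedding.
classifier = nn.Sequential(
    nn.Linear(encoder.config.hidden_size, 128),
    nn.ReLU(),
    nn.Linear(128, NUM_CATEGORIES),
)

def score_argument(text: str) -> int:
    """Return the predicted rubric category for one written argument."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        cls = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] embedding
        logits = classifier(cls)
    return int(logits.argmax(dim=-1))

print(score_argument("The better leaving group makes pathway B more plausible."))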
ISSN: 0022-4308, 1098-2736
DOI: 10.1002/tea.21903