Evaluating the Performance of Large Language Models (LLMs) in Answering and Analysing the Chinese Dental Licensing Examination
| Published in | European Journal of Dental Education, Vol. 29, No. 2, pp. 332–340 |
|---|---|
| Main Authors | , , , , , , |
| Format | Journal Article |
| Language | English |
| Published | England: Blackwell Publishing Ltd, 01.05.2025 |
Summary:

ABSTRACT
Background
This study aimed to simulate diverse scenarios in which students employ LLMs to prepare for the Chinese Dental Licensing Examination (CDLE), providing a detailed evaluation of their performance in medical education.
Methods
A stratified random sampling strategy was implemented to select and subsequently revise 200 questions from the CDLE. Seven LLMs, recognised for their exceptional performance in the Chinese domain, were selected as test subjects. Three distinct testing scenarios were constructed: answering questions, explaining questions and adversarial testing. The evaluation metrics included accuracy, agreement rate and teaching effectiveness score. Wald χ² tests and Kruskal–Wallis tests were employed to determine whether the differences among the LLMs across the scenarios, and before and after adversarial testing, were statistically significant.
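As a concrete illustration of these metrics and tests, the sketch below uses hypothetical data (it is not the authors' pipeline; all model names and counts are placeholders): it computes per-model accuracy from correct-answer counts, compares the models with a chi-squared test on a correct/incorrect contingency table (one common realisation of a Wald χ²-family comparison), and applies a Kruskal–Wallis test to Likert-scale teaching effectiveness ratings.

```python
# A minimal sketch with hypothetical data, not the authors' pipeline:
# accuracy, a chi-squared comparison across models, and a Kruskal-Wallis
# test on Likert-scale teaching effectiveness ratings.
import numpy as np
from scipy.stats import chi2_contingency, kruskal

N_QUESTIONS = 200  # size of the sampled CDLE question set

# Hypothetical counts of correctly answered questions per model.
correct = {"Doubao-pro 32k": 162, "Qwen2-72b": 162, "GPT-4": 155}
accuracy = {model: n / N_QUESTIONS for model, n in correct.items()}
print(accuracy)

# Correct/incorrect contingency table: one row per model.
table = np.array([[n, N_QUESTIONS - n] for n in correct.values()])
chi2, p_acc, dof, _ = chi2_contingency(table)
print(f"accuracy difference across models: chi2={chi2:.2f}, p={p_acc:.3f}")

# Hypothetical Likert ratings (1-5) of each model's explanations.
rng = np.random.default_rng(seed=0)
ratings = [rng.integers(1, 6, size=N_QUESTIONS) for _ in correct]
h_stat, p_likert = kruskal(*ratings)
print(f"teaching effectiveness: H={h_stat:.2f}, p={p_likert:.3f}")
```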
Results
The majority of the tested LLMs met the passing threshold on the CDLE benchmark, with Doubao‐pro 32k and Qwen2‐72b achieving the highest accuracy (81% each). Doubao‐pro 32k also demonstrated the highest agreement rate with the reference answers when providing explanations (98%). Although the teaching effectiveness scores on the Likert scale differed significantly among the LLMs, all models demonstrated a commendable ability to deliver comprehensible and effective instructional content. In adversarial testing, GPT‐4 exhibited the smallest decline in accuracy (2%, p = 0.623), while ChatGLM‐4 demonstrated the smallest reduction in agreement rate (14.6%, p = 0.001).
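The before-versus-after comparison in the adversarial setting can be illustrated the same way. The sketch below, again with hypothetical counts rather than the study's data, tests whether a model's accuracy decline after adversarial modification of the questions is statistically significant; a large p-value, as reported for GPT‐4, indicates a drop compatible with chance.

```python
# A minimal sketch, assuming hypothetical counts: chi-squared test on a
# before/after contingency table to judge an adversarial accuracy drop.
import numpy as np
from scipy.stats import chi2_contingency

N = 200                                    # questions per condition
correct_before, correct_after = 155, 151   # e.g., a 2-percentage-point drop

table = np.array([
    [correct_before, N - correct_before],  # original questions
    [correct_after,  N - correct_after],   # adversarially modified questions
])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # large p: decline not significant
```

Because the same questions are answered in both conditions, a paired test such as McNemar's would also be defensible; the unpaired chi-squared shown here simply stays within the Wald χ² family the abstract names.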
Conclusions
LLMs trained on Chinese corpora, such as Doubao‐pro 32k, achieved higher accuracy than GPT‐4 in answering and explaining questions, although the difference was not statistically significant. However, during adversarial testing, all models exhibited diminished performance, with GPT‐4 displaying comparatively greater robustness. Future research should further investigate the interpretability of LLM outputs and develop strategies to mitigate the hallucinations these models generate in medical education.
| Bibliography | Yu‐Tao Xiong and Zheng‐Zhe Zhan contributed equally to this work. Funding: (1) Sichuan Science and Technology Program (grant numbers 2024NSFSC0659 and 2023YFG0272); (2) Research and Development Program, West China Hospital of Stomatology, Sichuan University (grant number RD‐03‐202303); (3) National Natural Science Foundation of China (grant number 62376176). |
|---|---|
| ISSN | 1396-5883; 1600-0579 |
| DOI | 10.1111/eje.13073 |