ChatGPT in medical imaging higher education


Bibliographic Details
Published in: Radiography (London, England: 1995), Vol. 29, No. 4, pp. 792-799
Main Authors: Currie, G., Singh, C., Nelson, T., Nabasenja, C., Al-Hayek, Y., Spuur, K.
Format: Journal Article
Language: English
Published: Netherlands: Elsevier Ltd, 01.07.2023

Summary: Academic integrity among radiographers and nuclear medicine technologists/scientists in both higher education and scientific writing has been challenged by advances in artificial intelligence (AI). The recent release of ChatGPT, a chatbot powered by GPT-3.5 capable of producing accurate and human-like responses to questions in real time, has redefined the boundaries of academic and scientific writing. These boundaries require objective evaluation. ChatGPT was tested against six subjects across the first three years of the medical radiation science undergraduate course for both examinations (n = 6) and written assignment tasks (n = 3). ChatGPT submissions were marked against standardised rubrics, and the results were compared to student cohorts. Submissions were also evaluated by Turnitin for similarity and AI scores. ChatGPT powered by GPT-3.5 performed below the average student in all written tasks, with an increasing disparity as subjects advanced. ChatGPT performed better than the average student in foundation or general subject examinations, where shallow responses meet learning outcomes. For discipline-specific subjects, ChatGPT lacked the depth, breadth, and currency of insight to provide pass-level answers. ChatGPT simultaneously poses a risk to academic integrity in writing and assessment while affording a tool for enhanced learning environments. Both the risks and the benefits are likely to be restricted to learning outcomes of lower taxonomies and constrained by higher-order taxonomies. ChatGPT powered by GPT-3.5 has limited capacity to support student cheating, introduces errors and fabricated information, and is readily identified by software as AI generated. Its lack of depth of insight and unsuitability for professional communication also limit its capacity as a learning enhancement tool.
ISSN: 1078-8174, 1532-2831
DOI: 10.1016/j.radi.2023.05.011