Real Customization or Just Marketing: Are Customized Versions of Generative AI Useful? [version 3; peer review: 3 approved, 1 approved with reservations]

Bibliographic Details
Published in: F1000Research, Vol. 13, p. 791
Main Authors: Garrido-Merchán, Eduardo C.; Arroyo-Barrigüete, Jose Luis; Borrás-Pala, Francisco; Escobar-Torres, Leandro; Martínez de Ibarreta, Carlos; Ortíz-Lozano, Jose María; Rua-Vieites, Antonio
Format: Journal Article
Language: English
Published: England, F1000 Research Ltd, 2024
ISSN: 2046-1402
DOI: 10.12688/f1000research.153129.3

Summary:

Background: Large Language Models (LLMs), such as OpenAI's ChatGPT-4 Turbo, are revolutionizing several industries, including higher education. In this context, LLMs can be personalised through a customization process to meet student demands in each particular subject, such as statistics. Recently, OpenAI introduced the possibility of customizing its model through a natural-language web interface, enabling the creation of customised GPT versions deliberately conditioned to meet the demands of a specific task.

Methods: This preliminary research aims to assess the potential of customised GPTs. After developing a Business Statistics Virtual Professor (BSVP) tailored to students at the Universidad Pontificia Comillas, its behaviour was evaluated and compared with that of ChatGPT-4 Turbo. First, each professor collected 15-30 genuine student questions from "Statistics and Probability" and "Business Statistics" courses across seven degrees, primarily second-year courses. Second, these questions, often ambiguous and imprecise, were posed to ChatGPT-4 Turbo and BSVP, and their initial responses were recorded without follow-ups. Third, professors blindly evaluated the responses on a 0-10 scale, considering quality, depth, and personalization. Finally, a statistical comparison of the two systems' performance was conducted, as sketched below.

Results: The results lead to several conclusions. First, a substantial change in communication style was observed: following the instructions it was trained with, BSVP responded in a more relatable and friendly tone, even incorporating a few minor jokes. Second, when explicitly asked for something like "I would like to practice a programming exercise similar to those in R practice 4," BSVP could provide a far superior response. Lastly, regarding overall performance, quality, depth, and alignment with the specific content of the course, no statistically significant differences were observed between the responses of BSVP and ChatGPT-4 Turbo.

Conclusions: Customised assistants trained with prompts appear to offer advantages as virtual aids for students, yet they do not constitute a substantial improvement over ChatGPT-4 Turbo.
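The abstract does not state which statistical test the authors applied to the blinded 0-10 ratings, so the following is only a minimal Python sketch of one plausible paired comparison (a Wilcoxon signed-rank test on per-question score pairs). The score lists are hypothetical placeholders, not data from the study.

# Illustrative sketch only: the test choice and the scores are assumptions,
# not taken from the paper.
from scipy.stats import wilcoxon

# Hypothetical blinded 0-10 ratings, one score per student question for each system.
scores_chatgpt4_turbo = [7, 8, 6, 9, 7, 8, 5, 7, 8, 6]
scores_bsvp           = [8, 8, 7, 9, 6, 8, 6, 7, 8, 7]

# Paired test: are the per-question score differences centred on zero?
stat, p_value = wilcoxon(scores_chatgpt4_turbo, scores_bsvp)
print(f"Wilcoxon statistic = {stat:.2f}, p-value = {p_value:.3f}")

# A p-value above the chosen significance level (e.g. 0.05) would be consistent
# with the paper's finding of no statistically significant difference.

A paired design fits the setup described in the Methods, since both systems answered the same set of student questions and were rated by the same professors.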