Comparative Analysis of Accuracy, Readability, Sentiment, and Actionability: Artificial Intelligence Chatbots (ChatGPT and Google Gemini) versus Traditional Patient Information Leaflets for Local Anesthesia in Eye Surgery
Published in | British and Irish Orthoptic Journal, Vol. 20, No. 1, pp. 183–192
---|---
Main Authors | Gondode, Prakash; Duggal, Sakshi; Garg, Neha; Lohakare, Pooja; Jakhar, Jubin; Bharti, Swati; Dewangan, Shraddha
Format | Journal Article
Language | English
Published | Sheffield, England: Ubiquity Press Ltd, 2024
Abstract | Background and Aim: Eye surgeries often evoke strong negative emotions in patients, including fear and anxiety. Patient education material plays a crucial role in informing and empowering individuals, but traditional sources of medical information may not effectively address individual patient concerns or cater to varying levels of understanding. This study conducts a comparative analysis of the accuracy, completeness, readability, tone, and understandability of patient education material generated by AI chatbots versus traditional Patient Information Leaflets (PILs), focusing on local anesthesia in eye surgery.
Methods: Expert reviewers evaluated responses generated by the AI chatbots (ChatGPT and Google Gemini) and a traditional PIL (the Royal College of Anaesthetists' PIL) for accuracy, completeness, readability, sentiment, and understandability. Statistical analyses, including ANOVA and Tukey HSD tests, were conducted to compare the performance of the sources.
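The statistical workflow named here, an ANOVA followed by Tukey HSD post-hoc comparisons, can be illustrated with a short, self-contained sketch. The ratings, group sizes, and the choice of SciPy and statsmodels below are assumptions for illustration only; the study's raw reviewer scores and software are not given in this record.

```python
# Minimal sketch of the reported analysis: one-way ANOVA + Tukey HSD.
# All ratings below are hypothetical 5-point Likert scores, NOT study data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

chatgpt = np.array([5, 5, 4, 5, 4, 5, 5, 4])  # hypothetical accuracy ratings
gemini = np.array([5, 4, 4, 5, 4, 5, 4, 4])
pil = np.array([5, 5, 5, 5, 4, 5, 5, 5])

# One-way ANOVA: is there any difference among the three sources?
f_stat, p_value = stats.f_oneway(chatgpt, gemini, pil)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD: which pairs of sources differ, controlling family-wise error?
scores = np.concatenate([chatgpt, gemini, pil])
labels = ["ChatGPT"] * 8 + ["Gemini"] * 8 + ["PIL"] * 8
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```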
Results: Readability analysis showed variations in complexity among the sources: the AI chatbots offered more simplified language, while the PIL maintained better overall readability and accessibility. Sentiment analysis revealed differences in emotional tone, with Google Gemini exhibiting the most positive sentiment. The AI chatbots demonstrated superior understandability and actionability, while the PIL excelled in completeness. Overall, ChatGPT scored slightly higher than Google Gemini on accuracy (mean ± standard deviation: 4.71 ± 0.50 vs 4.61 ± 0.62) and completeness (4.55 ± 0.58 vs 4.47 ± 0.58), but the PIL performed best on both accuracy (4.84 ± 0.37) and completeness (4.88 ± 0.33); the difference in completeness was statistically significant (p < 0.05).
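Readability and sentiment scoring of this kind is commonly done with off-the-shelf libraries. The sketch below uses textstat (Flesch indices) and VADER (compound sentiment) as stand-in tools, since the record does not name the instruments actually used, and the sample passage is invented.

```python
# Minimal sketch of readability + sentiment analysis of a patient-education
# passage, assuming the textstat and vaderSentiment packages are installed.
import textstat
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Invented sample text, not taken from any of the evaluated sources.
passage = ("Before your eye operation, a small injection numbs the area "
           "so you will not feel pain during the procedure.")

# Standard readability indices: higher Reading Ease = easier text;
# Flesch-Kincaid Grade approximates the US school grade required.
print("Flesch Reading Ease:", textstat.flesch_reading_ease(passage))
print("Flesch-Kincaid Grade:", textstat.flesch_kincaid_grade(passage))

# VADER compound sentiment lies in [-1, 1]; positive values suggest a
# reassuring tone, which is how tone differences could be quantified.
analyzer = SentimentIntensityAnalyzer()
print("Sentiment (compound):", analyzer.polarity_scores(passage)["compound"])
```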
Conclusion: AI chatbots show promise as innovative tools for patient education, complementing traditional PILs. By leveraging the strengths of both AI-driven technologies and human expertise, healthcare providers can enhance patient education and empower individuals to make informed decisions about their health and medical care.
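The understandability and actionability figures reported above are conventionally produced with the Patient Education Materials Assessment Tool (PEMAT), whose development paper appears in the reference list below; PEMAT reports each domain as the percentage of applicable items rated "Agree". A minimal sketch of that scoring convention, with hypothetical item ratings:

```python
# Minimal sketch of PEMAT-style percentage scoring. Each item is rated
# Agree (1), Disagree (0), or None for not-applicable. All item ratings
# below are hypothetical, not the study's data.
def pemat_score(ratings):
    """Percent of applicable items rated 'Agree' (PEMAT convention)."""
    applicable = [r for r in ratings if r is not None]
    return 100.0 * sum(applicable) / len(applicable)

understandability_items = [1, 1, 0, 1, None, 1, 1, 1, 0, 1, 1, 1, None]
actionability_items = [1, 0, 1, None, 1, 0, 1]

print(f"Understandability: {pemat_score(understandability_items):.0f}%")
print(f"Actionability:     {pemat_score(actionability_items):.0f}%")
```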
Audience | Academic |
Author | Gondode, Prakash; Bharti, Swati; Garg, Neha; Duggal, Sakshi; Lohakare, Pooja; Jakhar, Jubin; Dewangan, Shraddha
CitedBy_id | 10.7759/cureus.69996; 10.4103/ija.ija_930_24
Cites_doi | 10.1093/her/cyh009 10.1016/j.pec.2014.05.027 10.1007/s40119-023-00347-0 10.1613/jair.4272 10.1097/UPJ.0000000000000490 10.1001/jama.2018.17163 10.1136/bmj.38926.629329.AE 10.3389/fpsyg.2023.1190326 10.1093/jamia/ocy174 10.1016/j.pec.2005.05.004 10.2147/PRBM.S314214 10.1136/bmj.39246.581169.80 10.1007/s00405-023-08319-9 |
ContentType | Journal Article |
Copyright | © 2024 The Author(s). Copyright 2024 Ubiquity Press Ltd. This work is published under the Creative Commons Attribution 4.0 licence (https://creativecommons.org/licenses/by/4.0).
DOI | 10.22599/bioj.377 |
Discipline | Medicine |
EISSN | 2516-3590 |
EndPage | 192 |
Genre | Journal Article |
ISSN | 2516-3590 1743-9868 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
Keywords | AI Artificial Intelligence; Cataract; Local anesthetic; Patient education handout; Readability
Language | English |
License | Copyright: © 2024 The Author(s). |
ORCID | Gondode: 0000-0003-1014-8407; Duggal: 0000-0002-8865-0854; Garg: 0000-0003-4817-9807; Lohakare: 0009-0003-9190-9219; Jakhar: 0000-0003-1137-334X; Bharti: 0000-0003-1549-902X; Dewangan: 0009-0003-8670-4198
OpenAccessLink | https://doaj.org/article/e93315be427d4f2da29ccf8f0246e947 |
PMID | 39183761 |
PageCount | 10 |
PublicationDate | 2024
PublicationPlace | Sheffield, England
PublicationTitle | British and Irish orthoptic journal |
PublicationTitleAlternate | Br Ir Orthopt J |
PublicationYear | 2024 |
Publisher | Ubiquity Press Ltd; White Rose University Press
References |
– 'Exploring the possible use of AI chatbots in public health education: feasibility study'. JMIR Medical Education. 2023; 9.
– 'Artificial intelligence chatbots as sources of patient education material for obstructive sleep apnoea: ChatGPT versus Google Bard'. European Archives of Oto-Rhino-Laryngology. 2024; 281(2): 985. doi: 10.1007/s00405-023-08319-9
– 'Effectiveness of strategies for informing, educating, and involving patients'. BMJ. 2007; 335(7609): 24. doi: 10.1136/bmj.39246.581169.80
– 'Developing a quality criteria framework for patient decision aids: online international Delphi consensus process'. BMJ. 2006; 333(7565): 417. doi: 10.1136/bmj.38926.629329.AE
– 'Why organizations continue to create patient information leaflets with readability and usability problems: an exploratory study'. Health Education Research. 2005; 20(4): 485. doi: 10.1093/her/cyh009
– 'Causability and explainability of artificial intelligence in medicine'. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. 2019; 9(4).
– 'The role of pictures in improving health communication: a review of research on attention, comprehension, recall, and adherence'. Patient Education and Counseling. 2006; 61(2): 173. doi: 10.1016/j.pec.2005.05.004
– 'Sentiment analysis of short informal texts'. Journal of Artificial Intelligence Research. 2014; 50: 723. doi: 10.1613/jair.4272
– 'Predictors of eHealth usage: insights on the digital divide from the Health Information National Trends Survey 2012'. Journal of Medical Internet Research. 2014; 16(7).
– 'Detection of emotion by text analysis using machine learning'. Frontiers in Psychology. 2023; 14: 1190326. doi: 10.3389/fpsyg.2023.1190326
– 'Fear and anxiety associated with cataract surgery under local anesthesia in adults: a systematic review'. Psychology Research and Behavior Management. 2021; 14: 781. doi: 10.2147/PRBM.S314214
– 'How scientists can take the lead in establishing ethical practices for social media research'. Journal of the American Medical Informatics Association. 2019; 26(4): 311. doi: 10.1093/jamia/ocy174
– 'Cataract surgery practice patterns worldwide: a survey'. BMJ Open Ophthalmology. 2021; 6(1).
– 'Can artificial intelligence improve the readability of patient education materials on aortic stenosis? A pilot study'. Cardiology and Therapy. 2024; 13(1): 137. doi: 10.1007/s40119-023-00347-0
– 'Comparison of ChatGPT and traditional patient education materials for men's health'. Urology Practice. 2024; 11(1): 87. doi: 10.1097/UPJ.0000000000000490
– 'Development of the Patient Education Materials Assessment Tool (PEMAT): a new measure of understandability and actionability for print and audiovisual patient information'. Patient Education and Counseling. 2014; 96(3): 395. doi: 10.1016/j.pec.2014.05.027
– 'Clinical decision support in the era of artificial intelligence'. JAMA. 2018; 320(21): 2199. doi: 10.1001/jama.2018.17163
– key20240819080145_B1
– key20240819080145_B6
– key20240819080145_B17
– key20240819080145_B21
StartPage | 183 |
SubjectTerms | Accuracy; Age groups; AI artificial intelligence; Analysis; Anesthesia; Artificial intelligence; Cataract; Caterers and catering; Chatbots; Comparative analysis; Computational linguistics; Confidentiality; Empowerment; Eye; Eye surgery; Human subjects; Information services; Language; Language processing; Large language models; Local anesthesia; Local anesthetic; Medical advice systems; Natural language; Natural language interfaces; Online information services; Online services; Patient education; Patient education handout; Professional ethics; Professionals; Readability; Sentiment analysis; Surgery; Surgical outcomes; Variance analysis
URI | https://www.ncbi.nlm.nih.gov/pubmed/39183761 https://www.proquest.com/docview/3101159372 https://www.proquest.com/docview/3097152456 https://doaj.org/article/e93315be427d4f2da29ccf8f0246e947 |
Volume | 20 |