Building Trust: Developing an Ethical Communication Framework for Navigating Artificial Intelligence Discussions and Addressing Potential Patient Concerns
| Published in | Blood, Vol. 142, No. Supplement 1, p. 7229 |
| --- | --- |
| Main Authors | Ford, Douglas William; Tisoskey, Scott Patrick; Locantore-Ford, Patricia A. |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Inc., 02.11.2023 |
| Online Access | https://dx.doi.org/10.1182/blood-2023-190943 |
| ISSN | 0006-4971; 1528-0020 |
| DOI | 10.1182/blood-2023-190943 |
| Copyright | 2023 The American Society of Hematology |
Abstract

Introduction: In an era where technological advancements in large language models and generative artificial intelligence (AI) platforms like ChatGPT continually redefine the boundaries of medicine, the advent of Amazon Web Services (AWS) HealthScribe on July 26th, 2023, heralds a transformative moment in healthcare. This HIPAA-eligible generative AI service, capable of transcribing patient-provider conversations and automatically entering them into an electronic health record (EHR) system, represents a profound intersection of technology and medical practice. As this technology permeates clinical settings, addressing the patient concerns associated with AI becomes paramount.
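To make the workflow concrete, here is a minimal sketch of how such a transcription job might be started programmatically. It assumes the boto3 Transcribe client's start_medical_scribe_job operation; the job name, S3 locations, and IAM role are hypothetical placeholders, and entry into the EHR would depend on the practice's own integration.

```python
# Minimal sketch (assumed setup): kicking off an AWS HealthScribe transcription job
# with the boto3 Transcribe client. The job name, bucket names, and role ARN are
# hypothetical placeholders, not values from the study.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_medical_scribe_job(
    MedicalScribeJobName="visit-2023-11-02-001",
    Media={"MediaFileUri": "s3://example-clinic-audio/visit-001.wav"},
    OutputBucketName="example-clinic-notes",  # transcript and draft clinical note are written here
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ExampleHealthScribeRole",
    Settings={"ShowSpeakerLabels": True, "MaxSpeakerLabels": 2},
)

# Poll the job status; once complete, the output in S3 can be reviewed by the
# clinician and pushed into the EHR through the practice's own integration.
job = transcribe.get_medical_scribe_job(MedicalScribeJobName="visit-2023-11-02-001")
print(job["MedicalScribeJob"]["MedicalScribeJobStatus"])
```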
Although providers may grasp the intricacies of such technology swiftly, patients are likely to harbor concerns. Patients may be unfamiliar with the technology or question its safety. Clear and precise communication will be essential for physicians to ease patient concerns. Our team interviewed forty-eight subjects to survey their understanding, concerns, and opinions on artificial intelligence. We then classified and visually charted their responses. Based on our data, we created a framework for ethical communication physicians can follow when talking to patients about using artificial intelligence in clinical settings.
Method: A multidisciplinary team encompassing physicians, advanced practice providers specializing in hematology and oncology, and bioethicists engaged with forty-eight subjects. The study population represented a diverse cross-section of society, differing in aspects such as age, sex assigned at birth, political orientation, education, income, ethnicity, occupation, and religious affiliation.
Our investigative process included a structured interview containing twenty-five foundational questions. No personal health information was collected during the questionnaire, and all questions sought only to garner subject opinion. Questions were designed to probe areas of interest such as: (1) the subject's general feelings about AI in healthcare, (2) their familiarity with AI technologies, (3) the existence of any specific concerns, and (4) a deeper exploration of those concerns. The responses were meticulously collected, categorized, and analyzed to discern emergent trends.
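As an illustration of the kind of tabulation this involves, the sketch below buckets hypothetical categorized responses into the age brackets reported in the Results and cross-tabulates them against a sentiment label. The records, column names, and labels are assumptions for illustration, not the study's actual coding scheme.

```python
# Illustrative sketch only: cross-tabulating categorized interview responses by age
# bracket. The records and labels below are hypothetical, not the study's data.
import pandas as pd

responses = pd.DataFrame({
    "age":       [22, 41, 68, 30, 55, 72],
    "sentiment": ["indifferent", "unfamiliar", "concerned",
                  "indifferent", "indifferent", "concerned"],
})

# Bucket subjects into the age brackets used in the Results section.
responses["bracket"] = pd.cut(
    responses["age"],
    bins=[12, 35, 65, 120],
    labels=["12-35", "35-65", "65+"],
)

# Cross-tabulate sentiment by bracket to surface emergent trends.
print(pd.crosstab(responses["bracket"], responses["sentiment"]))
```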
Results: Our team developed the TRUST Framework based on our research findings by identifying and addressing the three primary concerns with AI in healthcare: transparency, confidentiality, and consent.
The data presented a mosaic of varied opinions, with minimal discernible trends correlating with specific demographic attributes. One of the subtle patterns identified pertained to age. Subjects within the 12-35 age bracket demonstrated familiarity with artificial intelligence, generally expressing indifference rather than concern or enthusiasm. Those aged 35-65 exhibited more pronounced indifference than any other age demographic and revealed an unfamiliarity with existing artificial intelligence tools. Conversely, respondents aged 65 and older expressed the highest level of concern and a prevalent unfamiliarity with the range of available artificial intelligence applications.
Conclusion: Effective communication when introducing the use of artificial intelligence in healthcare settings is imperative. This conclusion is grounded not only in abstract reasoning but also in real-world feedback received from a diverse subject population. Our research revealed that subjects have varying feelings and concerns about artificial intelligence, irrespective of their background. Therefore, avoiding unconscious bias and having a framework for communicating how AI will be used is essential.
As the frontier of AI continues to expand, the need for ethical and transparent communication will only grow. This study serves as a call to action for healthcare providers to commit to clear, honest, and empathetic communication about AI's role in patient care. Doing so will promote the overall acceptance of AI in healthcare, subsequently enhancing patient outcomes and alleviating patient concerns by building trust.
Locantore-Ford: Cardinal Health: Consultancy, Honoraria; Accumen: Consultancy, Current Employment, Membership on an entity's Board of Directors or advisory committees.
[Display omitted]
| Author | Affiliation |
| --- | --- |
| Ford, Douglas William | Harvard Medical School, Harvard University, Cambridge, MA |
| Tisoskey, Scott Patrick | Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA |
| Locantore-Ford, Patricia A. | Abramson Cancer Center, Pennsylvania Hospital, Philadelphia, PA |