Accelerating the integration of ChatGPT and other large‐scale AI models into biomedical research and healthcare
Published in | MedComm - Future medicine Vol. 2; no. 2 |
---|---|
Main Authors | Wang, Ding‐Qiao; Feng, Long‐Yu; Ye, Jin‐Guo; Zou, Jin‐Gen; Zheng, Ying‐Feng |
Format | Journal Article |
Language | English |
Published | London: John Wiley & Sons, Inc (Wiley), 01.06.2023 |
Subjects | Artificial intelligence; Biomedical research; ChatGPT; Deep learning; Electronic health records; Large language models; Neural networks; Healthcare |
Abstract | Large‐scale artificial intelligence (AI) models such as ChatGPT have the potential to improve performance on many benchmarks and real‐world tasks. However, these models are difficult to develop and maintain because of their complexity and resource requirements, so they remain largely inaccessible to healthcare industries and clinicians. This situation may soon change thanks to advances in graphics processing unit (GPU) programming and parallel computing. More importantly, leveraging existing large‐scale AI models such as GPT‐4 and Med‐PaLM and integrating them into multiagent models (e.g., Visual‐ChatGPT) will facilitate real‐world implementations. This review aims to raise awareness of the potential applications of these models in healthcare. We provide a general overview of several advanced large‐scale AI models, including language models, vision‐language models, graph learning models, language‐conditioned multiagent models, and multimodal embodied models. We discuss their potential medical applications in addition to the challenges and future directions. Importantly, we stress the need to align these models with human values and goals, for example through reinforcement learning from human feedback, to ensure that they provide accurate and personalized insights that support human decision‐making and improve healthcare outcomes.
This review provides an overview of large‐scale AI models, including language models (e.g., ChatGPT), vision‐language models, and language‐conditioned multiagent models, and discusses their potential applications in medicine, as well as their limitations and future trends. We also propose how large‐scale AI models can be integrated into various scenarios of clinical applications. |
---|---|
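To make the abstract's point about integrating existing large‐scale models into clinical workflows more concrete, the following is a minimal, hedged sketch: the endpoint URL, model name, and response schema are hypothetical placeholders rather than any vendor's documented API, and the draft output is intended for clinician review only.

```python
import os
import requests

# Hypothetical endpoint and schema; substitute your provider's documented
# chat/completions API before using anything like this in practice.
LLM_ENDPOINT = "https://example-llm-provider.invalid/v1/chat"
API_KEY = os.environ.get("LLM_API_KEY", "")


def draft_discharge_summary(structured_note: dict) -> str:
    """Ask a hosted large language model to draft a discharge summary.

    The draft is returned for clinician review rather than filed
    automatically, reflecting the review's emphasis on AI that supports,
    not replaces, human decision-making.
    """
    prompt = (
        "Draft a concise discharge summary from the following structured "
        f"note, and flag any missing information:\n{structured_note}"
    )
    response = requests.post(
        LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-medical-llm",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape {"output": "..."}; adjust to the real API.
    return response.json()["output"]
```

In a multiagent setup such as the Visual‐ChatGPT pattern mentioned in the abstract, a function like this would be one of several tools (image analysis, record retrieval, report drafting) that a coordinating language model can call on behalf of the clinician.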
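The abstract's closing point about aligning models with human values via reinforcement learning from human feedback (RLHF) rests, in standard formulations, on a reward model trained from pairwise human preferences. The PyTorch sketch below shows only that pairwise (Bradley–Terry style) preference loss on toy reward scores; the tensors, shapes, and values are illustrative and not taken from the article.

```python
import torch
import torch.nn.functional as F


def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss used when training RLHF reward models.

    Minimizing this loss pushes the reward model to score the response a
    human annotator preferred above the response the annotator rejected.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()


# Toy example: scalar reward-model scores for three preference pairs.
chosen = torch.tensor([1.2, 0.3, 2.0])
rejected = torch.tensor([0.5, 0.4, 1.1])
print(preference_loss(chosen, rejected))
```

The fitted reward model would then guide a policy-optimization step (e.g., PPO) so that generated clinical answers drift toward what human reviewers judge accurate and appropriate.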
Author | Ye, Jin‐Guo; Feng, Long‐Yu; Zou, Jin‐Gen; Zheng, Ying‐Feng; Wang, Ding‐Qiao |
Author_xml | – Wang, Ding‐Qiao (Sun Yat‐Sen University) – Feng, Long‐Yu (Sun Yat‐Sen University) – Ye, Jin‐Guo (Sun Yat‐Sen University) – Zou, Jin‐Gen (Beijing Institute of Technology) – Zheng, Ying‐Feng (Sun Yat‐Sen University; ORCID 0000-0002-9952-6445; zhyfeng@mail.sysu.edu.cn) |
ContentType | Journal Article |
Copyright | 2023 The Authors. Published by John Wiley & Sons Australia, Ltd on behalf of Sichuan International Medical Exchange & Promotion Association (SCIMEA). 2023. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
DOI | 10.1002/mef2.43 |
Discipline | Medicine |
EISSN | 2769-6456 |
EndPage | n/a |
Genre | reviewArticle |
GrantInformation_xml | – fundername: National Natural Science Foundation of China funderid: 82171034 – fundername: National Key R&D Program of China funderid: 2022YFC2502802 – fundername: The High‐level Hospital Construction Project, Zhongshan Ophthalmic Center, Sun Yat‐sen University funderid: 303010303058; 303020107; 303020108 |
ISSN | 2769-6456 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 2 |
Language | English |
License | Attribution |
ORCID | 0000-0002-9952-6445 |
PageCount | 28 |
PublicationCentury | 2000 |
PublicationDate | June 2023 (2023-06-01) |
PublicationDateYYYYMMDD | 2023-06-01 |
PublicationDecade | 2020 |
PublicationPlace | London |
PublicationTitle | MedComm - Future medicine |
PublicationYear | 2023 |
Publisher | John Wiley & Sons, Inc; Wiley |
SecondaryResourceType | review_article |
SubjectTerms | Artificial intelligence; Biomedical research; Chatbots; ChatGPT; Clinical outcomes; Datasets; Deep learning; Electronic health records; GPT‐4; healthcare; Language; Large language models; medicine; Neural networks; Workloads |
Title | Accelerating the integration of ChatGPT and other large‐scale AI models into biomedical research and healthcare |
URI | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fmef2.43 https://www.proquest.com/docview/3090875258 https://doaj.org/article/bdfc380679a6489c8e82fc0cb5b739fb |
Volume | 2 |