Explainable automated coding of clinical notes using hierarchical label-wise attention networks and label embedding initialisation
Published in | Journal of Biomedical Informatics, Vol. 116, p. 103728 |
---|---|
Main Authors | Dong, Hang; Suárez-Paniagua, Víctor; Whiteley, William; Wu, Honghan |
Format | Journal Article |
Language | English |
Published | United States: Elsevier Inc, 01.04.2021 |
ISSN | 1532-0464 1532-0480 |
DOI | 10.1016/j.jbi.2021.103728 |
Abstract | •Explainable automated medical coding through attention-based deep learning.
•The model highlights key words and sentences for each code/label.
•Label embedding initialisation can enhance deep learning for multi-label classification.
•Formal comparison to major deep learning models showing better or comparable performance.
•Discussion with future studies on potentially deploying models in practice.
Diagnostic or procedural coding of clinical notes aims to derive a coded summary of disease-related information about patients. Such coding is usually done manually in hospitals but could potentially be automated to improve the efficiency and accuracy of medical coding. Recent studies on deep learning for automated medical coding have achieved promising performance. However, the explainability of these models is usually poor, preventing them from being used confidently to support clinical practice. Another limitation is that these models mostly assume independence among labels, ignoring the complex correlations among medical codes that could potentially be exploited to improve performance.
To address the issues of model explainability and label correlations, we first propose a Hierarchical Label-wise Attention Network (HLAN), which aims to make the model interpretable by quantifying the importance (as attention weights) of the words and sentences related to each label. Second, we propose to enhance the major deep learning models with a label embedding (LE) initialisation approach, which learns a dense, continuous vector representation of the labels and then injects this representation into the final layers and the label-wise attention layers of the models. We evaluated the methods in three settings on the MIMIC-III discharge summaries: full codes, top-50 codes, and the UK NHS (National Health Service) COVID-19 (coronavirus disease 2019) shielding codes. Experiments were conducted to compare the HLAN model and label embedding initialisation against state-of-the-art neural-network-based methods, including variants of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).
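The label-wise attention idea above can be sketched in a few lines. This is a dependency-free NumPy illustration, not the paper's GRU-based implementation: it shows only the word-level half of the hierarchical mechanism (the sentence level is analogous), and all names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_wise_attention(H, U):
    """Per-label attention over token hidden states.
    H: (T, d) hidden states for T tokens; U: (L, d) one learned query per label.
    Returns C: (L, d) label-specific document vectors, alpha: (L, T) weights."""
    scores = U @ H.T                 # (L, T): relevance of each token to each label
    alpha = softmax(scores, axis=1)  # one attention distribution per label
    C = alpha @ H                    # (L, d): weighted token states, per label
    return C, alpha

# Toy example: 6 tokens, hidden size 4, 3 candidate codes/labels.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))
U = rng.normal(size=(3, 4))
C, alpha = label_wise_attention(H, U)
```

Each row of `alpha` is a probability distribution over the tokens for one label; highlighting the highest-weighted words and sentences per label is what yields HLAN's per-code explanations.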
HLAN achieved the best Micro-level AUC and F1 on the top-50 code prediction, 91.9% and 64.1%, respectively, and results comparable to the other models on the NHS COVID-19 shielding code prediction, around 97% Micro-level AUC. More importantly, in the analysis of model explanations, by highlighting the most salient words and sentences for each label, HLAN showed more meaningful and comprehensive model interpretation than the CNN-based models and its downgraded baselines, HAN and HA-GRU. Label embedding (LE) initialisation significantly boosted the previous state-of-the-art model, CNN with attention mechanisms, to 52.5% Micro-level F1 on the full code prediction. The analysis of the layers initialised with label embeddings further explains the effect of this initialisation approach. The source code of the implementation and the results are openly available at https://github.com/acadTags/Explainable-Automated-Medical-Coding.
We draw the following conclusions from the evaluation results and analyses. First, with hierarchical label-wise attention mechanisms, HLAN provides results for automated coding that are better than or comparable to those of the state-of-the-art CNN-based models. Second, HLAN provides more comprehensive explanations for each label by highlighting key words and sentences in the discharge summaries, compared to the n-grams highlighted by the CNN-based models and the downgraded baselines, HAN and HA-GRU. Third, the performance of deep-learning-based multi-label classification for automated coding can be consistently boosted by initialising with label embeddings that capture the correlations among labels. We further discuss the advantages and drawbacks of the overall method regarding its potential to be deployed in a hospital and suggest areas for future studies. |
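The label embedding initialisation described in the abstract relies on label vectors whose geometry reflects code co-occurrence. As a minimal, dependency-free stand-in (the paper learns these embeddings differently; the factorisation and all names below are illustrative assumptions), one can derive such vectors from the training set's label co-occurrence matrix and use them to seed the output layer:

```python
import numpy as np

def label_embeddings(Y, dim):
    """Y: (N, L) binary document-label matrix. Returns (L, dim) dense vectors
    whose dot products approximate label co-occurrence (low-rank factorisation)."""
    C = Y.T.astype(float) @ Y.astype(float)  # (L, L) label co-occurrence counts
    C /= C.max() + 1e-9                      # scale counts to [0, 1]
    U, S, _ = np.linalg.svd(C)               # C is symmetric PSD: U holds eigenvectors
    return U[:, :dim] * np.sqrt(S[:dim])     # (L, dim): rows are label embeddings

# Toy example: labels 0 and 1 always co-occur, label 2 is mostly separate.
Y = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [1, 1, 1]])
E = label_embeddings(Y, dim=2)

# LE initialisation: seed the (L, dim) final projection (or the label-wise
# attention queries) with these rows instead of random weights, so that
# correlated labels start out close together in parameter space.
W_final = E.copy()
```

Because labels 0 and 1 co-occur in every document, their embedding rows come out (near-)identical, while label 2's row is distinct; initialising the classifier with such rows is what lets the model exploit label correlations from the start.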
ArticleNumber | 103728 |
Author | Suárez-Paniagua, Víctor Wu, Honghan Dong, Hang Whiteley, William |
Author_xml | – sequence: 1 givenname: Hang surname: Dong fullname: Dong, Hang email: hang.dong@ed.ac.uk organization: Centre for Medical Informatics, Usher Institute of Population Health Sciences and Informatics, University of Edinburgh, Edinburgh, United Kingdom – sequence: 2 givenname: Víctor surname: Suárez-Paniagua fullname: Suárez-Paniagua, Víctor organization: Centre for Medical Informatics, Usher Institute of Population Health Sciences and Informatics, University of Edinburgh, Edinburgh, United Kingdom – sequence: 3 givenname: William surname: Whiteley fullname: Whiteley, William organization: Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom – sequence: 4 givenname: Honghan surname: Wu fullname: Wu, Honghan organization: Institute of Health Informatics, University College London, London, United Kingdom |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/33711543 (View this record in MEDLINE/PubMed) |
ContentType | Journal Article |
Copyright | 2021 Elsevier Inc. Copyright © 2021 Elsevier Inc. All rights reserved. |
Copyright_xml | – notice: 2021 Elsevier Inc. – notice: Copyright © 2021 Elsevier Inc. All rights reserved. |
DBID | 6I. AAFTH AAYXX CITATION CGR CUY CVF ECM EIF NPM 7X8 |
DOI | 10.1016/j.jbi.2021.103728 |
DatabaseName | ScienceDirect Open Access Titles Elsevier:ScienceDirect:Open Access CrossRef Medline MEDLINE MEDLINE (Ovid) MEDLINE MEDLINE PubMed MEDLINE - Academic |
DatabaseTitle | CrossRef MEDLINE Medline Complete MEDLINE with Full Text PubMed MEDLINE (Ovid) MEDLINE - Academic |
DatabaseTitleList | MEDLINE MEDLINE - Academic |
Database_xml | – sequence: 1 dbid: NPM name: PubMed url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed sourceTypes: Index Database – sequence: 2 dbid: EIF name: MEDLINE url: https://proxy.k.utb.cz/login?url=https://www.webofscience.com/wos/medline/basic-search sourceTypes: Index Database |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Medicine Engineering Public Health |
EISSN | 1532-0480 |
ExternalDocumentID | 33711543 10_1016_j_jbi_2021_103728 S1532046421000575 |
Genre | Research Support, Non-U.S. Gov't Journal Article |
GeographicLocations | United Kingdom |
GeographicLocations_xml | – name: United Kingdom |
GrantInformation_xml | – fundername: Medical Research Council grantid: MR/S004149/1 – fundername: Medical Research Council grantid: MR/S004149/2 – fundername: Medical Research Council grantid: MC_PC_18029 – fundername: Chief Scientist Office grantid: SCAF/17/01 |
ISSN | 1532-0464 1532-0480 |
IngestDate | Thu Sep 04 17:34:31 EDT 2025 Mon Jul 21 05:35:59 EDT 2025 Tue Jul 01 04:12:08 EDT 2025 Thu Apr 24 22:51:58 EDT 2025 Fri Feb 23 02:39:58 EST 2024 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Deep learning; Label correlation; Multi-label classification; Natural Language Processing; Automated medical coding; Attention Mechanisms; Explainability |
Language | English |
License | This article is made available under the Elsevier license. Copyright © 2021 Elsevier Inc. All rights reserved. |
LinkModel | DirectLink |
OpenAccessLink | https://www.sciencedirect.com/science/article/pii/S1532046421000575 |
PMID | 33711543 |
PQID | 2501266540 |
PQPubID | 23479 |
ParticipantIDs | proquest_miscellaneous_2501266540 pubmed_primary_33711543 crossref_citationtrail_10_1016_j_jbi_2021_103728 crossref_primary_10_1016_j_jbi_2021_103728 elsevier_sciencedirect_doi_10_1016_j_jbi_2021_103728 |
PublicationCentury | 2000 |
PublicationDate | April 2021 2021-04-00 20210401 |
PublicationDateYYYYMMDD | 2021-04-01 |
PublicationDate_xml | – month: 04 year: 2021 text: April 2021 |
PublicationDecade | 2020 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States |
PublicationTitle | Journal of biomedical informatics |
PublicationTitleAlternate | J Biomed Inform |
PublicationYear | 2021 |
Publisher | Elsevier Inc |
Publisher_xml | – name: Elsevier Inc |
References_xml | – start-page: 4613 year: 2019 end-page: 4619 ident: b0140 article-title: Medical concept embedding with multiple ontological representations publication-title: IJCAI – volume: 2 start-page: 588 year: 2013 end-page: 592 ident: b0015 article-title: ICD-9-CM to ICD-10-CM codes: What? why? how? publication-title: Adv. Wound Care – start-page: 4171 year: 2019 end-page: 4186 ident: b0085 article-title: BERT: Pre-training of deep bidirectional transformers for language understanding publication-title: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) – start-page: 1724 year: 2014 end-page: 1734 ident: b0125 article-title: Learning phrase representations using RNN encoder–decoder for statistical machine translation, in publication-title: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) – start-page: 265 year: 2016 end-page: 283 ident: b0165 article-title: Tensorflow: A system for large-scale machine learning publication-title: Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation – start-page: 521 year: 2016 end-page: 526 ident: b0110 article-title: Improved neural network-based multi-label classification with better initialization leveraging label co-occurrence publication-title: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies – reference: J.P. Pestian, C. Brew, P. Matykiewicz, D.J. Hovermale, N. Johnson, K.B. Cohen, W. Duch, A shared task involving multi-label classification of clinical free text, in: Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, BioNLP ’07, Association for Computational Linguistics, USA, 2007, p. 97–104. 
– start-page: 249 year: 2010 end-page: 256 ident: b0160 article-title: Understanding the difficulty of training deep feedforward neural networks, in publication-title: Proceedings of the thirteenth international conference on artificial intelligence and statistics – reference: L. v. d. Maaten, G. Hinton, Visualizing data using t-SNE, Journal of machine learning research 9 (Nov) (2008) 2579–2605. – reference: T. Mikolov, I. Sutskever, K. Chen, G.S. Corrado, J. Dean, Distributed representations of words and phrases and their compositionality, in: Advances in neural information processing systems, 2013, pp. 3111–3119. – volume: 17 start-page: 646 year: 2010 end-page: 651 ident: b0025 article-title: A systematic literature review of automated clinical coding and classification systems publication-title: J. Am. Med. Inform. Assoc.: JAMIA – volume: 70 start-page: 329 year: 2019 end-page: 334 ident: b0040 article-title: Ethics of artificial intelligence in radiology: summary of the joint european and north american multisociety statement publication-title: Can. Assoc. Radiol. J. – reference: J. Nam, J. Kim, E. Loza Mencía, I. Gurevych, J. Fürnkranz, Large-scale multi-label text classification — revisiting neural networks, in: T. Calders, F. Esposito, E. Hüllermeier, R. Meo (Eds.), Machine Learning and Knowledge Discovery in Databases, Springer Berlin Heidelberg, Berlin, Heidelberg, 2014, pp. 437–452. – start-page: 1 year: 2020 end-page: 15 ident: b0075 article-title: Automated social text annotation with joint multilabel attention networks publication-title: IEEE Trans. Neural Networks Learn. Syst. – volume: 26 start-page: 1819 year: 2014 end-page: 1837 ident: b0065 article-title: A review on multi-label learning algorithms publication-title: IEEE Trans. Knowl. Data Eng. – reference: T. Fawcett, An introduction to roc analysis, Pattern Recognition Letters 27 (8) (2006) 861–874, rOC Analysis in Pattern Recognition. doi: 10.1016/j.patrec.2005.10.010. – reference: D. 
Bahdanau, K. Cho, Y. Bengio, Neural machine translation by jointly learning to align and translate, in: 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings, 2015, pp. 1–15. – volume: 36 start-page: 1234 year: 2019 end-page: 1240 ident: b0175 article-title: BioBERT: a pre-trained biomedical language representation model for biomedical text mining publication-title: Bioinformatics – reference: J. Mullenbach, S. Wiegreffe, J. Duke, J. Sun, J. Eisenstein, Explainable prediction of medical codes from clinical text, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), Association for Computational Linguistics, New Orleans, Louisiana, 2018, pp. 1101–1111. doi:10.18653/v1/N18-1100. – reference: I. Chalkidis, M. Fergadiotis, S. Kotitsas, P. Malakasiotis, N. Aletras, I. Androutsopoulos, An empirical study on large-scale multi-label text classification including few and zero-shot labels (2020). arXiv:2010.01653. – start-page: 667 year: 2010 end-page: 685 ident: b0070 article-title: Mining multi-label data publication-title: Data Mining and Knowledge Discovery Handbook – start-page: 8024 year: 2019 end-page: 8035 ident: b0170 article-title: Pytorch: An imperative style, high-Performance deep learning library publication-title: Advances in Neural Information Processing Systems 32 – reference: T. Searle, Z. Ibrahim, R. Dobson, Experimental evaluation and development of a silver-standard for the MIMIC-III clinical coding dataset, in: Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, Association for Computational Linguistics, Online, 2020, pp. 76–85. doi:10.18653/v1/2020.bionlp-1.8. – reference: D.P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014). – reference: Y. 
Chen, Predicting ICD-9 codes from medical notes - does the magic of BERT applies here?, https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1204/reports/custom/report25.pdf, stanford CS224N Custom Project (Option 3) (2020). – volume: 3 start-page: 1 year: 2016 end-page: 9 ident: b0010 article-title: MIMIC-III, a freely accessible critical care database publication-title: Scient. Data – reference: R. Řehuřek, P. Sojka, Software Framework for Topic Modelling with Large Corpora, in: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, ELRA, Valletta, Malta, 2010, pp. 45–50, http://is.muni.cz/publication/884893/en. – reference: R.M. Monarch, Human-in-the-Loop Machine Learning: Active learning and annotation for human-centered AI, Shelter Island, NY: Manning Publications Company, 2021, version 11, MEAP Edition (Manning Early Access Program). – reference: P. Cao, Y. Chen, K. Liu, J. Zhao, S. Liu, W. Chong, HyperCore: Hyperbolic and co-graph representation for automatic ICD coding, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online, 2020, pp. 3105–3114. doi:10.18653/v1/2020.acl-main.282. – reference: E. Gibaja, S. Ventura, A tutorial on multilabel learning, ACM Computing Survey 47 (3) (2015) 52:1–52:38. – reference: T. Baumel, J. Nassour-Kassis, R. Cohen, M. Elhadad, N. Elhadad, Multi-label classification of patient notes: case study on ICD code assignment, in: Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence, 2018, pp. 409–416. – reference: M. Falis, M. Pajak, A. Lisowska, P. Schrempf, L. Deckers, S. Mikhael, S. Tsaftaris, A. O’Neil, Ontological attention ensembles for capturing semantic concepts in ICD code prediction from clinical text, in: Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019), Association for Computational Linguistics, Hong Kong, 2019, pp. 168–177. 
doi:10.18653/v1/D19-6220. – reference: S. Karimi, X. Dai, H. Hassanzadeh, A. Nguyen, Automatic diagnosis coding of radiology reports: A comparison of deep learning and conventional classification methods, in: BioNLP 2017, Association for Computational Linguistics, Vancouver, Canada, 2017, pp. 328–332. doi:10.18653/v1/W17-2342. – volume: 18 start-page: 1338 year: 2006 end-page: 1351 ident: b0080 article-title: Multilabel neural networks with applications to functional genomics and text categorization publication-title: IEEE Trans. Knowl. Data Eng. – volume: 13 start-page: e0192360 year: 2018 ident: b0150 article-title: Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives publication-title: PloS one – start-page: 3132 year: 2018 end-page: 3142 ident: b0185 article-title: Few-shot and zero-shot multi-label learning for structured label spaces publication-title: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing – reference: A. Stewart, ICD-11 contains nearly 4x as many codes as ICD-10: Here’s what WHO has to say, https://www.beckersasc.com/asc-coding-billing-and-collections/icd-11-contains-nearly-4x-as-many-codes-as-icd-10-here-s-what-who-has-to-say.html, accessed 2 April, 2020 (2018). – reference: Z. Yang, D. Yang, C. Dyer, X. He, A. Smola, E. Hovy, Hierarchical attention networks for document classification, in: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016, pp. 1480–1489. – start-page: 1746 year: 2014 end-page: 1751 ident: b0145 article-title: Convolutional neural networks for sentence classification publication-title: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) – reference: S. Baker, A. 
Korhonen, Initializing neural networks for hierarchical multi-label text classification, in: BioNLP 2017, Association for Computational Linguistics, Vancouver, Canada, 2017, pp. 307–315. doi:10.18653/v1/W17-2339. – volume: 38 start-page: 50 year: 2017 end-page: 57 ident: b0045 article-title: European union regulations on algorithmic decision-making and a ”right to explanation” publication-title: AI Magaz. – start-page: 1724 year: 2014 ident: 10.1016/j.jbi.2021.103728_b0125 article-title: Learning phrase representations using RNN encoder–decoder for statistical machine translation, in – start-page: 667 year: 2010 ident: 10.1016/j.jbi.2021.103728_b0070 article-title: Mining multi-label data – ident: 10.1016/j.jbi.2021.103728_b0195 doi: 10.18653/v1/2020.acl-main.282 – ident: 10.1016/j.jbi.2021.103728_b0035 doi: 10.18653/v1/N18-1100 – volume: 3 start-page: 1 issue: 1 year: 2016 ident: 10.1016/j.jbi.2021.103728_b0010 article-title: MIMIC-III, a freely accessible critical care database publication-title: Scient. Data doi: 10.1038/sdata.2016.35 – ident: 10.1016/j.jbi.2021.103728_b0115 doi: 10.18653/v1/W17-2339 – ident: 10.1016/j.jbi.2021.103728_b0120 – ident: 10.1016/j.jbi.2021.103728_b0095 – start-page: 4613 year: 2019 ident: 10.1016/j.jbi.2021.103728_b0140 article-title: Medical concept embedding with multiple ontological representations – volume: 36 start-page: 1234 issue: 4 year: 2019 ident: 10.1016/j.jbi.2021.103728_b0175 article-title: BioBERT: a pre-trained biomedical language representation model for biomedical text mining publication-title: Bioinformatics doi: 10.1093/bioinformatics/btz682 – ident: 10.1016/j.jbi.2021.103728_b0020 – ident: 10.1016/j.jbi.2021.103728_b0030 doi: 10.18653/v1/W17-2342 – ident: 10.1016/j.jbi.2021.103728_b0005 – ident: 10.1016/j.jbi.2021.103728_b0135 – volume: 2 start-page: 588 issue: 10 year: 2013 ident: 10.1016/j.jbi.2021.103728_b0015 article-title: ICD-9-CM to ICD-10-CM codes: What? why? how? publication-title: Adv. 
Wound Care doi: 10.1089/wound.2013.0478 – volume: 38 start-page: 50 issue: 3 year: 2017 ident: 10.1016/j.jbi.2021.103728_b0045 article-title: European union regulations on algorithmic decision-making and a ”right to explanation” publication-title: AI Magaz. doi: 10.1609/aimag.v38i3.2741 – volume: 26 start-page: 1819 issue: 8 year: 2014 ident: 10.1016/j.jbi.2021.103728_b0065 article-title: A review on multi-label learning algorithms publication-title: IEEE Trans. Knowl. Data Eng. doi: 10.1109/TKDE.2013.39 – volume: 70 start-page: 329 issue: 4 year: 2019 ident: 10.1016/j.jbi.2021.103728_b0040 article-title: Ethics of artificial intelligence in radiology: summary of the joint european and north american multisociety statement publication-title: Can. Assoc. Radiol. J. doi: 10.1016/j.carj.2019.08.010 – ident: 10.1016/j.jbi.2021.103728_b0180 doi: 10.1016/j.patrec.2005.10.010 – start-page: 3132 year: 2018 ident: 10.1016/j.jbi.2021.103728_b0185 article-title: Few-shot and zero-shot multi-label learning for structured label spaces – volume: 13 start-page: e0192360 issue: 2 year: 2018 ident: 10.1016/j.jbi.2021.103728_b0150 article-title: Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives publication-title: PloS one doi: 10.1371/journal.pone.0192360 – ident: 10.1016/j.jbi.2021.103728_b0055 doi: 10.1007/978-3-662-44851-9_28 – ident: 10.1016/j.jbi.2021.103728_b0060 doi: 10.3115/1572392.1572411 – start-page: 249 year: 2010 ident: 10.1016/j.jbi.2021.103728_b0160 article-title: Understanding the difficulty of training deep feedforward neural networks, in – volume: 17 start-page: 646 issue: 6 year: 2010 ident: 10.1016/j.jbi.2021.103728_b0025 article-title: A systematic literature review of automated clinical coding and classification systems publication-title: J. Am. Med. Inform. 
Assoc.: JAMIA doi: 10.1136/jamia.2009.001024 – start-page: 521 year: 2016 ident: 10.1016/j.jbi.2021.103728_b0110 article-title: Improved neural network-based multi-label classification with better initialization leveraging label co-occurrence – ident: 10.1016/j.jbi.2021.103728_b0105 doi: 10.1145/2716262 – ident: 10.1016/j.jbi.2021.103728_b0200 doi: 10.18653/v1/2020.bionlp-1.8 – volume: 18 start-page: 1338 issue: 10 year: 2006 ident: 10.1016/j.jbi.2021.103728_b0080 article-title: Multilabel neural networks with applications to functional genomics and text categorization publication-title: IEEE Trans. Knowl. Data Eng. doi: 10.1109/TKDE.2006.162 – ident: 10.1016/j.jbi.2021.103728_b0100 – start-page: 265 year: 2016 ident: 10.1016/j.jbi.2021.103728_b0165 article-title: Tensorflow: A system for large-scale machine learning – ident: 10.1016/j.jbi.2021.103728_b0155 – ident: 10.1016/j.jbi.2021.103728_b0130 – start-page: 1746 year: 2014 ident: 10.1016/j.jbi.2021.103728_b0145 article-title: Convolutional neural networks for sentence classification – ident: 10.1016/j.jbi.2021.103728_b0190 doi: 10.18653/v1/D19-6220 – ident: 10.1016/j.jbi.2021.103728_b0090 doi: 10.18653/v1/2020.emnlp-main.607 – start-page: 8024 year: 2019 ident: 10.1016/j.jbi.2021.103728_b0170 article-title: Pytorch: An imperative style, high-Performance deep learning library – start-page: 1 year: 2020 ident: 10.1016/j.jbi.2021.103728_b0075 article-title: Automated social text annotation with joint multilabel attention networks publication-title: IEEE Trans. Neural Networks Learn. Syst. – start-page: 4171 year: 2019 ident: 10.1016/j.jbi.2021.103728_b0085 article-title: BERT: Pre-training of deep bidirectional transformers for language understanding – ident: 10.1016/j.jbi.2021.103728_b0050 doi: 10.18653/v1/N16-1174 – ident: 10.1016/j.jbi.2021.103728_b0205 |
StartPage | 103728 |
SubjectTerms | Attention Mechanisms; Automated medical coding; Clinical Coding - methods; Clinical Coding - statistics & numerical data; COVID-19 - epidemiology; Deep Learning; Electronic Health Records - statistics & numerical data; Explainability; Humans; Label correlation; Medical Informatics; Multi-label classification; Natural Language Processing; Neural Networks, Computer; Pandemics - statistics & numerical data; SARS-CoV-2; United Kingdom - epidemiology
Title | Explainable automated coding of clinical notes using hierarchical label-wise attention networks and label embedding initialisation |
URI | https://dx.doi.org/10.1016/j.jbi.2021.103728 https://www.ncbi.nlm.nih.gov/pubmed/33711543 https://www.proquest.com/docview/2501266540 |
Volume | 116 |
Authors | Dong, Hang; Suárez-Paniagua, Víctor; Whiteley, William; Wu, Honghan