Who's your data? Primary immune deficiency differential diagnosis prediction via machine learning and data mining of the USIDNET registry

Bibliographic Details
Published in: Clinical Immunology (Orlando, Fla.), Vol. 255, p. 109759
Main Authors: Méndez Barrera, Jose Alfredo; Rocha Guzmán, Samuel; Hierro Cascajares, Elisa; Garabedian, Elizabeth K.; Fuleihan, Ramsay L.; Sullivan, Kathleen E.; Lugo Reyes, Saul O.
Format: Journal Article
Language: English
Published: United States: Elsevier Inc., 01.10.2023
Summary: There are currently more than 480 primary immune deficiency (PID) diseases and about 7000 rare diseases that together afflict around 1 in every 17 humans. Computational aids based on data mining and machine learning might facilitate the diagnostic task by extracting rules from large datasets and making predictions when faced with new problem cases. In a proof-of-concept data mining study, we aimed to predict PID diagnoses using a supervised machine learning algorithm based on classification tree boosting. Through a data query of the USIDNET registry, we obtained a database of 2396 patients with common diagnoses of PID, including their clinical and laboratory features. We kept 286 features and all 12 diagnoses to include in the model. We used the XGBoost package with parallel tree boosting for the supervised classification model, and SHAP for variable importance interpretation, in Python v3.7. The patient database was split into training and testing subsets, and after boosting through gradient descent, the predictive model provides measures of diagnostic prediction accuracy and individual feature importance. After a baseline performance test, we used the class-weighting hyperparameter scale_pos_weight to correct for imbalanced classification. The twelve PID diagnoses were CVID (1098 patients), DiGeorge syndrome, chronic granulomatous disease, congenital agammaglobulinemia, PID not otherwise classified, specific antibody deficiency, complement deficiency, Hyper-IgM, leukocyte adhesion deficiency, ectodermal dysplasia with immune deficiency, severe combined immune deficiency, and Wiskott-Aldrich syndrome. For CVID, the model achieved an accuracy of 0.80 on the training sample, with an area under the ROC curve (AUC) of 0.80 and a Gini coefficient of 0.60. In the test subset, accuracy was 0.76, AUC 0.75, and Gini 0.51. The positive feature values to predict CVID were highest for upper respiratory infections, asthma, autoimmunity, and hypogammaglobulinemia. Features with the highest negative predictive value were high IgE, growth delay, abscess, lymphopenia, and congenital heart disease. For the rest of the diagnoses, accuracy stayed between 0.75 and 0.99, AUC 0.46–0.87, Gini 0.07–0.75, and LogLoss 0.09–8.55. Clinicians should remember to consider the negative predictive features together with the positive ones. We call this a proof-of-concept study and will continue with our explorations. The good performance is encouraging, and feature importance might aid feature selection for future endeavors. In the meantime, we can learn from the rules derived by the model and build a user-friendly decision tree to generate differential diagnoses.
Highlights:
•We aimed to predict primary immune deficiency diagnoses using a supervised machine learning algorithm based on classification tree boosting.
•We obtained and curated a database of 2396 patients with common diagnoses of PID, including their clinical and laboratory features.
•We kept 286 features and all 12 diagnoses to include in the model. For the interpretation of variables, SHAP assigns an importance value to each feature. Each diagnosis is differentiated or predicted against all others. The patient database is split randomly into training and testing subsets. We found good performance in predicting any of the twelve diagnoses: accuracy and area under the ROC curve stayed between 0.70 and 0.80 for most diseases, and Gini indexes were around 0.50.
•Predictive performance plummeted when the number of disease representatives fell below 50–60 cases.
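As a rough illustration of the pipeline the abstract describes (a one-vs-rest XGBoost classifier per diagnosis, class imbalance corrected with scale_pos_weight, and SHAP for feature-importance interpretation), here is a minimal Python sketch. It is not the authors' code: the file name usidnet_curated.csv, the "diagnosis" column, the split ratio, and the booster hyperparameters are illustrative assumptions only.

```python
# Minimal sketch (assumptions noted above) of the approach described in the abstract:
# one diagnosis predicted against all others with XGBoost, scale_pos_weight for
# class imbalance, and SHAP values for per-feature contributions.
import pandas as pd
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score, log_loss

df = pd.read_csv("usidnet_curated.csv")      # hypothetical curated extract: 2396 patients, 286 features
X = df.drop(columns=["diagnosis"])
target_diagnosis = "CVID"                    # each diagnosis is differentiated against all others
y = (df["diagnosis"] == target_diagnosis).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# scale_pos_weight = (negative cases) / (positive cases), the usual XGBoost
# correction for an imbalanced binary target.
spw = (y_train == 0).sum() / (y_train == 1).sum()

model = xgb.XGBClassifier(
    n_estimators=300,
    max_depth=4,
    learning_rate=0.1,
    scale_pos_weight=spw,
    n_jobs=-1,                               # parallel tree boosting
    eval_metric="logloss",
)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
pred = (proba >= 0.5).astype(int)
auc = roc_auc_score(y_test, proba)
print("accuracy:", accuracy_score(y_test, pred))
print("AUC:", auc, "Gini:", 2 * auc - 1)     # Gini coefficient = 2*AUC - 1
print("log loss:", log_loss(y_test, proba))

# SHAP values per feature: positive contributions push toward the target
# diagnosis, negative contributions push away from it.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```

Repeating this loop once per diagnosis (swapping target_diagnosis) would reproduce the one-vs-rest setup the highlights describe, with one set of accuracy, AUC, Gini, and log-loss figures per disease.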
ISSN: 1521-6616, 1521-7035
DOI: 10.1016/j.clim.2023.109759