Generalisable deep learning method for mammographic density prediction across imaging techniques and self-reported race

Bibliographic Details
Published in: Communications Medicine, Vol. 4, No. 1, pp. 21-8
Main Authors: Khara, Galvin; Trivedi, Hari; Newell, Mary S.; Patel, Ravi; Rijken, Tobias; Kecskemethy, Peter; Glocker, Ben
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 19.02.2024 (Springer Nature B.V.; Nature Portfolio)
Summary:

Background: Breast density is an important risk factor for breast cancer, complemented by a higher risk of cancers being missed during screening of dense breasts due to the reduced sensitivity of mammography. Automated, deep learning-based prediction of breast density could provide subject-specific risk assessment and flag difficult cases during screening. However, there is a lack of evidence for generalisability across imaging techniques and, importantly, across race.

Methods: This study used a large, racially diverse dataset with 69,697 mammographic studies comprising 451,642 individual images from 23,057 female participants. A deep learning model was developed for four-class BI-RADS density prediction. A comprehensive performance evaluation assessed the generalisability across two imaging techniques, full-field digital mammography (FFDM) and two-dimensional synthetic (2DS) mammography. A detailed subgroup performance and bias analysis assessed the generalisability across participants' race.

Results: Here we show that a model trained on FFDM only achieves a four-class BI-RADS classification accuracy of 80.5% (79.7–81.4) on FFDM and 79.4% (78.5–80.2) on unseen 2DS data. When trained on both FFDM and 2DS images, the performance increases to 82.3% (81.4–83.0) and 82.3% (81.3–83.1), respectively. Racial subgroup analysis shows unbiased performance across Black, White, and Asian participants, despite a separate analysis confirming that race can be predicted from the images with a high accuracy of 86.7% (86.0–87.4).

Conclusions: Deep learning-based breast density prediction generalises across imaging techniques and race. No substantial disparities are found for any subgroup, including races that were never seen during model development, suggesting that density predictions are unbiased.

Khara et al. perform a comprehensive performance analysis for a deep learning breast density prediction model using a large-scale, racially diverse dataset. They find that the model generalises across imaging techniques and self-reported race, providing assurances for the safe and ethical use of automated breast density prediction.

Plain language summary: Women with dense breasts have a higher risk of breast cancer. For dense breasts, it is also more difficult to spot cancer in mammograms, which are the X-ray images commonly used for breast cancer screening. Thus, knowing about an individual's breast density provides important information to doctors and screening participants. This study investigated whether an artificial intelligence (AI) algorithm can be used to accurately determine breast density by analysing mammograms. The study tested whether such an algorithm performs equally well across different imaging devices and, importantly, across individuals from different self-reported race groups. A large, racially diverse dataset was used to evaluate the algorithm's performance. The results show that there were no substantial differences in the accuracy for any of the groups, providing important assurances that AI can be used safely and ethically for automated prediction of breast density.
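For readers who want to run a similar evaluation on their own data, the sketch below shows one way to compute a four-class accuracy with a bootstrap 95% confidence interval in Python, both overall and per subgroup. The abstract does not state how the reported confidence intervals or subgroup comparisons were actually computed, so the bootstrap procedure and all function and variable names here are illustrative assumptions rather than the authors' method.

# Illustrative sketch (not the authors' code): four-class accuracy with a
# bootstrap 95% confidence interval, overall and per self-reported race group.
# Assumes y_true / y_pred hold BI-RADS density categories coded 0-3 and
# `groups` holds subgroup labels; the resampling scheme is an assumption.
import numpy as np

def accuracy_with_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    rng = np.random.default_rng(seed)
    point = float((y_true == y_pred).mean())      # point-estimate accuracy
    n = len(y_true)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample cases with replacement
        boot[b] = (y_true[idx] == y_pred[idx]).mean()
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return point, float(lo), float(hi)

def subgroup_accuracies(y_true, y_pred, groups):
    # Accuracy and interval for each subgroup label (e.g. self-reported race).
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: accuracy_with_ci(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}

Called with the predicted and reference BI-RADS categories of a test set and an array of self-reported race labels, subgroup_accuracies returns a point estimate with an interval per group, which is the kind of per-group comparison the bias analysis above reports in percentage terms.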
ISSN: 2730-664X
DOI: 10.1038/s43856-024-00446-6