Ultrasound Breast Image Classification Through Domain Knowledge Integration Into Deep Neural Networks


Bibliographic Details
Published in: IEEE Access, Vol. 12, pp. 112966-112983
Main Authors: Nehary, Ebrahim A.; Rajan, Sreeraman
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024

Summary: Current deep learning methods for classifying ultrasound breast images struggle to learn and generalize from small training datasets containing tumors of varying sizes and shapes. A model that integrates domain knowledge into the classification model is proposed. The proposed model consists of two shallow CNN streams (an image stream and a mask stream), guide blocks, a multiscale fusion block, and classification layers. The image stream extracts features from the ultrasound (US) image, while the mask stream extracts features from either the ground-truth (GT) mask or a mask generated by U-Net or selective U-Net (SU-Net). Guide blocks fuse the mask-stream features into the image stream; by reweighting the image-stream features, they help the model focus on the tumor region and the affected cells around it. The multiscale fusion block aggregates features from several levels of the image stream to handle tumors of different sizes. Finally, the classification layers produce the decision. The proposed model is trained using two approaches after augmenting the training set. The first approach trains the model on US images with their associated GT masks and tests it on US images with predicted masks provided by either U-Net or SU-Net. The second approach is the same, except that the trained model is then retrained on US images paired with the predicted masks. The proposed model outperforms 15 state-of-the-art methods, with about a 7.5% increase in balanced accuracy.
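The guide-block reweighting and multiscale fusion described in the summary can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the function names, the sigmoid gating, and global-average-pool fusion are assumptions made for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def guide_block(image_feats, mask_feats):
    """Reweight image-stream features using mask-stream features.

    A spatial gate derived from the mask features emphasizes the
    tumor region (and its surroundings) in the image features.
    """
    gate = sigmoid(mask_feats)           # values in (0, 1), one per location
    return image_feats * (1.0 + gate)    # boost locations the mask highlights

def multiscale_fusion(feature_maps):
    """Aggregate features from several levels of the image stream.

    Each (H, W, C) map is globally average-pooled to a (C,) vector and
    the vectors are concatenated, so tumors of different sizes can
    contribute at the scale where they are best represented.
    """
    pooled = [f.mean(axis=(0, 1)) for f in feature_maps]
    return np.concatenate(pooled)

# Toy example: two pyramid levels with 4 and 8 channels.
rng = np.random.default_rng(0)
img_lo, msk_lo = rng.random((16, 16, 4)), rng.random((16, 16, 4))
img_hi, msk_hi = rng.random((8, 8, 8)), rng.random((8, 8, 8))

fused = multiscale_fusion([guide_block(img_lo, msk_lo),
                           guide_block(img_hi, msk_hi)])
print(fused.shape)  # (12,) = 4 + 8 channels
```

The fused vector would then feed the classification layers; in the paper's second training approach, the same forward pass is reused, only with U-Net/SU-Net predicted masks in place of GT masks.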
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3442374