End-to-End Convolutional Neural Network Framework for Breast Ultrasound Analysis Using Multiple Parametric Images Generated from Radiofrequency Signals

Bibliographic Details
Published in: Applied Sciences, Vol. 12, No. 10, p. 4942
Main Authors: Kim, Soohyun; Park, Juyoung; Yi, Joonhwan; Kim, Hyungsuk
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.05.2022
Summary: Breast ultrasound (BUS) is an effective clinical modality for diagnosing breast abnormalities in women. Deep-learning techniques based on convolutional neural networks (CNNs) have been widely used to analyze BUS images. However, the low quality of B-mode images owing to speckle noise, together with a lack of training datasets, makes BUS analysis challenging in clinical applications. In this study, we proposed an end-to-end CNN framework for BUS analysis using multiple parametric images generated from radiofrequency (RF) signals. The entropy and phase images, which represent microstructural and anatomical information, respectively, were used alongside the traditional B-mode images as parametric images in the time domain. In addition, the attenuation image, estimated in the frequency domain from the RF signals, was used to capture spectral features. Because one set of RF signals from one patient produced multiple images as CNN inputs, the proposed framework mitigated the dataset limitation, acting as a broad form of data augmentation, while providing complementary information to compensate for the low quality of the B-mode images. The experimental results showed that the proposed architecture improved classification accuracy and recall by 5.5% and 11.6%, respectively, compared with the traditional approach using only B-mode images. The proposed framework can be extended to other parametric images in both the time and frequency domains using deep neural networks to further improve performance.
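
The sketch below illustrates, in rough outline, the kind of multi-parametric input the summary describes: it derives B-mode, instantaneous-phase, and windowed-entropy images from a single RF frame and stacks them as input channels for a small CNN classifier. This is a minimal illustration under stated assumptions, not the authors' pipeline: the Hilbert-transform envelope detection, the 32-sample entropy window, the histogram binning, and the toy network are all placeholders, and the frequency-domain attenuation image used in the paper is omitted.

    # Illustrative sketch only: parametric images from one RF frame,
    # stacked as CNN input channels. Window size, binning, and the toy
    # network are assumptions, not the authors' settings.
    import numpy as np
    from scipy.signal import hilbert
    import torch
    import torch.nn as nn

    def parametric_images(rf, win=32, bins=64):
        """rf: 2-D array (axial samples x scan lines) of RF signals."""
        analytic = hilbert(rf, axis=0)                  # analytic signal per line
        env = np.abs(analytic)                          # envelope
        bmode = 20 * np.log10(env / env.max() + 1e-12)  # log-compressed B-mode
        phase = np.angle(analytic)                      # instantaneous phase
        # Shannon entropy of the envelope in non-overlapping axial windows,
        # broadcast back to full size so all maps share one shape.
        ent = np.zeros_like(env)
        for start in range(0, env.shape[0] - win + 1, win):
            block = env[start:start + win]
            for j in range(env.shape[1]):
                hist, _ = np.histogram(block[:, j], bins=bins)
                p = hist / hist.sum()
                p = p[p > 0]
                ent[start:start + win, j] = -(p * np.log2(p)).sum()
        return np.stack([bmode, phase, ent])            # (3, H, W) channel stack

    class MultiParamCNN(nn.Module):
        """Toy classifier consuming the stacked parametric images."""
        def __init__(self, in_ch=3, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.head = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    rf = np.random.randn(512, 128)                      # synthetic RF frame
    x = torch.from_numpy(parametric_images(rf)).float().unsqueeze(0)
    logits = MultiParamCNN()(x)                         # shape: (1, 2)

Stacking the maps as channels is one plausible reading of "multiple images as CNN inputs"; separate per-image network branches fused at a later layer would be an equally consistent design.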
ISSN: 2076-3417
DOI: 10.3390/app12104942