Residual Dual Attention Generative Adversarial Networks for Tissue Segmentation on Histopathological Image

Bibliographic Details
Published in 2024 2nd International Conference on Pattern Recognition, Machine Vision and Intelligent Algorithms (PRMVIA), pp. 59-63
Main Authors Guan, Xi, Li, Junwei, Shao, Wei
Format Conference Proceeding
Language English
Published IEEE 24.05.2024

Summary: Breast cancer is a common malignant tumor among women, posing a significant threat to their health. Pathological images are considered the standard for breast cancer detection and diagnosis, and the epithelial and stromal tissue within them play a crucial role in the tumor microenvironment. However, accurate segmentation of epithelial and stromal tissue in breast cancer pathological images is challenging due to their complex structures and high-density data. Despite their impressive performance, existing deep learning methods focus only on pixel-level classification and neglect the high-order consistency of the images. Therefore, we propose a residual dual attention generative adversarial network model for breast cancer pathological image segmentation, called RDAGAN. RDAGAN incorporates a generative adversarial network framework and adopts dual attention mechanisms to capture visual feature dependencies in both the spatial and channel dimensions, ensuring overall structural consistency in the segmentation results. We evaluated our method on two datasets, NKI and VGH, and achieved promising performance compared with the competing methods.
DOI: 10.1109/PRMVIA63497.2024.00018
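
As context for the dual attention mechanism mentioned in the abstract, the sketch below shows a minimal spatial-plus-channel attention block in PyTorch, loosely following the DANet-style design such models typically use. This is not the authors' released code; all module names, channel sizes, and the fusion scheme are assumptions for illustration only.

```python
# Minimal sketch of a dual attention block (spatial + channel).
# NOT the RDAGAN authors' implementation; shapes and fusion are assumed.
import torch
import torch.nn as nn


class PositionAttention(nn.Module):
    """Captures long-range dependencies between spatial positions."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)       # (B, HW, C//8)
        k = self.key(x).flatten(2)                          # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)                 # (B, HW, HW)
        v = self.value(x).flatten(2)                        # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)   # re-weighted features
        return self.gamma * out + x                         # residual connection


class ChannelAttention(nn.Module):
    """Models inter-channel dependencies via a channel affinity matrix."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.flatten(2)                                         # (B, C, HW)
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)   # (B, C, C)
        out = (attn @ flat).view(b, c, h, w)
        return self.gamma * out + x


class DualAttention(nn.Module):
    """Fuses the spatial and channel attention branches by summation."""

    def __init__(self, channels: int):
        super().__init__()
        self.pam = PositionAttention(channels)
        self.cam = ChannelAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pam(x) + self.cam(x)


if __name__ == "__main__":
    # Toy check on a random feature map (batch=1, 64 channels, 32x32 patch).
    feats = torch.randn(1, 64, 32, 32)
    print(DualAttention(64)(feats).shape)  # torch.Size([1, 64, 32, 32])
```

In a GAN-based segmentation setup such as the one the abstract describes, a block like this would typically sit inside the generator (segmentation network), while a separate discriminator judges whole predicted masks against ground-truth masks to encourage the high-order structural consistency the summary refers to.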