NuSEA: Nuclei Segmentation With Ellipse Annotations

Bibliographic Details
Published in: IEEE Journal of Biomedical and Health Informatics, Vol. 28, No. 10, pp. 5996-6007
Main Authors: Meng, Zhu; Dong, Junhao; Zhang, Binyu; Li, Shichao; Wu, Ruixiao; Su, Fei; Wang, Guangxi; Guo, Limei; Zhao, Zhicheng
Format: Journal Article
Language: English
Published: United States, IEEE, 01.10.2024

Summary: Objective: Nuclei segmentation is a crucial pre-task for pathological microenvironment quantification. However, acquiring precise manual nuclei annotations to improve the performance of deep learning models is time-consuming and expensive. Methods: In this paper, an efficient nuclear annotation tool called NuSEA is proposed to achieve accurate nucleus segmentation, where a simple but effective ellipse annotation is applied. Specifically, the core network U-Light of NuSEA is lightweight, with only 0.86 M parameters, making it suitable for real-time nuclei segmentation. In addition, an Elliptical Field Loss and a Texture Loss are proposed to enhance edge segmentation and constrain smoothness simultaneously. Results: Extensive experiments on three public datasets (MoNuSeg, CPM-17, and CoNSeP) demonstrate that NuSEA is superior to state-of-the-art (SOTA) methods and better than existing algorithms based on point, rectangle, and text annotations. Conclusions: With the assistance of NuSEA, a new dataset called NuSEA-dataset v1.0, encompassing 118,857 annotated nuclei from whole-slide images of 12 organs, is released. Significance: NuSEA provides a rapid and effective annotation tool for nuclei in histopathological images, benefiting future explorations in deep learning algorithms.
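
The summary describes NuSEA's ellipse annotations and the lightweight U-Light network but gives no implementation details. As a minimal sketch only, assuming the common parameterization of an ellipse by center, semi-axes, and rotation angle, the hypothetical Python snippet below shows how such an annotation could be rasterized into a binary nucleus mask; the function name, parameters, and use of OpenCV are illustrative assumptions, not details taken from the paper.

# Illustrative sketch only (not from the paper): rasterize one ellipse
# annotation (center, semi-axes, rotation angle) into a binary nucleus mask.
# The function name, parameters, and use of OpenCV are assumptions.
import numpy as np
import cv2


def ellipse_to_mask(height, width, center_xy, semi_axes, angle_deg):
    """Return a (height, width) uint8 mask that is 1 inside the ellipse."""
    mask = np.zeros((height, width), dtype=np.uint8)
    center = (int(round(center_xy[0])), int(round(center_xy[1])))
    axes = (int(round(semi_axes[0])), int(round(semi_axes[1])))
    # thickness=-1 draws a filled ellipse; angle_deg rotates the major axis.
    cv2.ellipse(mask, center, axes, angle_deg, 0, 360, color=1, thickness=-1)
    return mask


# Example: a single annotated nucleus in a 256 x 256 image patch.
mask = ellipse_to_mask(256, 256, center_xy=(120.0, 80.0),
                       semi_axes=(14.0, 9.0), angle_deg=30.0)
print(mask.sum())  # roughly pi * 14 * 9, about 396 pixels

In practice, such per-nucleus masks (or a combined instance map) would serve as weak supervision for a segmentation network; the paper's Elliptical Field Loss and Texture Loss are defined around ellipse annotations of this kind, but their exact formulations are not given in this record.
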
ISSN: 2168-2194, 2168-2208
DOI: 10.1109/JBHI.2024.3418106