Mumford-Shah Loss Functional for Image Segmentation With Deep Learning
Published in | IEEE Transactions on Image Processing, Vol. 29, pp. 1856-1866 |
Main Authors | Boah Kim, Jong Chul Ye |
Format | Journal Article |
Language | English |
Published | United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2020 |
Summary: | Recent state-of-the-art image segmentation algorithms are mostly based on deep neural networks, thanks to their high performance and fast computation time. However, these methods are usually trained in a supervised manner, which requires a large number of high-quality ground-truth segmentation masks. On the other hand, classical image segmentation approaches such as level-set methods are formulated in a self-supervised manner by minimizing energy functions such as the Mumford-Shah functional, so they remain useful for generating segmentation masks without labels. Unfortunately, these algorithms are usually computationally expensive and often have limitations in semantic segmentation. In this paper, we propose a novel loss function based on the Mumford-Shah functional that can be used in deep-learning-based image segmentation with little or no labeled data. This loss function is based on the observation that the softmax layer of deep neural networks bears a striking similarity to the characteristic function in the Mumford-Shah functional. We show that the new loss function enables semi-supervised and unsupervised segmentation. In addition, our loss function can also be used as a regularization term to enhance supervised semantic segmentation algorithms. Experimental results on multiple datasets demonstrate the effectiveness of the proposed method. |
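To make the abstract's observation concrete: in the piecewise-constant (Chan-Vese-type) relaxation of the Mumford-Shah functional, replacing the characteristic function of region n with the network's softmax output for class n yields a differentiable loss. A sketch of that functional, where the symbols y_n (softmax output), c_n (class mean), and lambda follow common convention rather than being quoted from the paper:

$$
\mathcal{L}_{MS}(\Theta; x) = \sum_{n=1}^{N} \int_{\Omega} \lvert x(r) - c_n \rvert^2 \, y_n(r;\Theta) \, dr + \lambda \sum_{n=1}^{N} \int_{\Omega} \lvert \nabla y_n(r;\Theta) \rvert \, dr,
\qquad
c_n = \frac{\int_{\Omega} x(r) \, y_n(r;\Theta) \, dr}{\int_{\Omega} y_n(r;\Theta) \, dr}.
$$

Below is a minimal PyTorch sketch of such a loss. The function name, tensor layout, finite-difference total-variation term, and the tv_weight default are illustrative assumptions, not the authors' reference implementation:

```python
import torch

def mumford_shah_loss(image: torch.Tensor, softmax_probs: torch.Tensor,
                      tv_weight: float = 1e-3, eps: float = 1e-8) -> torch.Tensor:
    """image: (B, C, H, W) input; softmax_probs: (B, N, H, W), summing to 1 over N."""
    x = image.unsqueeze(1)          # (B, 1, C, H, W)
    y = softmax_probs.unsqueeze(2)  # (B, N, 1, H, W)

    # Per-class mean intensity c_n, weighted by the soft class membership y_n.
    c = (x * y).sum(dim=(3, 4), keepdim=True) / (y.sum(dim=(3, 4), keepdim=True) + eps)

    # Data term: sum_n |x(r) - c_n|^2 y_n(r), the piecewise-constant fidelity.
    fidelity = (((x - c) ** 2) * y).sum(dim=(1, 2)).mean()

    # Regularity term: anisotropic total variation of each softmax map,
    # a discrete stand-in for the contour-length penalty.
    dh = (softmax_probs[:, :, 1:, :] - softmax_probs[:, :, :-1, :]).abs().mean()
    dw = (softmax_probs[:, :, :, 1:] - softmax_probs[:, :, :, :-1]).abs().mean()

    return fidelity + tv_weight * (dh + dw)
```

In the semi-supervised setting the abstract describes, a term like this would typically be added to a standard supervised loss (e.g., cross-entropy on the labeled subset), with unlabeled images contributing only through the Mumford-Shah term.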
ISSN: | 1057-7149 (print); 1941-0042 (online) |
DOI: | 10.1109/TIP.2019.2941265 |