DeepUNet: A Deep Fully Convolutional Network for Pixel-Level Sea-Land Segmentation
Published in | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 11, No. 11, pp. 3954-3962 |
Main Authors | , , , , , , |
Format | Journal Article |
Language | English |
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2018 |
Subjects | |
ISSN | 1939-1404, 2151-1535 |
DOI | 10.1109/JSTARS.2018.2833382 |
Summary | Semantic segmentation is a fundamental research topic in optical remote sensing image processing. Because of the complex maritime environment, sea-land segmentation is a challenging task. Although neural networks have achieved excellent performance in semantic segmentation in recent years, few works have applied CNNs to sea-land segmentation, and the results can be further improved. This paper proposes a novel deep convolutional neural network named DeepUNet. Like U-Net, its structure has a contracting path and an expansive path to produce high-resolution output. Unlike U-Net, however, DeepUNet uses DownBlocks instead of plain convolution layers in the contracting path and UpBlocks in the expansive path. The two novel blocks introduce two new connections, the U-connection and the Plus connection, which are proposed to obtain more precise segmentation results. To verify the network architecture, we construct a new challenging sea-land dataset and compare DeepUNet on it against U-Net, SegNet, and SeNet. Experimental results show that DeepUNet improves accuracy by 1-2% compared with the other architectures, especially on high-resolution optical remote sensing imagery. |
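The abstract only sketches the DownBlock/UpBlock idea. The snippet below is a minimal, hypothetical PyTorch sketch of how the Plus connection (element-wise addition within a block) and the U-connection (concatenation of encoder features into the decoder) described above could be wired together. Layer widths, kernel sizes, and all class and variable names are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of DownBlock / UpBlock with Plus and U-connections.
# Channel widths and kernel sizes are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    def __init__(self, ch, mid_ch=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        y = self.conv(x) + x           # Plus connection: element-wise sum
        return self.pool(y), y         # pooled output + skip feature (U-connection)

class UpBlock(nn.Module):
    def __init__(self, ch, mid_ch=64):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Sequential(
            nn.Conv2d(2 * ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        y = self.conv(torch.cat([x, skip], dim=1))  # U-connection: concatenation
        return y + x                                # Plus connection

# Example: one encoder/decoder step on a 3-channel image tile
x = torch.randn(1, 3, 256, 256)
stem = nn.Conv2d(3, 32, 3, padding=1)   # assumed stem to set the channel width
down, up = DownBlock(32), UpBlock(32)
f = stem(x)
pooled, skip = down(f)
out = up(pooled, skip)                  # back to 256x256 resolution, 32 channels
```

In this reading, the Plus connection acts like a residual shortcut inside each block, while the U-connection carries encoder detail to the decoder as in U-Net; a full network would stack several such blocks and end with a per-pixel classifier.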