Building extraction from remote sensing images using deep residual U-Net

Bibliographic Details
Published in: European Journal of Remote Sensing, Vol. 55, No. 1, pp. 71–85
Main Authors: Wang, Haiying; Miao, Fang
Format: Journal Article
Language: English
Published: Cagliari: Taylor & Francis, 31.12.2022

Summary: Building extraction is a fundamental research area in remote sensing. In this paper, we propose an efficient model, residual U-Net (RU-Net), for extracting buildings. It combines the advantages of U-Net, residual learning, atrous spatial pyramid pooling, and focal loss. The U-Net model, based on modified residual learning, reduces the number of parameters and the degradation of the network; atrous spatial pyramid pooling captures multiscale features and contextual information from the images; and focal loss addresses the problem of unbalanced classes in classification. We evaluated the model on the WHU aerial image dataset and the Inria aerial image labeling dataset, comparing RU-Net against U-Net, FastFCN, DeepLabV3+, Web-Net, and SegNet. Experimental results show that RU-Net outperforms the others on all metrics on the WHU dataset. On the Inria dataset, RU-Net is better on most evaluation metrics, and it better preserves sharp edges, boundaries, and multiscale information. Compared with FastFCN and DeepLabV3+, our method improves efficiency by a factor of three to four.
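The abstract credits focal loss with handling the unbalanced classes in building segmentation (most pixels are background, few are buildings). A minimal sketch of the binary focal loss illustrates the idea; the `gamma` and `alpha` values below are the defaults commonly used in the literature, not values stated in this record:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single pixel prediction.

    p     : predicted probability of the positive (building) class
    y     : ground-truth label, 1 = building pixel, 0 = background
    gamma : focusing parameter; down-weights easy examples
    alpha : class-balancing weight for the positive class
    (common defaults, assumed here; the paper's settings are not
    given in this abstract)
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # The (1 - p_t)^gamma factor shrinks the loss of well-classified
    # pixels, so the many easy background pixels no longer dominate.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confidently correct background pixel contributes far less
# than a confidently wrong (missed) building pixel:
easy = focal_loss(0.1, 0)   # p_t = 0.9, heavily down-weighted
hard = focal_loss(0.1, 1)   # p_t = 0.1, nearly full weight
```

With `gamma = 0` and `alpha = 0.5` the expression reduces to (half of) the ordinary binary cross-entropy, which is why focal loss is usually described as a re-weighted cross-entropy.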
ISSN: 2279-7254
DOI: 10.1080/22797254.2021.2018944