Remote Sensing Image Segmentation for Aircraft Recognition Using U-Net as Deep Learning Architecture

Bibliographic Details
Published in: Applied Sciences, Vol. 14, No. 6, p. 2639
Main Authors: Shaar, Fadi; Yılmaz, Arif; Topcu, Ahmet Ercan; Alzoubi, Yehia Ibrahim
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.03.2024

Summary: Automatically recognizing aircraft in satellite images has various applications in both the civil and military sectors. However, due to the complexity and variety of the foreground and background of the analyzed images, it remains challenging to obtain a suitable representation of aircraft for identification. Many studies and solutions have been presented in the literature, but only a few have addressed the problem with semantic image segmentation techniques, largely because of the lack of publicly available labeled datasets. With the advancement of CNNs, researchers have proposed architectures such as U-Net, which can achieve very good performance with a small training dataset. The U-Net architecture has received much attention for segmenting 2D and 3D biomedical images and has proven highly successful for pixel-wise satellite image classification. In this paper, we propose a binary image segmentation model that recognizes aircraft by adapting the U-Net architecture to remote sensing satellite images. The proposed model does not require a large amount of labeled data and alleviates the need for manual aircraft feature extraction. The public dense-labeling remote sensing dataset is used to perform the experiments and measure the robustness and performance of the proposed model. Mean IoU and pixel accuracy are adopted as metrics to assess the results. On the testing dataset, the proposed model achieves a mean IoU of 95.08% and a pixel accuracy of 98.24%.
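The abstract reports mean IoU and pixel accuracy as the evaluation metrics. As a point of reference only, the short Python sketch below shows one common way to compute both metrics for binary (background/aircraft) masks; the function names, the toy masks, and the NumPy-based implementation are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' code) of mean IoU and pixel accuracy
# for binary segmentation masks, assuming integer class labels {0, 1}.
import numpy as np


def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == target).mean())


def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 2) -> float:
    """Average intersection-over-union across classes (background and aircraft)."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(intersection / union)
    return float(np.mean(ious))


if __name__ == "__main__":
    # Toy 4x4 masks: 1 = aircraft pixel, 0 = background (hypothetical data).
    target = np.array([[0, 0, 1, 1],
                       [0, 1, 1, 1],
                       [0, 0, 1, 0],
                       [0, 0, 0, 0]])
    pred = np.array([[0, 0, 1, 1],
                     [0, 1, 1, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 0]])
    print(f"pixel accuracy: {pixel_accuracy(pred, target):.4f}")
    print(f"mean IoU:       {mean_iou(pred, target):.4f}")
```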
ISSN: 2076-3417
DOI: 10.3390/app14062639