Deep learning-based burned forest areas mapping via Sentinel-2 imagery: a comparative study

Bibliographic Details
Published in: Environmental Science and Pollution Research International, Vol. 31, No. 4, pp. 5304-5318
Main Authors: Atasever, Ümit Haluk; Tercan, Emre
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg; Springer Nature B.V., 01.01.2024

More Information
Summary: In order to evaluate the effects of forest fires on the function and structure of ecosystems, burned forest areas must be determined from satellite images accurately, effectively, economically, and practically. Extracting burned forest areas from high-resolution satellite images with image classification algorithms, and assessing the success of different classification algorithms, has become a prominent research field. This study aims to demonstrate the capability of the deep learning-based Stacked Autoencoders method for mapping burned forest areas from Sentinel-2 satellite images. The Stacked Autoencoders, used in this study as an unsupervised learning method, were compared qualitatively and quantitatively with frequently used supervised learning algorithms (k-Nearest Neighbors (k-NN), Subspace k-NN, Support Vector Machines, Random Forest, Bagged Decision Tree, Naive Bayes, and Linear Discriminant Analysis) on two distinct burned forest zones. Selecting burned forest zones with contrasting structural characteristics enabled an objective assessment. Burned areas manually digitized from Sentinel-2 imagery were used for accuracy assessment. For comparison, several classification performance and image quality metrics were used: Overall Accuracy, Mean Squared Error, Correlation Coefficient, Structural Similarity Index Measure, Peak Signal-to-Noise Ratio, Universal Image Quality Index, and the Kappa coefficient. In addition, whether the Stacked Autoencoders method produces consistent results was examined through boxplots. In both the quantitative and the qualitative analysis, the Stacked Autoencoders method achieved the highest accuracy values.
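Two of the evaluation metrics named in the summary, Overall Accuracy and the Kappa coefficient, can be illustrated with a minimal sketch. Everything below is hypothetical and not taken from the study: the function names, the 4x4 toy masks, and the binary (burned/unburned) encoding are assumptions made for illustration only.

```python
import numpy as np

def overall_accuracy(reference, prediction):
    """Fraction of pixels labeled identically in both masks."""
    return np.mean(reference == prediction)

def cohens_kappa(reference, prediction):
    """Agreement corrected for chance, for binary masks (1 = burned, 0 = unburned)."""
    po = np.mean(reference == prediction)              # observed agreement
    p_burn = np.mean(reference) * np.mean(prediction)  # chance agreement on "burned"
    p_unburn = np.mean(1 - reference) * np.mean(1 - prediction)
    pe = p_burn + p_unburn                             # total chance agreement
    return (po - pe) / (1 - pe)

# Toy 4x4 masks standing in for a manually digitized reference and a
# classifier's predicted burned-area map (illustrative values only).
ref = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 1, 1]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 1, 1, 1]])

print(overall_accuracy(ref, pred))  # 14 of 16 pixels agree
print(cohens_kappa(ref, pred))
```

Pixel-wise Overall Accuracy alone can look flattering when unburned pixels dominate a scene, which is why chance-corrected measures such as Kappa are usually reported alongside it.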
ISSN: 0944-1344, 1614-7499
DOI: 10.1007/s11356-023-31575-5