DOCC: Deep one-class crop classification via positive and unlabeled learning for multi-modal satellite imagery
Published in | International Journal of Applied Earth Observation and Geoinformation, Vol. 105, p. 102598 |
Main Authors | |
Format | Journal Article |
Language | English |
Published | Elsevier B.V., 25.12.2021 |
Summary: |
• A deep one-class crop classification framework is proposed for multi-modal imagery.
• DOCC takes samples of only one target crop as the input, thus avoiding redundant labeling.
• A one-class crop extraction loss is designed for positive and unlabeled learning.
• Experiments reveal the optimal data source for identifying winter wheat and rapeseed.
Large-scale crop mapping is an important task in agricultural resource monitoring, but it usually requires ground-truth labels for all the land-cover types in the remotely sensed imagery. However, labeling each land-cover type is time-consuming and labor-intensive. One-class classification, which only needs samples of the class of interest, can solve the problem of redundant labeling. However, traditional one-class classifiers require well-designed features to achieve fine classification, and are thus difficult to apply to complex multi-modal remote sensing data, i.e., optical imagery and synthetic aperture radar (SAR) imagery. In this paper, a deep one-class crop (DOCC) framework, which includes a deep one-class crop extraction module and a one-class crop extraction loss module, is proposed for large-scale one-class crop mapping. The DOCC framework takes only the samples of one target class as the input and extracts the crop of interest by positive and unlabeled learning, automatically learning the features for one-class crop mapping without requiring a large amount of labeling for all the land-cover types or feature design based on prior expert knowledge. Experiments conducted on multi-modal remote sensing data, i.e., Zhuhai-1 hyperspectral satellite data, Sentinel-2 multispectral time-series satellite data, and Sentinel-1 SAR satellite data, show that DOCC can automatically extract effective features for one-class classification from multi-modal satellite imagery and achieves the highest F1 score compared with the other methods on the respective satellite imagery. The results also reveal the differing performance of the multi-modal satellite imagery when used to extract different crop types. Moreover, the applicability of DOCC to multi-modal data makes it beneficial for large-scale mapping under different conditions, when samples of multiple classes are difficult to obtain. |
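The abstract describes training from positive (target-crop) and unlabeled samples only; the paper's exact one-class crop extraction loss is not given in this record. As a minimal sketch of the general idea, the snippet below implements a standard non-negative positive-unlabeled (PU) risk estimator in PyTorch; the function name `nn_pu_loss` and the `class_prior` hyperparameter are illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch of a non-negative PU risk estimator (standard nnPU formulation),
# used here only to illustrate positive and unlabeled learning with a binary scorer.
# The paper's actual one-class crop extraction loss may differ.
import torch

def nn_pu_loss(scores_pos, scores_unl, class_prior=0.3):
    """PU risk for raw scores of labeled positives and unlabeled samples.

    scores_pos: scores for labeled target-crop pixels/samples.
    scores_unl: scores for unlabeled pixels/samples.
    class_prior: assumed fraction of positives among the unlabeled data (hypothetical value).
    """
    # Sigmoid surrogate losses for predicting positive (y=+1) and negative (y=-1).
    loss_pos = torch.sigmoid(-scores_pos).mean()        # positives predicted as negative
    loss_pos_as_neg = torch.sigmoid(scores_pos).mean()  # positives treated as negatives
    loss_unl_as_neg = torch.sigmoid(scores_unl).mean()  # unlabeled treated as negatives

    # Estimated negative-class risk, clamped at zero (the "non-negative" correction).
    neg_risk = loss_unl_as_neg - class_prior * loss_pos_as_neg
    return class_prior * loss_pos + torch.clamp(neg_risk, min=0.0)

# Example usage with random scores in place of network outputs:
pos = torch.randn(128)    # scores for labeled target-crop samples
unl = torch.randn(1024)   # scores for unlabeled samples
loss = nn_pu_loss(pos, unl)
```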
ISSN: | 1569-8432, 1872-826X |
DOI: | 10.1016/j.jag.2021.102598 |