Fusing Local and Global Features for High-Resolution Scene Classification
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 10, No. 6, pp. 2889-2901
Main Authors | , , , |
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.06.2017
ISSN: 1939-1404, 2151-1535
DOI: 10.1109/JSTARS.2017.2683799
Summary: In this paper, a fused global saliency-based multiscale, multiresolution, multistructure local binary pattern (salM³LBP) feature and a local codebookless model (CLM) feature are proposed for high-resolution image scene classification. First, two different but complementary types of descriptors (pixel intensities and pixel differences) are developed to extract global features, characterizing the dominant spatial features in a multiscale, multiresolution, and multistructure manner. Micro-/macrostructure information and rotation invariance are preserved during global feature extraction. For dense local feature extraction, CLM is used to model the local enrichment scale-invariant feature transform (SIFT) descriptor, and dimensionality reduction is performed via joint low-rank learning with a support vector machine. Finally, the fused representation of salM³LBP and CLM serves as the scene descriptor to train a kernel-based extreme learning machine for scene classification. The proposed approach is extensively evaluated on three challenging benchmark scene datasets (the 21-class land-use scene, the 19-class satellite scene, and a newly available 30-class aerial scene), and the experimental results show that it achieves superior classification performance compared with state-of-the-art methods.