GCSANet: A Global Context Spatial Attention Deep Learning Network for Remote Sensing Scene Classification

Bibliographic Details
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 15, pp. 1150-1162
Main Authors: Chen, Weitao; Ouyang, Shubing; Tong, Wei; Li, Xianju; Zheng, Xiongwei; Wang, Lizhe
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
Online Access: https://ieeexplore.ieee.org/document/9678028 (IEEE Xplore); https://doaj.org/article/bec92b7db0fb426080b08072628066dc (DOAJ)

Abstract: Deep convolutional neural networks have become indispensable in remote sensing image scene classification because of their powerful feature extraction capabilities. However, current models are limited in their ability to extract multiscale and global features of surface objects in complex scenes. We propose GCSANet, a framework that combines global context spatial attention (GCSA) with densely connected convolutional networks to extract multiscale global scene features. A mixup operation augments the remote sensing images with spatially mixed data, rendering the discrete sample space continuous and improving smoothness in the neighborhood of the data space. The densely connected backbone extracts features of multiscale surface objects and strengthens their internal dense connections, and GCSA is introduced into the backbone to encode the global context of the scene image into the local features. Experiments were performed on four remote sensing scene datasets to evaluate the performance of GCSANet. GCSANet achieved the highest classification precision on the AID and NWPU datasets and the second-best performance on the UC Merced dataset, indicating that it can effectively extract the global features of remote sensing images. In addition, GCSANet achieved the highest classification accuracy on the constructed mountain image scene dataset. These results show that GCSANet can effectively extract multiscale global scene features from complex remote sensing scenes. The source code is available at https://github.com/ShubingOuyangcug/GCSANet.
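The mixup augmentation mentioned in the abstract is a published technique (Zhang et al., "mixup: Beyond empirical risk minimization"): pairs of training images and their labels are blended with a coefficient drawn from a Beta distribution, which is what makes the discrete sample space continuous. Below is a minimal PyTorch sketch of that operation; the function name, the alpha=0.2 default, and the one-hot label handling are illustrative assumptions, since this record does not give the paper's implementation details.

```python
# Hedged sketch of mixup augmentation, not the paper's exact code.
import torch
import torch.nn.functional as F

def mixup_batch(images: torch.Tensor, labels: torch.Tensor,
                num_classes: int, alpha: float = 0.2):
    """Blend a batch with a shuffled copy of itself.

    `alpha` parameterizes the Beta distribution that draws the mixing
    coefficient; 0.2 is an illustrative default, not the paper's value.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    # Convex combination of two scenes and of their soft labels.
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    one_hot = F.one_hot(labels, num_classes).float()
    mixed_labels = lam * one_hot + (1.0 - lam) * one_hot[perm]
    return mixed_images, mixed_labels
```

Training then minimizes cross-entropy against the soft mixed_labels, so the network sees convex combinations of scenes rather than isolated samples, which smooths its behavior between training points.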
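The record describes GCSA only as encoding global context into the local features of a densely connected backbone. As a point of reference, a typical global-context spatial-attention block (in the style of GCNet) pools a context vector via a softmax over spatial positions, transforms it, and adds it back at every location. The sketch below follows that pattern; the class name, reduction ratio, and layer choices are assumptions, not the paper's exact GCSA module.

```python
# Hedged sketch of a global-context spatial-attention block,
# NOT the paper's exact GCSA architecture.
import torch
import torch.nn as nn

class GlobalContextBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # 1x1 conv produces one attention logit per spatial position.
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)
        # Bottleneck transform of the pooled context vector.
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.LayerNorm([channels // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Softmax over all spatial positions: one weight per pixel.
        weights = self.attn(x).view(b, 1, h * w).softmax(dim=-1)
        # Attention-weighted sum of local features = global context.
        context = torch.bmm(x.view(b, c, h * w), weights.transpose(1, 2))
        context = context.view(b, c, 1, 1)
        # Broadcast-add the transformed context into every location.
        return x + self.transform(context)
```

A block like this can be inserted between the dense blocks of a DenseNet-style backbone, which is consistent with the abstract's description of injecting scene-level context into local feature maps.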
Authors
1. Chen, Weitao (ORCID 0000-0002-6272-1618; wtchen@cug.edu.cn), School of Computer Science, China University of Geosciences, Wuhan, China
2. Ouyang, Shubing (ORCID 0000-0003-4737-4205; oysb@cug.edu.cn), School of Computer Science, China University of Geosciences, Wuhan, China
3. Tong, Wei (ORCID 0000-0003-2873-7584; weitong@cug.edu.cn), School of Computer Science, China University of Geosciences, Wuhan, China
4. Li, Xianju (ORCID 0000-0001-7785-2541; ddwhlxj@cug.edu.cn), School of Computer Science, China University of Geosciences, Wuhan, China
5. Zheng, Xiongwei (zhengxiongwei@mail.cgs.gov.cn), School of Computer Science, China University of Geosciences, Wuhan, China
6. Wang, Lizhe (ORCID 0000-0003-2766-0845; lizhe.wang@gmail.com), School of Computer Science, China University of Geosciences, Wuhan, China
CODEN: IJSTHZ
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
DOI: 10.1109/JSTARS.2022.3141826
Discipline: Geology
ISSN: 1939-1404
EISSN: 2151-1535
Funding: Fundamental Research Funds for the Natural Science Foundation of China (grants U1803117, 41925007, 42071430)
Open Access: yes; Peer Reviewed: yes
License: Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/legalcode)
Subject Terms: Artificial neural networks; Attention mechanism; Classification; Computer networks; Context; Convolutional neural networks; Data mining; Datasets; Deep learning; feature channel; Feature extraction; global context information; Image analysis; Image classification; Image enhancement; Machine learning; Mountains; Neural networks; Object recognition; Performance evaluation; Remote sensing; Scene classification; Smoothness; Spatial data; Spatial discrimination learning