Hypergraph Regularized Deep Autoencoder for Unsupervised Unmixing Hyperspectral Images

Bibliographic Details
Published in: Journal of Donghua University (English Edition), Vol. 40, No. 1, pp. 8–17
Main Authors: ZHANG Zexing, YANG Bin
Format: Journal Article
Language: English
Affiliation: School of Computer Science and Technology, Donghua University, Shanghai 201620, China
Published: 2023
Summary: TP751; Deep learning (DL) has shown superior performance on various computer vision tasks in recent years. As a simple and effective DL model, the autoencoder (AE) is widely used to decompose hyperspectral images (HSIs) owing to its powerful feature extraction and data reconstruction abilities. However, most existing AE-based unmixing algorithms ignore the spatial information of HSIs. To address this problem, a hypergraph regularized deep autoencoder (HGAE) is proposed for unmixing. First, the traditional AE architecture is adapted into an unsupervised unmixing framework. Second, hypergraph learning is employed to reformulate the loss function, which expresses the high-order similarity among locally neighboring pixels and promotes the consistency of their abundances. Moreover, the L1/2 norm is further used to enhance abundance sparsity. Finally, experiments on simulated data, real hyperspectral remote sensing images, and textile cloth images verify that the proposed method outperforms several state-of-the-art unmixing algorithms.
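The summary describes a loss that combines AE reconstruction error, a hypergraph regularizer enforcing abundance consistency among neighboring pixels, and an L1/2 sparsity penalty. A minimal NumPy sketch of such a composite loss is given below; the function names, the trade-off weights `lam` and `mu`, and the use of the standard normalized hypergraph Laplacian are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def l_half_penalty(A):
    # L1/2 sparsity penalty on the abundance matrix: sum of square roots of entries
    return np.sqrt(np.abs(A)).sum()

def hypergraph_laplacian(H, w):
    # H: incidence matrix (n_pixels x n_hyperedges), w: hyperedge weights
    dv = H @ w                 # vertex degrees
    de = H.sum(axis=0)         # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    # Normalized hypergraph Laplacian: L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
    Theta = Dv_inv_sqrt @ H @ np.diag(w / de) @ H.T @ Dv_inv_sqrt
    return np.eye(H.shape[0]) - Theta

def unmixing_loss(X, E, A, L, lam=0.1, mu=0.01):
    # X: pixels (n x bands), E: endmembers (p x bands), A: abundances (n x p)
    recon = ((X - A @ E) ** 2).mean()      # AE reconstruction error
    smooth = np.trace(A.T @ L @ A)         # hypergraph consistency of abundances
    sparse = l_half_penalty(A)             # L1/2 abundance sparsity
    return recon + lam * smooth + mu * sparse
```

In a trained network, `A` would be the encoder output (abundances) and `E` the decoder weights (endmembers); here they are plain arrays so the loss components can be inspected in isolation.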
ISSN:1672-5220
DOI:10.19884/j.1672-5220.202201002