A GPU-Based Processing Chain for Linearly Unmixing Hyperspectral Images


Bibliographic Details
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 10, No. 3, pp. 818-834
Main Authors: Martel, Ernestina; Guerra, Raul; Lopez, Sebastian; Sarmiento, Roberto
Format: Journal Article
Language: English
Published: IEEE, 01.03.2017

Summary: Linear spectral unmixing is currently one of the most active research topics within the hyperspectral imaging community, as evidenced by the vast number of papers on this challenging task in the scientific literature. A subset of these works is devoted to accelerating previously published unmixing algorithms for applications under tight time constraints. For this purpose, hyperspectral unmixing algorithms are typically implemented on high-performance computing architectures that execute the operations involved in parallel, which reduces the time required to unmix a given hyperspectral image with respect to the sequential versions of these algorithms. The speedup factors achievable on these high-performance computing platforms depend heavily on the inherent level of parallelism of the algorithms executed on them. However, the majority of state-of-the-art unmixing algorithms were not originally conceived to be parallelized at a later stage, which clearly restricts the amount of acceleration that can be reached. As advanced hyperspectral sensors attain increasingly high spatial, spectral, and temporal resolutions, it becomes mandatory to follow a new approach: developing a class of highly parallel unmixing solutions that can take full advantage of the characteristics of today's high-performance computing architectures. This paper represents a step in this direction, proposing a new parallel algorithm for fully unmixing a hyperspectral image together with its implementation on two different NVIDIA graphics processing units (GPUs). The results obtained reveal that our proposal unmixes hyperspectral images with very different spatial patterns and sizes better and much faster than the best GPU-based unmixing chains published to date, independently of the characteristics of the selected GPU.
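The linear mixing model behind this line of work can be sketched in a few lines of NumPy. The matrix names (E, A, Y), the problem sizes, and the use of plain unconstrained least squares below are illustrative assumptions only; the paper's actual chain involves endmember extraction, constrained abundance estimation, and a GPU-parallel implementation.

```python
import numpy as np

# Linear mixing model: each pixel spectrum y is modeled as y = E a + n,
# where E holds one endmember spectrum per column, a the per-pixel
# abundances, and n additive noise.
rng = np.random.default_rng(0)

bands, endmembers, pixels = 50, 3, 100
E = rng.random((bands, endmembers))  # synthetic endmember signatures

# Synthetic abundances that are non-negative and sum to one per pixel.
A_true = rng.dirichlet(np.ones(endmembers), size=pixels).T  # (endmembers, pixels)
Y = E @ A_true + 0.001 * rng.standard_normal((bands, pixels))  # mixed pixels

# Simplest inversion step: unconstrained least squares per pixel.
# Real unmixing chains enforce non-negativity and sum-to-one constraints;
# note that each pixel's solve is independent of the others, which is
# exactly the data parallelism a GPU implementation exploits.
A_est, *_ = np.linalg.lstsq(E, Y, rcond=None)

print(np.abs(A_est - A_true).max())  # stays small, since the noise is tiny
```

Because every column of Y can be inverted independently, this step maps naturally onto thousands of GPU threads, which is the kind of inherent parallelism the abstract argues most legacy unmixing algorithms lack.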
ISSN: 1939-1404
DOI: 10.1109/JSTARS.2016.2614842