Sentinel-2A Image Fusion Using a Machine Learning Approach

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, Vol. 57, No. 12, pp. 9589–9601
Main Authors: Wang, Jing; Huang, Bo; Zhang, Hankui K.; Ma, Peifeng
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2019

More Information
Summary: The multispectral instrument (MSI) carried by Sentinel-2A has 13 spectral bands at three spatial resolutions (four 10-m, six 20-m, and three 60-m bands). A wide range of applications requires 10-m resolution for all spectral bands, including the 20- and 60-m bands. To meet this requirement, previous studies used conventional pansharpening techniques, which require a 10-m panchromatic (PAN) band simulated from the four 10-m bands [blue, green, red, and near infrared (NIR)]. The simulated PAN band may not retain all the information in the original four bands and may have no spectral response function overlapping the 20- or 60-m bands to be sharpened, which can degrade fusion quality. This paper presents a machine learning method that directly uses the information from multiple 10-m bands for fusion. The method first learns the spectral relationship between the 20- or 60-m band to be sharpened and the selected 10-m bands degraded to 20 or 60 m, using a support vector regression (SVR) model. The model is then applied to the selected 10-m bands to predict a 10-m-resolution version of the 20- or 60-m band. The image degradation process was tuned to closely match the Sentinel-2A MSI modulation transfer function (MTF). We applied our method to three data sets in Guangzhou, China; New South Wales, Australia; and St. Louis, USA, and achieved better fusion results than other commonly used pansharpening methods in both visual and quantitative assessments.
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2019.2927766
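
As a rough illustration of the workflow described in the summary, the sketch below degrades the selected 10-m bands to the coarse resolution with a Gaussian blur standing in for the MSI MTF, fits a support vector regression between the degraded bands and the band to be sharpened, and then applies the model at 10 m. This is a minimal sketch under stated assumptions, not the paper's implementation: the band arrays, the Gaussian sigma, the resampling factors, and the SVR hyperparameters are placeholders rather than the values used by the authors.

```python
# Minimal sketch of the SVR-based band sharpening described in the summary,
# assuming numpy arrays for the Sentinel-2A bands. The Gaussian blur is only
# a stand-in for the MSI MTF; the sigma, resampling factors, and SVR
# hyperparameters are illustrative placeholders, not values from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVR


def degrade_to_coarse(band_10m, factor=2, mtf_sigma=1.0):
    """Blur a 10-m band with a Gaussian PSF (MTF proxy) and decimate
    to the coarse grid (factor=2 for 20 m, factor=6 for 60 m)."""
    blurred = gaussian_filter(band_10m, sigma=mtf_sigma)
    return blurred[::factor, ::factor]


def sharpen_band(coarse_band, fine_bands_10m, factor=2, mtf_sigma=1.0):
    """Predict a 10-m version of a 20-m (or 60-m) band.

    coarse_band    : 2-D array of the band to be sharpened; assumed to align
                     with the decimated 10-m grid (same shape after decimation).
    fine_bands_10m : list of 2-D 10-m arrays (e.g. blue, green, red, NIR).
    """
    # 1. Degrade the selected 10-m bands to the resolution of the coarse band.
    degraded = [degrade_to_coarse(b, factor, mtf_sigma) for b in fine_bands_10m]

    # 2. Learn the spectral relationship at the coarse resolution.
    #    (In practice the training pixels would be subsampled, since fitting
    #    an SVR on every pixel of a full scene is slow.)
    X_train = np.stack([d.ravel() for d in degraded], axis=1)
    y_train = coarse_band.ravel()
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
    model.fit(X_train, y_train)

    # 3. Apply the model to the original 10-m bands to predict the band at 10 m.
    X_full = np.stack([b.ravel() for b in fine_bands_10m], axis=1)
    return model.predict(X_full).reshape(fine_bands_10m[0].shape)
```

For example, a 20-m band could be sharpened with sharpen_band(b11_20m, [b2, b3, b4, b8]), where b11_20m and the 10-m arrays are hypothetical reflectance rasters loaded elsewhere; a 60-m band would use factor=6 and an appropriate MTF sigma for that band.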