CS2DIPs : Unsupervised HSI Super-Resolution Using Coupled Spatial and Spectral DIPs
Published in: | IEEE transactions on image processing : a publication of the IEEE Signal Processing Society. - 1992. - 33(2024), from 01., pages 3090-3101 |
---|---|
Author: | |
Additional authors: | |
Format: | Online article |
Language: | English |
Published: | 2024 |
Parent work: | IEEE transactions on image processing : a publication of the IEEE Signal Processing Society |
Keywords: | Journal Article |
Abstract: | In recent years, fusing high spatial resolution multispectral images (HR-MSIs) and low spatial resolution hyperspectral images (LR-HSIs) has become a widely used approach for hyperspectral image super-resolution (HSI-SR). Various unsupervised HSI-SR methods based on deep image prior (DIP) have gained wide popularity because they require no pre-training. However, DIP-based methods often perform poorly at extracting latent information from the data. To resolve this deficiency, we propose a coupled spatial and spectral deep image priors (CS2DIPs) method for fusing an HR-MSI and an LR-HSI into an HR-HSI. Specifically, we integrate nonnegative matrix-vector tensor factorization (NMVTF) into the DIP framework to jointly learn the abundance tensor and the spectral signature matrix. The two coupled DIPs are designed to capture essential spatial and spectral features in parallel from the observed HR-MSI and LR-HSI, respectively; these features then guide the generation of the abundance tensor and spectral signature matrix, which are fused into the HR-HSI by the mode-3 tensor product while taking inherent physical constraints into account. Requiring no training data, the proposed CS2DIPs can effectively capture rich spatial and spectral information. As a result, it exhibits markedly better performance and convergence speed than most existing DIP-based methods. Extensive experiments demonstrate its state-of-the-art overall performance, including comparisons with benchmark peer methods. |
Description: | Date Revised 03.05.2024; published: Print-Electronic; Citation Status: PubMed-not-MEDLINE |
ISSN: | 1941-0042 |
DOI: | 10.1109/TIP.2024.3390582 |
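The fusion step described in the abstract, forming the HR-HSI as the mode-3 tensor product of an abundance tensor and a spectral signature matrix, can be sketched in NumPy. All shapes and variable names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative shapes (assumptions, not from the paper):
# H x W spatial grid, R endmembers, L spectral bands.
H, W, R, L = 16, 16, 5, 31

rng = np.random.default_rng(0)
A = rng.random((H, W, R))  # abundance tensor (nonnegative)
S = rng.random((L, R))     # spectral signature matrix (nonnegative)

# Mode-3 tensor product: X[i, j, l] = sum_r A[i, j, r] * S[l, r],
# yielding an H x W x L hyperspectral cube.
X = np.einsum('ijr,lr->ijl', A, S)

# Equivalent formulation via mode-3 unfolding: matricize A along the
# third mode, multiply by S, and fold back.
X_unf = (A.reshape(H * W, R) @ S.T).reshape(H, W, L)
assert np.allclose(X, X_unf)
print(X.shape)  # (16, 16, 31)
```

Because both factors are nonnegative, the reconstructed cube is nonnegative as well, which is one of the inherent physical constraints the abstract alludes to.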