Self-Similarity Constrained Sparse Representation for Hyperspectral Image Super-Resolution
Published in: | IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - (2018), 12 July |
---|---|
Main author: | |
Other authors: | , |
Format: | Online article |
Language: | English |
Published: | 2018 |
Access to the parent work: | IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society |
Keywords: | Journal Article |
Abstract: | Fusing a low-resolution hyperspectral image with the corresponding high-resolution multispectral image to obtain a high-resolution hyperspectral image is an important technique for capturing comprehensive scene information in both spatial and spectral domains. Existing approaches adopt a sparsity-promoting strategy and encode the spectral information of each pixel independently, which results in noisy sparse representations. We propose a novel hyperspectral image super-resolution method based on a self-similarity constrained sparse representation. We exploit similar patch structures across the whole image, as well as pixels with close appearance in local regions, to create global-structure groups and local-spectral super-pixels. By enforcing similarity of the sparse representations for pixels belonging to the same group and super-pixel, we alleviate the effect of outliers in the learned sparse coding. Experimental results on benchmark datasets validate that the proposed method outperforms state-of-the-art methods in both quantitative metrics and visual quality. |
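The core idea in the abstract, encoding each pixel's spectrum sparsely over a dictionary while forcing pixels in the same similarity group to share close codes, can be illustrated with a minimal sketch. This is not the authors' algorithm: the dictionary `D`, the ISTA solver, the penalty `lam`, and the group-averaging step are all simplified stand-ins for the paper's self-similarity constraint, chosen only to show the mechanism.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_codes(Y, D, lam=0.1, n_iter=200):
    """ISTA for min_A ||Y - D A||_F^2 + lam ||A||_1.

    Each column of Y is one pixel's spectrum; each column of A is its
    sparse code over the (hypothetical) spectral dictionary D.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - Y)
        A = soft_threshold(A - grad / L, lam / L)
    return A

def enforce_group_similarity(A, groups):
    """Crude self-similarity constraint: replace each pixel's code by the
    mean code of its group, suppressing outlier codes within the group."""
    A = A.copy()
    for idx in groups:
        A[:, idx] = A[:, idx].mean(axis=1, keepdims=True)
    return A

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))           # toy 8-band spectral dictionary
Y = rng.standard_normal((8, 6))            # spectra of 6 pixels
A = sparse_codes(Y, D)
# Hypothetical similarity groups: pixels {0,1,2} and {3,4,5}.
A_sim = enforce_group_similarity(A, [[0, 1, 2], [3, 4, 5]])
```

In the paper the coupling is imposed during optimization rather than by post-hoc averaging, but the sketch conveys why shared codes within a group reduce the noise of per-pixel independent coding.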
---|---|
Description: | Date revised: 27.02.2024; published: Print-Electronic; citation status: Publisher |
ISSN: | 1941-0042 |
DOI: | 10.1109/TIP.2018.2855418 |