LEADER |
01000caa a22002652 4500 |
001 |
NLM371457351 |
003 |
DE-627 |
005 |
20240503232630.0 |
007 |
cr uuu---uuuuu |
008 |
240426s2024 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TIP.2024.3390582
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1396.xml
|
035 |
|
|
|a (DE-627)NLM371457351
|
035 |
|
|
|a (NLM)38656842
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Fang, Yuan
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a CS2DIPs
|b Unsupervised HSI Super-Resolution Using Coupled Spatial and Spectral DIPs
|
264 |
|
1 |
|c 2024
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 03.05.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a In recent years, fusing high spatial resolution multispectral images (HR-MSIs) with low spatial resolution hyperspectral images (LR-HSIs) has become a widely used approach to hyperspectral image super-resolution (HSI-SR). Unsupervised HSI-SR methods based on the deep image prior (DIP) have gained wide popularity because they require no pre-training. However, DIP-based methods often perform only moderately well at extracting latent information from the data. To resolve this deficiency, we propose a coupled spatial and spectral deep image priors (CS2DIPs) method for fusing an HR-MSI and an LR-HSI into an HR-HSI. Specifically, we integrate nonnegative matrix-vector tensor factorization (NMVTF) into the DIP framework to jointly learn the abundance tensor and the spectral feature matrix. The two coupled DIPs are designed to capture essential spatial and spectral features in parallel from the observed HR-MSI and LR-HSI, respectively; these features then guide the generation of the abundance tensor and the spectral signature matrix, which are combined by the mode-3 tensor product to produce the super-resolved HSI while inherent physical constraints are taken into account. Free from any training data, the proposed CS2DIPs can effectively capture rich spatial and spectral information. As a result, it exhibits markedly better performance and faster convergence than most existing DIP-based methods. Extensive experiments, including comparisons with benchmark peer methods, demonstrate its state-of-the-art overall performance.
|
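Note: the fusion step summarized in the abstract above reduces, at its core, to a mode-3 tensor product between an abundance tensor and a spectral signature matrix. The following is a minimal illustrative sketch, not the authors' implementation; the array shapes, variable names, and the nonnegativity clipping are assumptions made only for illustration.

import numpy as np

# Minimal sketch of the mode-3 tensor-product fusion described in the abstract.
# Assumed (hypothetical) shapes:
#   abundance : (H, W, R)  -- abundance tensor from the spatial DIP branch
#   spectra   : (S, R)     -- spectral signature matrix from the spectral DIP branch
# The fused HR-HSI is Z[h, w, s] = sum_r abundance[h, w, r] * spectra[s, r].

def mode3_fuse(abundance: np.ndarray, spectra: np.ndarray) -> np.ndarray:
    """Fuse the abundance tensor and spectral matrix via the mode-3 product."""
    # Enforce nonnegativity (assumed handling of the physical constraints
    # mentioned in the abstract).
    abundance = np.clip(abundance, 0.0, None)
    spectra = np.clip(spectra, 0.0, None)
    return np.einsum('hwr,sr->hws', abundance, spectra)

# Toy usage with random data (H=64, W=64, R=8 endmembers, S=100 bands).
Z = mode3_fuse(np.random.rand(64, 64, 8), np.random.rand(100, 8))
print(Z.shape)  # (64, 64, 100)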
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Liu, Yipeng
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Chi, Chong-Yung
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Long, Zhen
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhu, Ce
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
|d 1992
|g 33(2024) vom: 01., Seite 3090-3101
|w (DE-627)NLM09821456X
|x 1941-0042
|7 nnns
|
773 |
1 |
8 |
|g volume:33
|g year:2024
|g day:01
|g pages:3090-3101
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TIP.2024.3390582
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 33
|j 2024
|b 01
|h 3090-3101
|