Self-Supervised Masked Graph Autoencoder for Hyperspectral Anomaly Detection

Bibliographic details
Published in: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - PP(2025), dated 16 Oct.
Main author: Tu, Bing (Author)
Other authors: He, Baoliang, He, Yan, Zhou, Tao, Liu, Bo, Li, Jun, Plaza, Antonio
Format: Online article
Language: English
Published: 2025
In collection: IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society
Subjects: Journal Article
Description
Abstract: Hyperspectral image anomaly detection is challenging because anomalous targets are difficult to annotate. Autoencoder (AE)-based methods are widely used due to their excellent image reconstruction capability. However, traditional grid-based image representation methods struggle to capture long-range dependencies and to model non-Euclidean structures. To address these issues, this paper proposes a self-supervised Masked Graph AutoEncoder (MGAE) for hyperspectral anomaly detection. MGAE uses a Graph Attention Network (GAT) autoencoder to reconstruct the background of hyperspectral images and identifies anomalies by comparing the reconstructed features with the original features. Specifically, we construct a topological graph structure of the hyperspectral image, which is then input into the GAT autoencoder for reconstruction, leveraging the multi-head attention mechanism to learn spatial and spectral features. To prevent the decoder from learning trivial solutions, we introduce a re-masking strategy that randomly masks both the input features and the hidden representations during training, forcing the model to learn and reconstruct features under limited information and thereby improving detection performance. Additionally, the proposed loss function with graph Laplacian regularization (Twice Loss) minimizes variations in feature representations, leading to more consistent background reconstruction. Experimental results on several real-world hyperspectral datasets demonstrate that MGAE outperforms existing methods.
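
To make the pipeline described in the abstract concrete, the following is a minimal sketch (not the authors' released code) of a masked graph autoencoder of this kind, written in PyTorch with PyTorch Geometric's GATConv. The k-nearest-neighbour graph construction, the layer sizes, the masking ratio, the Laplacian weighting, and all helper names (build_knn_graph, MaskedGraphAE, laplacian_term, train_step, anomaly_scores) are illustrative assumptions; the paper's exact architecture and the precise form of its Twice Loss are not reproduced here.

```python
# Minimal sketch of a masked graph autoencoder for hyperspectral anomaly
# detection. Illustrative only; all hyperparameters and helper names are
# assumptions, not the paper's specification.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GATConv


def build_knn_graph(x, k=10):
    """Connect each pixel (graph node) to its k most spectrally similar pixels.

    x: (N, B) matrix of N pixels with B spectral bands.
    Returns edge_index of shape (2, N*k), as expected by PyG layers.
    """
    dist = torch.cdist(x, x)                                # pairwise spectral distances
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]    # drop the self-match
    src = torch.arange(x.size(0)).repeat_interleave(k)
    return torch.stack([src, knn.reshape(-1)], dim=0)


class MaskedGraphAE(nn.Module):
    """GAT encoder/decoder with a re-masking step between them (assumed form)."""

    def __init__(self, bands, hidden=64, heads=4, mask_ratio=0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.enc = GATConv(bands, hidden, heads=heads, concat=False)
        self.dec = GATConv(hidden, bands, heads=1, concat=False)

    def _mask(self, x):
        if not self.training:                                # no masking at inference
            return x
        keep = torch.rand(x.size(0), device=x.device) > self.mask_ratio
        return x * keep.unsqueeze(1)                         # zero out masked nodes

    def forward(self, x, edge_index):
        h = F.elu(self.enc(self._mask(x), edge_index))       # mask the input features
        h = self._mask(h)                                    # re-mask the hidden codes
        return self.dec(h, edge_index)


def laplacian_term(recon, edge_index):
    """Graph-smoothness penalty: neighbouring nodes should reconstruct similarly."""
    src, dst = edge_index
    return (recon[src] - recon[dst]).pow(2).sum(dim=1).mean()


def train_step(model, optimizer, x, edge_index, lam=0.1):
    """One self-supervised step: reconstruction error plus Laplacian regularization."""
    model.train()
    optimizer.zero_grad()
    recon = model(x, edge_index)
    loss = F.mse_loss(recon, x) + lam * laplacian_term(recon, edge_index)
    loss.backward()
    optimizer.step()
    return loss.item()


def anomaly_scores(model, x, edge_index):
    """Per-pixel reconstruction error used as the anomaly score."""
    model.eval()
    with torch.no_grad():
        recon = model(x, edge_index)
    return (recon - x).pow(2).sum(dim=1)
```

The detection logic in this sketch follows the abstract's description: the model is trained to reconstruct the (background-dominated) graph under masking, so background pixels are reconstructed well while anomalous pixels yield large per-pixel reconstruction errors, which serve as anomaly scores.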
Description: Date Revised 16.10.2025
Published: Print-Electronic
Citation Status: Publisher
ISSN: 1941-0042
DOI: 10.1109/TIP.2025.3620091