High-Quality Fusion and Visualization for MR-PET Brain Tumor Images via Multi-Dimensional Features
Published in: | IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society. - 1992. - Vol. 33 (2024), pp. 3550-3563 |
Author: | |
Other authors: | |
Format: | Online article |
Language: | English |
Published: | 2024 |
Access to parent work: | IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society |
Keywords: | Journal Article |
Abstract: | The fusion of magnetic resonance imaging and positron emission tomography can combine biological anatomical information with physiological metabolic information, which is of great significance for the clinical diagnosis and localization of lesions. In this paper, we propose a novel adaptive linear fusion method for multi-dimensional features of brain magnetic resonance and positron emission tomography images based on a convolutional neural network, termed MdAFuse. First, in the feature extraction stage, three feature extraction modules are constructed to extract coarse, fine, and multi-scale features from the source images. Second, in the fusion stage, an affine mapping function of the multi-dimensional features is established to maintain a constant geometric relationship between the features, which effectively exploits structural information from the feature maps to achieve a better reconstruction. Furthermore, MdAFuse includes a key-feature visualization enhancement algorithm designed to observe the dynamic growth of brain lesions, which can facilitate the early diagnosis and treatment of brain tumors. Extensive experimental results demonstrate that our method is superior to existing fusion methods in terms of visual perception and nine objective image fusion metrics. Specifically, in the MR-PET fusion results, the SSIM (Structural Similarity) and VIF (Visual Information Fidelity) metrics improve by 5.61% and 13.76%, respectively, over the current state-of-the-art algorithm. Our project is publicly available at: https://github.com/22385wjy/MdAFuse |
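To make the fusion step described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of an adaptive linear (affine) fusion of feature maps from two modalities. All names (SimpleExtractor, AffineFusion, the parameters a, b, c) are illustrative assumptions and are not taken from the authors' code; the actual MdAFuse implementation is available at the GitHub link above.

```python
# Hypothetical sketch: per-channel affine (adaptive linear) fusion of MR and
# PET feature maps. This is not the MdAFuse implementation, only an
# illustration of the general idea under assumed names and shapes.
import torch
import torch.nn as nn


class SimpleExtractor(nn.Module):
    """Toy coarse/fine feature extractor (stand-in for the paper's modules)."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.coarse = nn.Conv2d(1, channels, kernel_size=7, padding=3)
        self.fine = nn.Conv2d(1, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum of a large-kernel (coarse) and small-kernel (fine) response.
        return torch.relu(self.coarse(x)) + torch.relu(self.fine(x))


class AffineFusion(nn.Module):
    """Fuse two feature maps with a learned per-channel affine combination:
    fused = a * feat_mr + b * feat_pet + c."""

    def __init__(self, channels: int):
        super().__init__()
        self.a = nn.Parameter(torch.full((1, channels, 1, 1), 0.5))
        self.b = nn.Parameter(torch.full((1, channels, 1, 1), 0.5))
        self.c = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, feat_mr: torch.Tensor, feat_pet: torch.Tensor) -> torch.Tensor:
        return self.a * feat_mr + self.b * feat_pet + self.c


if __name__ == "__main__":
    mr = torch.rand(1, 1, 256, 256)   # grayscale MR slice (dummy data)
    pet = torch.rand(1, 1, 256, 256)  # co-registered PET slice (dummy data)
    extractor = SimpleExtractor(channels=16)
    fusion = AffineFusion(channels=16)
    fused = fusion(extractor(mr), extractor(pet))
    # A decoder network would then reconstruct the fused image from these features.
    print(fused.shape)  # torch.Size([1, 16, 256, 256])
```

Because the combination is linear with learned coefficients, the relative geometry of structures in the two feature maps is preserved, which is the property the abstract attributes to the affine mapping stage.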
Description: | Date Completed: 04.06.2024; Date Revised: 05.06.2024; Published: Print-Electronic; Citation Status: MEDLINE |
ISSN: | 1941-0042 |
DOI: | 10.1109/TIP.2024.3404660 |