Shared Growth of Graph Neural Networks Via Prompted Free-Direction Knowledge Distillation

Knowledge distillation (KD) has been shown to be effective in boosting the performance of graph neural networks (GNNs), where the typical objective is to distill knowledge from a deeper teacher GNN into a shallower student GNN. However, it is often quite challenging to train a satisfactory deeper GNN due to...
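To make the baseline setup the abstract refers to concrete, below is a minimal sketch of conventional teacher-to-student KD between GNNs: a deeper "teacher" GNN's softened logits supervise a shallower "student" GNN via a KL-divergence term. This illustrates the standard objective the paper builds on, not the paper's prompted free-direction method; the GCN layer, temperature T, weight alpha, and toy data are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GCNLayer(nn.Module):
        """One graph convolution on a dense, row-normalized adjacency matrix."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj):
            return self.lin(adj @ x)  # aggregate neighbors, then transform

    class GNN(nn.Module):
        """A stack of GCN layers; `depth` controls teacher vs. student capacity."""
        def __init__(self, in_dim, hid, out_dim, depth):
            super().__init__()
            dims = [in_dim] + [hid] * (depth - 1) + [out_dim]
            self.layers = nn.ModuleList(GCNLayer(a, b) for a, b in zip(dims, dims[1:]))

        def forward(self, x, adj):
            for layer in self.layers[:-1]:
                x = F.relu(layer(x, adj))
            return self.layers[-1](x, adj)  # per-node class logits

    def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        """Cross-entropy on labels plus KL between temperature-softened logits."""
        ce = F.cross_entropy(student_logits, labels)
        kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                      F.softmax(teacher_logits / T, dim=-1),
                      reduction="batchmean") * T * T
        return alpha * ce + (1 - alpha) * kl

    # Toy usage: 6 nodes, 8 features, 3 classes, random graph with self-loops.
    x = torch.randn(6, 8)
    adj = torch.eye(6) + (torch.rand(6, 6) > 0.7).float()
    adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalize
    labels = torch.randint(0, 3, (6,))

    teacher = GNN(8, 16, 3, depth=4)   # deeper teacher
    student = GNN(8, 16, 3, depth=2)   # shallower student
    with torch.no_grad():
        t_logits = teacher(x, adj)     # teacher is frozen in conventional KD
    loss = kd_loss(student(x, adj), t_logits, labels)
    loss.backward()

Note that this one-way scheme presupposes a well-trained deeper teacher, which is exactly the requirement the abstract identifies as hard to satisfy.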


Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - PP(2025), from: 18 Feb.
First author: Feng, Kaituo (Author)
Other authors: Miao, Yikun; Li, Changsheng; Yuan, Ye; Wang, Guoren
Format: Online article
Language: English
Published: 2025
Part of: IEEE transactions on pattern analysis and machine intelligence
Keywords: Journal Article