|
|
|
|
LEADER |
01000caa a22002652 4500 |
001 |
NLM365815543 |
003 |
DE-627 |
005 |
20240408232154.0 |
007 |
cr uuu---uuuuu |
008 |
231226s2024 xx |||||o 00| ||eng c |
024 |
7 |
|
|a 10.1109/TPAMI.2023.3342184
|2 doi
|
028 |
5 |
2 |
|a pubmed24n1369.xml
|
035 |
|
|
|a (DE-627)NLM365815543
|
035 |
|
|
|a (NLM)38090829
|
040 |
|
|
|a DE-627
|b ger
|c DE-627
|e rakwb
|
041 |
|
|
|a eng
|
100 |
1 |
|
|a Wang, Kun
|e verfasserin
|4 aut
|
245 |
1 |
0 |
|a Brave the Wind and the Waves
|b Discovering Robust and Generalizable Graph Lottery Tickets
|
264 |
|
1 |
|c 2024
|
336 |
|
|
|a Text
|b txt
|2 rdacontent
|
337 |
|
|
|a Computermedien
|b c
|2 rdamedia
|
338 |
|
|
|a Online-Ressource
|b cr
|2 rdacarrier
|
500 |
|
|
|a Date Revised 08.04.2024
|
500 |
|
|
|a published: Print-Electronic
|
500 |
|
|
|a Citation Status PubMed-not-MEDLINE
|
520 |
|
|
|a The training and inference of Graph Neural Networks (GNNs) are costly when scaling up to large-scale graphs. Graph Lottery Ticket (GLT) has presented the first attempt to accelerate GNN inference on large-scale graphs by jointly pruning the graph structure and the model weights. Though promising, GLT encounters robustness and generalization issues when deployed in real-world scenarios, which are also long-standing and critical problems in deep learning. In real-world scenarios, the distribution of unseen test data is typically diverse. We attribute the failures on out-of-distribution (OOD) data to the incapability of discerning causal patterns, which remain stable amidst distribution shifts. In traditional sparse graph learning, the model performance deteriorates dramatically as the graph/network sparsity exceeds a certain high level. Worse still, the pruned GNNs are hard to generalize to unseen graph data due to the limited training set at hand. To tackle these issues, we propose the Resilient Graph Lottery Ticket (RGLT) to find more robust and generalizable GLTs in GNNs. Concretely, we reactivate a fraction of weights/edges by instantaneous gradient information at each pruning point. After sufficient pruning, we conduct environmental interventions to extrapolate potential test distributions. Finally, we perform the last several rounds of model averaging to further improve generalization. We provide multiple examples and theoretical analyses that underpin the universality and reliability of our proposal. Further, RGLT has been experimentally verified across various independent identically distributed (IID) and out-of-distribution (OOD) graph benchmarks.
|
650 |
|
4 |
|a Journal Article
|
700 |
1 |
|
|a Liang, Yuxuan
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Xinglin
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Li, Guohao
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Ghanem, Bernard
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zimmermann, Roger
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhou, Zhengyang
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Yi, Huahui
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Zhang, Yudong
|e verfasserin
|4 aut
|
700 |
1 |
|
|a Wang, Yang
|e verfasserin
|4 aut
|
773 |
0 |
8 |
|i Enthalten in
|t IEEE transactions on pattern analysis and machine intelligence
|d 1979
|g 46(2024), 5 vom: 05. Apr., Seite 3388-3405
|w (DE-627)NLM098212257
|x 1939-3539
|7 nnns
|
773 |
1 |
8 |
|g volume:46
|g year:2024
|g number:5
|g day:05
|g month:04
|g pages:3388-3405
|
856 |
4 |
0 |
|u http://dx.doi.org/10.1109/TPAMI.2023.3342184
|3 Volltext
|
912 |
|
|
|a GBV_USEFLAG_A
|
912 |
|
|
|a SYSFLAG_A
|
912 |
|
|
|a GBV_NLM
|
912 |
|
|
|a GBV_ILN_350
|
951 |
|
|
|a AR
|
952 |
|
|
|d 46
|j 2024
|e 5
|b 05
|c 04
|h 3388-3405
|