Decouple Graph Neural Networks: Train Multiple Simple GNNs Simultaneously Instead of One

Graph neural networks (GNNs) suffer from severe inefficiency because node dependencies grow exponentially with the number of layers. This severely limits the applicability of stochastic optimization algorithms, so training a GNN is usually time-consuming. To address this problem, we propose to decouple a multi-layer GNN into multiple simple modules for more efficient training, combining classical forward training (FT) with a purpose-built backward training (BT). Under the proposed framework, each module is simple enough to be trained efficiently in FT by stochastic algorithms without distorting the graph information. To move beyond the purely unidirectional information delivery of FT and to sufficiently train shallow modules together with deeper ones, we develop a backward training mechanism, inspired by the classical backpropagation algorithm, that lets earlier modules perceive later ones. Backward training thus adds reversed information delivery to the decoupled modules alongside the forward delivery. To investigate how decoupling and greedy training affect the representational capacity, we theoretically prove that the error produced by linear modules does not accumulate on unsupervised tasks in most cases. Theoretical and experimental results show that the proposed framework is highly efficient while achieving reasonable performance, and may deserve further investigation.
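To make the decoupling scheme above concrete, here is a minimal PyTorch sketch of one plausible reading of the abstract: a stack of single-layer GNN modules trained greedily front-to-back (FT), then revisited back-to-front through a frozen successor so that earlier modules perceive later ones (BT). Everything in it is an illustrative assumption, not the authors' implementation: the names SimpleGNNModule, forward_train, and backward_train, the unsupervised Gram-matrix proxy loss, and the full-batch toy graph (the paper's point is that each simple module admits efficient stochastic, i.e. mini-batch, training).

    # Illustrative sketch only; names, proxy loss, and toy graph are assumptions,
    # not the implementation from DOI 10.1109/TPAMI.2024.3392782.
    import torch
    import torch.nn as nn

    class SimpleGNNModule(nn.Module):
        """One-layer graph convolution: H' = ReLU(A_hat @ H @ W)."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim, bias=False)

        def forward(self, a_hat, h):
            return torch.relu(self.linear(a_hat @ h))

    def forward_train(modules, a_hat, x, epochs=50, lr=1e-2):
        """Greedy forward training (FT): each module trains on the frozen
        output of its predecessor, here with an unsupervised proxy loss
        that preserves the geometry of the smoothed input."""
        h = x
        for module in modules:
            opt = torch.optim.Adam(module.parameters(), lr=lr)
            target = a_hat @ h  # assumed self-supervised target
            for _ in range(epochs):
                opt.zero_grad()
                out = module(a_hat, h)
                loss = ((out @ out.T) - (target @ target.T)).pow(2).mean()
                loss.backward()
                opt.step()
            h = module(a_hat, h).detach()  # frozen input for the next module
        return h

    def backward_train(modules, a_hat, x, epochs=20, lr=1e-3):
        """Backward training (BT): revisit modules from deep to shallow and
        fine-tune each through its frozen successor, so information from
        later modules flows back to earlier ones."""
        inputs = [x]
        with torch.no_grad():
            for module in modules[:-1]:
                inputs.append(module(a_hat, inputs[-1]))
        for i in range(len(modules) - 2, -1, -1):
            shallow, deep = modules[i], modules[i + 1]
            for p in deep.parameters():
                p.requires_grad_(False)  # gradients reach shallow only
            opt = torch.optim.Adam(shallow.parameters(), lr=lr)
            target = a_hat @ inputs[i]
            for _ in range(epochs):
                opt.zero_grad()
                out = deep(a_hat, shallow(a_hat, inputs[i]))
                loss = ((out @ out.T) - (target @ target.T)).pow(2).mean()
                loss.backward()
                opt.step()
            for p in deep.parameters():
                p.requires_grad_(True)

    # Toy usage on a random symmetric graph with self-loops.
    n, d = 100, 16
    a = torch.rand(n, n)
    a = ((a + a.T) > 1.6).float()
    a.fill_diagonal_(0.0)
    a = a + torch.eye(n)
    deg = a.sum(1).pow(-0.5)
    a_hat = deg[:, None] * a * deg[None, :]  # symmetric normalization
    x = torch.randn(n, d)
    modules = [SimpleGNNModule(d, d) for _ in range(3)]
    embeddings = forward_train(modules, a_hat, x)
    backward_train(modules, a_hat, x)

Running the snippet trains three decoupled modules in both directions; each module's optimization problem stays shallow, which is what makes per-module stochastic training feasible in the paper's framework. Swapping the proxy loss for a supervised or contrastive objective would not change the control flow.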

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46(2024), issue 11, 01 Oct., pages 7451-7462
Main Author: Zhang, Hongyuan (Author)
Other Authors: Zhu, Yanan, Li, Xuelong
Format: Online Article
Language: English
Published: 2024
Access to parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM371415411
003 DE-627
005 20241004232042.0
007 cr uuu---uuuuu
008 240424s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3392782  |2 doi 
028 5 2 |a pubmed24n1557.xml 
035 |a (DE-627)NLM371415411 
035 |a (NLM)38652618 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Zhang, Hongyuan  |e verfasserin  |4 aut 
245 1 0 |a Decouple Graph Neural Networks  |b Train Multiple Simple GNNs Simultaneously Instead of One 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 03.10.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Graph neural networks (GNNs) suffer from severe inefficiency because node dependencies grow exponentially with the number of layers. This severely limits the applicability of stochastic optimization algorithms, so training a GNN is usually time-consuming. To address this problem, we propose to decouple a multi-layer GNN into multiple simple modules for more efficient training, combining classical forward training (FT) with a purpose-built backward training (BT). Under the proposed framework, each module is simple enough to be trained efficiently in FT by stochastic algorithms without distorting the graph information. To move beyond the purely unidirectional information delivery of FT and to sufficiently train shallow modules together with deeper ones, we develop a backward training mechanism, inspired by the classical backpropagation algorithm, that lets earlier modules perceive later ones. Backward training thus adds reversed information delivery to the decoupled modules alongside the forward delivery. To investigate how decoupling and greedy training affect the representational capacity, we theoretically prove that the error produced by linear modules does not accumulate on unsupervised tasks in most cases. Theoretical and experimental results show that the proposed framework is highly efficient while achieving reasonable performance, and may deserve further investigation.
650 4 |a Journal Article 
700 1 |a Zhu, Yanan  |e verfasserin  |4 aut 
700 1 |a Li, Xuelong  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 11 vom: 01. Okt., Seite 7451-7462  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:46  |g year:2024  |g number:11  |g day:01  |g month:10  |g pages:7451-7462 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3392782  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 11  |b 01  |c 10  |h 7451-7462