Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias


Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - PP(2024), 12 Sept.
Main Author: Shi, Zhihao (Author)
Other Authors: Wang, Jie, Lu, Fanghua, Chen, Hanzhu, Lian, Defu, Wang, Zheng, Ye, Jieping, Wu, Feng
Format: Online Article
Language: English
Published: 2024
Access to Parent Work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM377518352
003 DE-627
005 20240913233025.0
007 cr uuu---uuuuu
008 240913s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3459408  |2 doi 
028 5 2 |a pubmed24n1532.xml 
035 |a (DE-627)NLM377518352 
035 |a (NLM)39264793 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Shi, Zhihao  |e verfasserin  |4 aut 
245 1 0 |a Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 13.09.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a Node representation learning on attributed graphs, whose nodes are associated with rich attributes (e.g., texts and protein sequences), plays a crucial role in many important downstream tasks. To encode the attributes and graph structures simultaneously, recent studies integrate pre-trained models with graph neural networks (GNNs), where the pre-trained models serve as node encoders (NEs) to encode the attributes. As jointly training large NEs and GNNs on large-scale graphs suffers from severe scalability issues, many methods propose to train NEs and GNNs separately. Consequently, they do not take the feature convolutions in GNNs into consideration during the training phase of NEs, leading to a significant learning bias relative to joint training. To address this challenge, we propose an efficient label regularization technique, namely Label Deconvolution (LD), which alleviates the learning bias via a novel and highly scalable approximation to the inverse mapping of GNNs. The inverse mapping yields an objective function equivalent to that of joint training, while effectively incorporating GNNs into the training phase of NEs to counter the learning bias. More importantly, we show that LD converges to the optimal objective function values of joint training under mild assumptions. Experiments demonstrate that LD significantly outperforms state-of-the-art methods on the Open Graph Benchmark datasets. 
650 4 |a Journal Article 
700 1 |a Wang, Jie  |e verfasserin  |4 aut 
700 1 |a Lu, Fanghua  |e verfasserin  |4 aut 
700 1 |a Chen, Hanzhu  |e verfasserin  |4 aut 
700 1 |a Lian, Defu  |e verfasserin  |4 aut 
700 1 |a Wang, Zheng  |e verfasserin  |4 aut 
700 1 |a Ye, Jieping  |e verfasserin  |4 aut 
700 1 |a Wu, Feng  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g PP(2024) vom: 12. Sept.  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:PP  |g year:2024  |g day:12  |g month:09 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3459408  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2024  |b 12  |c 09