Point-to-Pixel Prompting for Point Cloud Analysis With Pre-Trained Image Models

Nowadays, pre-training big models on large-scale datasets has achieved great success and dominated many downstream tasks in natural language processing and 2D vision, while pre-training in 3D vision is still under development. In this paper, we provide a new perspective on transferring the pre-trained knowledge from the 2D domain to the 3D domain with Point-to-Pixel Prompting in data space and Pixel-to-Point distillation in feature space, exploiting shared knowledge in images and point clouds that depict the same visual world. Following the principle of prompt engineering, Point-to-Pixel Prompting transforms point clouds into colorful images with geometry-preserved projection and geometry-aware coloring. Pre-trained image models can then be applied directly to point cloud tasks without structural changes or weight modifications. With projection correspondence in feature space, Pixel-to-Point distillation further regards the pre-trained image model as a teacher and distills pre-trained 2D knowledge into student point cloud models, remarkably enhancing inference efficiency and model capacity for point cloud analysis. We conduct extensive experiments on both object classification and scene segmentation under various settings to demonstrate the superiority of our method. In object classification, we reveal the important scale-up trend of Point-to-Pixel Prompting and attain 90.3% accuracy on the ScanObjectNN dataset, surpassing previous literature by a large margin. In scene-level semantic segmentation, our method outperforms traditional 3D analysis approaches and shows competitive capacity in dense prediction tasks.
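The core data-space idea described above (Point-to-Pixel Prompting) amounts to projecting a 3D point cloud onto a 2D image plane and coloring pixels by a geometric attribute so a pre-trained image model can consume it. The sketch below is a minimal, hypothetical illustration of that idea, assuming a simple orthographic projection onto the XY plane and depth-based coloring; the paper's actual geometry-preserved projection and geometry-aware coloring are more sophisticated.

```python
import numpy as np

def point_to_pixel(points, img_size=64):
    """Project an (N, 3) point cloud to an (img_size, img_size, 3) image.

    A stand-in for Point-to-Pixel Prompting: orthographic projection onto
    the XY plane, with normalized depth (z) used as a grayscale color.
    """
    # Normalize the cloud into roughly [-1, 1]^3 around its centroid.
    pts = points - points.mean(axis=0)
    pts = pts / (np.abs(pts).max() + 1e-8)

    # Map x, y coordinates to integer pixel indices.
    u = ((pts[:, 0] * 0.5 + 0.5) * (img_size - 1)).astype(int)
    v = ((pts[:, 1] * 0.5 + 0.5) * (img_size - 1)).astype(int)

    # Depth in [0, 1]; draw far points first so near points overwrite them.
    depth = pts[:, 2] * 0.5 + 0.5
    order = np.argsort(-depth)

    img = np.zeros((img_size, img_size, 3), dtype=np.float32)
    img[v[order], u[order]] = depth[order, None]
    return img

cloud = np.random.rand(1024, 3)
img = point_to_pixel(cloud)
print(img.shape)  # (64, 64, 3)
```

The resulting image tensor has the same layout a pre-trained 2D backbone expects, which is what lets the prompting scheme reuse image models without architectural changes.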

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 46 (2024), no. 6, 16 May, pages 4381-4397
Main Author: Wang, Ziyi (author)
Other Authors: Rao, Yongming, Yu, Xumin, Zhou, Jie, Lu, Jiwen
Format: Online article
Language: English
Published: 2024
Access to parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000caa a22002652 4500
001 NLM367179652
003 DE-627
005 20240508232302.0
007 cr uuu---uuuuu
008 240116s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3354961  |2 doi 
028 5 2 |a pubmed24n1401.xml 
035 |a (DE-627)NLM367179652 
035 |a (NLM)38227416 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Wang, Ziyi  |e verfasserin  |4 aut 
245 1 0 |a Point-to-Pixel Prompting for Point Cloud Analysis With Pre-Trained Image Models 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 07.05.2024 
500 |a published: Print-Electronic 
500 |a Citation Status PubMed-not-MEDLINE 
520 |a Nowadays, pre-training big models on large-scale datasets has achieved great success and dominated many downstream tasks in natural language processing and 2D vision, while pre-training in 3D vision is still under development. In this paper, we provide a new perspective of transferring the pre-trained knowledge from 2D domain to 3D domain with Point-to-Pixel Prompting in data space and Pixel-to-Point distillation in feature space, exploiting shared knowledge in images and point clouds that display the same visual world. Following the principle of prompting engineering, Point-to-Pixel Prompting transforms point clouds into colorful images with geometry-preserved projection and geometry-aware coloring. Then the pre-trained image models can be directly implemented for point cloud tasks without structural changes or weight modifications. With projection correspondence in feature space, Pixel-to-Point distillation further regards pre-trained image models as the teacher model and distills pre-trained 2D knowledge to student point cloud models, remarkably enhancing inference efficiency and model capacity for point cloud analysis. We conduct extensive experiments in both object classification and scene segmentation under various settings to demonstrate the superiority of our method. In object classification, we reveal the important scale-up trend of Point-to-Pixel Prompting and attain 90.3% accuracy on ScanObjectNN dataset, surpassing previous literature by a large margin. In scene-level semantic segmentation, our method outperforms traditional 3D analysis approaches and shows competitive capacity in dense prediction tasks.
650 4 |a Journal Article 
700 1 |a Rao, Yongming  |e verfasserin  |4 aut 
700 1 |a Yu, Xumin  |e verfasserin  |4 aut 
700 1 |a Zhou, Jie  |e verfasserin  |4 aut 
700 1 |a Lu, Jiwen  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 46(2024), 6 vom: 16. Mai, Seite 4381-4397  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:46  |g year:2024  |g number:6  |g day:16  |g month:05  |g pages:4381-4397 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3354961  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 46  |j 2024  |e 6  |b 16  |c 05  |h 4381-4397