Fast-iTPN: Integrally Pre-Trained Transformer Pyramid Network with Token Migration

We propose the integrally pre-trained transformer pyramid network (iTPN), which jointly optimizes the network backbone and the neck so that the transfer gap between representation models and downstream tasks is minimized. iTPN features two elaborate designs: 1) the first pre-trained feature pyramid upon vision transformer (ViT); 2) multi-stage supervision to the feature pyramid using masked feature modeling (MFM). iTPN is updated to Fast-iTPN, which reduces computational memory overhead and accelerates inference through two flexible designs: 1) token migration, which drops redundant tokens of the backbone while replenishing them in the feature pyramid without attention operations; 2) token gathering, which reduces the computation cost caused by global attention by introducing a few gathering tokens. The base/large-level Fast-iTPN models achieve 88.75%/89.5% top-1 accuracy on ImageNet-1K. With a 1× training schedule using DINO, the base/large-level Fast-iTPN achieves 58.4%/58.8% box AP on COCO object detection, and 57.5%/58.7% mIoU on ADE20K semantic segmentation using MaskDINO. Fast-iTPN can accelerate inference by up to 70% with negligible performance loss, demonstrating its potential as a powerful backbone for downstream vision tasks. The code is available at github.com/sunsmarterjie/iTPN.
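
A minimal sketch of the two efficiency mechanisms named in the abstract, assuming PyTorch and ViT-style (B, N, C) token tensors. The function and class names (migrate_tokens, replenish, TokenGathering) and the norm-based token-importance score are illustrative assumptions, not the paper's actual implementation:

    # Minimal sketch (assumption: PyTorch, ViT-style (B, N, C) token tensors).
    # Names and the norm-based importance score are hypothetical; the paper's
    # real token-migration criterion may differ.
    import torch
    import torch.nn as nn

    def migrate_tokens(tokens: torch.Tensor, keep_ratio: float = 0.7):
        """Token migration, step 1: drop the least informative backbone
        tokens, remembering where they came from."""
        B, N, C = tokens.shape
        n_keep = max(1, int(N * keep_ratio))
        score = tokens.norm(dim=-1)                     # hypothetical importance proxy
        order = score.argsort(dim=-1, descending=True)  # most important first
        keep_idx, drop_idx = order[:, :n_keep], order[:, n_keep:]
        def gather(idx):
            return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, C))
        return gather(keep_idx), gather(drop_idx), keep_idx, drop_idx

    def replenish(kept, dropped, keep_idx, drop_idx, n_total):
        """Token migration, step 2: re-insert the dropped tokens at their
        original positions for the feature pyramid -- a pure scatter,
        no attention operations involved."""
        B, _, C = kept.shape
        out = kept.new_zeros(B, n_total, C)
        out.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, C), kept)
        out.scatter_(1, drop_idx.unsqueeze(-1).expand(-1, -1, C), dropped)
        return out

    class TokenGathering(nn.Module):
        """Token gathering: a few learnable tokens summarize the sequence,
        so patch tokens avoid the full N x N global-attention cost."""
        def __init__(self, dim: int, n_gather: int = 8, n_heads: int = 4):
            super().__init__()
            self.gather = nn.Parameter(torch.zeros(1, n_gather, dim))
            self.pool = nn.MultiheadAttention(dim, n_heads, batch_first=True)
            self.broadcast = nn.MultiheadAttention(dim, n_heads, batch_first=True)

        def forward(self, x):                  # x: (B, N, C)
            g = self.gather.expand(x.size(0), -1, -1)
            g, _ = self.pool(g, x, x)          # gather tokens attend to all patches
            x, _ = self.broadcast(x, g, g)     # patches attend only to gather tokens
            return x

The point of the sketch is the complexity argument: replenishment is a scatter with no attention, and routing patch-token interaction through G gathering tokens replaces the O(N²) global-attention term with O(N·G).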

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - PP(2024), 24 July
Main Author: Tian, Yunjie (Author)
Other Authors: Xie, Lingxi, Qiu, Jihao, Jiao, Jianbin, Wang, Yaowei, Tian, Qi, Ye, Qixiang
Format: Online Article
Language: English
Published: 2024
Access to parent work: IEEE transactions on pattern analysis and machine intelligence
Subjects: Journal Article
LEADER 01000naa a22002652 4500
001 NLM375344063
003 DE-627
005 20240726233016.0
007 cr uuu---uuuuu
008 240726s2024 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2024.3429508  |2 doi 
028 5 2 |a pubmed24n1482.xml 
035 |a (DE-627)NLM375344063 
035 |a (NLM)39046859 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Tian, Yunjie  |e verfasserin  |4 aut 
245 1 0 |a Fast-iTPN  |b Integrally Pre-Trained Transformer Pyramid Network with Token Migration 
264 1 |c 2024 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Revised 24.07.2024 
500 |a published: Print-Electronic 
500 |a Citation Status Publisher 
520 |a We propose the integrally pre-trained transformer pyramid network (iTPN), which jointly optimizes the network backbone and the neck so that the transfer gap between representation models and downstream tasks is minimized. iTPN features two elaborate designs: 1) the first pre-trained feature pyramid upon vision transformer (ViT); 2) multi-stage supervision to the feature pyramid using masked feature modeling (MFM). iTPN is updated to Fast-iTPN, which reduces computational memory overhead and accelerates inference through two flexible designs: 1) token migration, which drops redundant tokens of the backbone while replenishing them in the feature pyramid without attention operations; 2) token gathering, which reduces the computation cost caused by global attention by introducing a few gathering tokens. The base/large-level Fast-iTPN models achieve 88.75%/89.5% top-1 accuracy on ImageNet-1K. With a 1× training schedule using DINO, the base/large-level Fast-iTPN achieves 58.4%/58.8% box AP on COCO object detection, and 57.5%/58.7% mIoU on ADE20K semantic segmentation using MaskDINO. Fast-iTPN can accelerate inference by up to 70% with negligible performance loss, demonstrating its potential as a powerful backbone for downstream vision tasks. The code is available at github.com/sunsmarterjie/iTPN. 
650 4 |a Journal Article 
700 1 |a Xie, Lingxi  |e verfasserin  |4 aut 
700 1 |a Qiu, Jihao  |e verfasserin  |4 aut 
700 1 |a Jiao, Jianbin  |e verfasserin  |4 aut 
700 1 |a Wang, Yaowei  |e verfasserin  |4 aut 
700 1 |a Tian, Qi  |e verfasserin  |4 aut 
700 1 |a Ye, Qixiang  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g PP(2024) vom: 24. Juli  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnns 
773 1 8 |g volume:PP  |g year:2024  |g day:24  |g month:07 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2024.3429508  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d PP  |j 2024  |b 24  |c 07